
Today’s data platforms collect large amounts of data. This data often includes both internal content (found in files, databases, and other internal systems) and public content (such as products, articles or other web content). When we want to make this data easily available to users, we provide a search box. If you think of all the websites and apps you interact with daily, a search box is likely one of the first places you go. Those searches and the follow-up interactions with the search results serve as an increasingly important yet often underused data source for improving the digital experience.

User interactions include things like searches, clicks, views, purchases, bookmarks, likes and reposts of content. We call these user interactions “signals,” as they signal a user’s intent and interests. Modern data systems can use AI to learn about users’ intent from these signals and then reflect back that knowledge to improve future search results. This process is called reflected intelligence.

REFLECTED INTELLIGENCE IS ALL ABOUT CREATING FEEDBACK LOOPS THAT CONSTANTLY LEARN AND IMPROVE, BASED ON EVOLVING USER INTERACTIONS.

Imagine you provide a search box for employees or customers to explore your data. When a user searches for some keywords, they see a set of search results. They then click on one or more results, or possibly run a different search. They may save or like a document, read an article or purchase a product. We record all these signals — the keywords, the results seen and the actions taken. AI can use these signals to learn patterns that can be applied on future searches to return better results.
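To make the recording step concrete, here is a minimal sketch of a signal log in Python. The `Signal` record and its field names are illustrative assumptions, not the schema of any particular product; the point is simply that each keyword search and follow-up action is captured as a timestamped event for later processing.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Signal:
    """One user interaction: a search, click, save, purchase, etc.
    Field names here are hypothetical, chosen for illustration."""
    user_id: str
    signal_type: str   # e.g. "query", "click", "save", "purchase"
    query: str         # the keywords the user searched for
    item_id: str = ""  # the result acted on, if any
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

signal_log: list[Signal] = []

def record(signal: Signal) -> None:
    """Append a user interaction to the log for later AI processing."""
    signal_log.append(signal)

# A user searches for "ipad", then clicks one of the results:
record(Signal("u1", "query", "ipad"))
record(Signal("u1", "click", "ipad", item_id="sku-123"))
```

In practice these events would flow into a database or event stream rather than an in-memory list, but the shape of the data is the same: who did what, for which query, on which item, and when.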

REFLECTED INTELLIGENCE FEEDBACK LOOP. As different users search for “ipad” and interact with similar items, AI can process these user signals to learn what most users expect to see as results for that query.

Consider a search process for the term “ipad” on a retail website:

1. The customer searches for “ipad.”

2. The search engine determines if there are known patterns or models that should be applied to the query to match the best results.

3. The search engine returns search results.

4. The customer takes some actions on the results. These may be clicking on certain results, saving them, adding them to a shopping cart, or purchasing products.

5. These searches and actions (signals) are processed with AI in order to learn patterns and build models to improve future searches.

6. The next time a search is run (step 1), the models (step 5) are applied.

With this reflected intelligence process, the search experience changes from providing a static set of answers to a dynamic and always improving set of results.
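The processing step (step 5) can be as simple as counting which items users engage with for each query, and the application step (step 2) as boosting those items in the keyword results. The sketch below assumes hypothetical item IDs and hand-made signals purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical recorded signals: (query, item_id) pairs from clicks/purchases.
signals = [
    ("ipad", "ipad-10th-gen"), ("ipad", "ipad-10th-gen"), ("ipad", "ipad-10th-gen"),
    ("ipad", "ipad-air"), ("ipad", "ipad-air"),
    ("ipad", "ipad-case-blue"),
]

def build_relevance_model(signals):
    """Step 5: aggregate signals into per-query item engagement counts."""
    model: dict[str, Counter] = defaultdict(Counter)
    for query, item_id in signals:
        model[query.lower()][item_id] += 1
    return model

def rerank(query, keyword_results, model):
    """Step 2: boost items that past users engaged with for this query."""
    boosts = model.get(query.lower(), Counter())
    return sorted(keyword_results, key=lambda item: boosts[item], reverse=True)

model = build_relevance_model(signals)
# Keyword matching alone put the case first; engagement counts fix that.
results = rerank("ipad", ["ipad-case-blue", "ipad-air", "ipad-10th-gen"], model)
```

Production systems use far more sophisticated machine learning than raw counts (learning to rank, normalization for position bias, decay of old signals), but the feedback loop has exactly this shape.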

THE IMPACT OF REFLECTED INTELLIGENCE.

An initial search for “ipad” returns keyword matches, but not what users expect. After using AI to learn from user signals, actual iPads become the top results.

In a static search system that doesn’t use reflected intelligence, we are likely to see results for “ipad cases,” “ipad chargers” and other items which contain the keyword but do not reflect our users’ real intent. By using AI to learn from our user signals, we can crowdsource the wisdom of our users and reflect it back to provide a better experience. In the case of our search for “ipad,” we now see actual tablets at the top of the search results, getting the customer to the purchase point even faster.

We can generalize this process of reflected intelligence as a continuous feedback loop where the user runs a search, sees results and takes action on those results, and the AI then takes those interactions, performs signal processing and machine learning, and creates learned relevance models that will then be applied the next time a user runs a search.

REFLECTED INTELLIGENCE PROCESS. A user runs a search, sees results, and takes actions. The AI then takes those actions, performs signal processing and machine learning, and creates learned relevance models that apply to future user searches.

While the previous example boosted products based upon user interactions, the AI can also learn many other kinds of patterns and models. For example:

• When many people search for a keyword and then change a few characters and search for a similar keyword, the AI can learn a common misspelling.

• When many people search for the same or similar keywords, the AI can learn your business terminology from these signals, including common phrases, entities and related synonyms.

• When customers who interact with one set of items commonly interact with the same other items, the AI can learn to recommend similar items.

• Finally, when a single person interacts with items sharing similar attributes, the AI can learn that person’s preferences (such as favorite brands, colors or topics).
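The third pattern above, learning related items from shared interactions, can be sketched with simple co-occurrence counting. The user histories below are invented for illustration; real recommenders add weighting and normalization on top of this core idea:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical per-user interaction histories (sets of item ids).
user_items = {
    "u1": {"ipad-air", "apple-pencil", "smart-cover"},
    "u2": {"ipad-air", "apple-pencil"},
    "u3": {"ipad-air", "smart-cover"},
    "u4": {"kindle"},
}

def cooccurrence(user_items):
    """Count how often each pair of items appears in the same user's history."""
    pairs: dict[str, Counter] = defaultdict(Counter)
    for items in user_items.values():
        for a, b in combinations(sorted(items), 2):
            pairs[a][b] += 1
            pairs[b][a] += 1
    return pairs

def recommend(item, pairs, k=2):
    """The k items most often interacted with alongside `item`."""
    return [other for other, _ in pairs[item].most_common(k)]

pairs = cooccurrence(user_items)
related = recommend("ipad-air", pairs)  # pencil and cover co-occur most
```

Because u4 never touched an iPad, the Kindle never co-occurs with it and is never recommended alongside it; the “wisdom of the crowd” emerges purely from overlapping behavior.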

LEARNED RELEVANCE MODELS. The AI can learn recommended items for users, personal preferences for users, related items for an item, and even most relevant items for a query, among many other insights.

These same techniques, applied to retail search here, can also be applied to any domain —including enterprise search — where you have people searching for content and interacting with it. As a chief data officer or data professional, your job is to help your organization manage and most effectively use its data. If you are not leveraging AI and reflected intelligence to learn from your user interactions and enhance your customer experience, you will find that spending some time and resources in this area can provide a very valuable return on investment.

Trey Grainger is the chief algorithms officer at Lucidworks, where he drives the vision and practical application of intelligent data science algorithms to power relevant search experiences for hundreds of the world’s biggest and brightest companies. He is the author of the books “AI-Powered Search” and “Solr in Action,” plus more than a dozen additional book chapters, journal articles, and research publications covering industry-leading approaches to semantic search, recommendation systems, and intelligent information retrieval systems.