Haystack for Conversational AI

Here's how the Haystack NLP framework helps you handle the "long tail" of AI-powered chatbot intents.

Chatbots are great for conversation-based tasks like customer service or in-house information retrieval. Once set up, they can handle requests 24/7 and thus improve the customer experience while significantly reducing costs for companies.

However, most implementations of chatbots aren’t equipped to deal with unusual requests. By combining your conversational AI with Haystack’s semantic question answering (QA), you can offer your users a more informative and useful chatbot experience.

What Are Chatbots?

Chatbots are also known as “conversational agents.” They serve to provide assistance interactively — so instead of reading through a manual, users can get help in a conversation-like setting. The first chatbot appeared over fifty years ago. Thanks to advances in machine learning, chatbots are now widely employed, most commonly for customer service.

How Do Chatbots Work?

When a chatbot receives a message from a user, it applies a machine learning technique known as “intent classification.” Once the intent behind the query has been classified, the bot sends the appropriate response back to the user. As with many things in natural language processing (NLP), this sounds simple in theory, but is more complex when it comes to real-world implementations.

First, every chatbot works with a predefined set of user intents and matching responses. In a well-designed system, the predefined intents catch most queries submitted by users, but a small portion of queries still ends up in the long tail of undefined intents.

Second, most chatbots only trigger a response when the confidence value of their intent classifier is above a certain, usually quite high, threshold. This serves to avoid sending out wrong answers, which could lead to a frustrating user experience. In both cases, whether the intent is unknown or its confidence value is too low, the system either sends out an error message or routes the conversation to a human agent.
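The dispatch logic described above can be sketched in a few lines of Python. Here, `classify_intent` and `responses` are hypothetical stand-ins for your bot's intent classifier and its table of canned answers:

```python
def route_message(message, classify_intent, responses, threshold=0.9):
    """Return a canned response for a confidently classified, known intent;
    otherwise return None so the caller can fall back to an error message,
    a human agent, or a question answering pipeline."""
    intent, confidence = classify_intent(message)
    if confidence >= threshold and intent in responses:
        return responses[intent]
    return None  # undefined intent or low confidence

# Usage with a stubbed-out classifier:
responses = {"opening_hours": "We are open 9am to 5pm, Monday to Friday."}
classify = lambda msg: ("opening_hours", 0.97)
route_message("When are you open?", classify, responses)
```

The fallback case is deliberately a single `None` return: whatever handles undefined intents (human handover, QA pipeline) stays decoupled from the classifier itself.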

How Can Chatbots Benefit from Extractive QA?

Extractive question answering can cover the gaps in a chatbot’s predefined set of answers, without needing to involve a human. In extractive QA, a semantic search system extracts a text passage from a large database of documents to answer a given question. Queries and answers are matched semantically (on the basis of meaning) rather than lexically (by just comparing search terms). Our Haystack framework for composable NLP lets you build semantic search pipelines on top of your own knowledge base.
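As a minimal sketch of such a pipeline, assuming Haystack 1.x with its `InMemoryDocumentStore`, `BM25Retriever`, and `FARMReader` components; the document content and reader model shown here are placeholders you would replace with your own knowledge base and model choice:

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

# Index your own documents; each dict becomes a searchable passage.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    {"content": "Our support team is available Monday to Friday, 9am to 5pm."},
])

# The retriever narrows down candidate passages;
# the reader then extracts the exact answer span.
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
pipeline = ExtractiveQAPipeline(reader, retriever)

result = pipeline.run(
    query="When can I reach support?",
    params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 1}},
)
print(result["answers"][0].answer)
```

The retriever-reader split is what keeps this fast: cheap keyword or embedding retrieval filters the document store before the expensive reader model runs.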

So if you have a collection of texts that may contain relevant information for your users, those can form the basis for an extractive QA system, which you can easily expose via a REST API. Then you can simply route the undefined requests from your chatbot to the question-answering pipeline, and leverage the knowledge that’s hidden in your documents. 
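Once the chatbot forwards an undefined request and gets the pipeline's JSON back, picking a good answer comes down to checking the top answer's score. A minimal sketch, assuming the response shape of Haystack's `/query` REST endpoint (a list of `answers` sorted by descending `score`); the threshold value is an assumption you would tune:

```python
def best_answer(qa_response: dict, min_score: float = 0.5):
    """Return the top extracted answer from a Haystack /query response,
    or None if no answer is confident enough to show the user."""
    answers = qa_response.get("answers", [])
    if answers and answers[0].get("score", 0.0) >= min_score:
        return answers[0]["answer"]
    return None

# Example response, abbreviated to the fields used above:
response = {"answers": [{"answer": "Monday to Friday, 9am to 5pm", "score": 0.87}]}
best_answer(response)  # -> "Monday to Friday, 9am to 5pm"
```

Returning `None` for low-scoring answers keeps the same safety valve as the intent classifier: the conversation can still fall back to a human agent instead of showing a dubious extraction.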

Read the docs on chatbot integration and learn how to supercharge a chatbot built in Rasa with a question answering pipeline using Haystack.

What Else Can You Do with Haystack?

While extractive question answering is the most popular application of the Haystack framework, it is by no means the only one. Our framework takes a modular approach to NLP pipelines and lets you add nodes for translating to and from other languages, summarizing longer text passages, and much more. You can even throw a semantic FAQ search into the mix. To learn more about what Haystack NLP has to offer, have a look at our documentation or the blog.

Try Haystack now!