Walking before running in conversational AI.

April 27, 2021

Conversational AI is often equated with chatbots. Yet chatbots rarely do the "AI" part correctly (i.e., understanding user intents).

Statements like "great conversational AI needs to move beyond dumb 'FAQ-bots' to capture context, provide personalization, etc." are almost a mantra now in the conversational AI space. That makes sense, but only if the bots also provide a great "dumb" FAQ experience in the first place.

Unfortunately, the average chatbot is still littered with "sorry, I didn't understand that" responses, while wasting users' time with chit-chat and other filler dialog. It feels like a lot of "running before you can walk."

Luckily, I can point to chatbot experiences I've had recently where the bot understood all my questions and provided immediate value via answers or links to custom product experiences (for example, AO.com or the Quebec.ca Covid information bot). Almost invariably, these experiences felt closer to a "search" than a "conversation" (other than the fact that they lived in a chatbot "widget"), and it seems the teams behind them put more effort into scaling the natural language understanding (NLU) component than into building complex dialog flows.

This makes me wonder: is training NLU that can understand the long tail of your customers' intents a necessary building block for good conversational AI? And if so, why isn't the first step of any conversational AI project to curate this NLU from existing voice-of-the-customer data (e.g., search and help-center queries, customer support emails, live chat logs)?

I see major advantages in starting the digital transformation journey with the curation and training of this NLU, as opposed to immediately jumping into the end-user experience:

  • Starting with the NLU helps you understand exactly what intents and customer journeys your conversational AI will support, and validate that you have sufficiently high-quality data to train it from the start.
  • Knowing the scope of your AI's capabilities allows for better planning and execution of your tech roadmap. For example, "increase my credit limit" or "change my flight seats" is much faster to handle with a customized form or product experience than within a conversational flow; knowing which integrations or customer journeys will need to be built helps teams make better long-term technical decisions.
  • Seeing the NLU in action will spark ideas for how it can be applied to unlock other types of value across the organization. While building a chatbot might be the #1 reason for developing NLU today, it might become obvious that applying that same NLU to automatically tag customer support logs, or to improve the website's search results, are high-value initiatives that can be easily unlocked.
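To make the "NLU first" idea concrete, here is a minimal sketch of routing user queries against a set of curated intents, with a confidence threshold so the bot falls back gracefully instead of mis-answering. All queries and intent names below are hypothetical, and a production system would use a trained classifier rather than bag-of-words matching; the point is only that the intent inventory, curated from real customer data, is the core asset.

```python
import math
from collections import Counter

# Hypothetical labeled examples, as might be curated from help-center
# queries and chat logs (queries and intent labels are illustrative only).
TRAINING_DATA = [
    ("how do i raise my credit limit", "increase_credit_limit"),
    ("can you increase my card limit", "increase_credit_limit"),
    ("i want to change my seat on my flight", "change_flight_seats"),
    ("move me to a window seat", "change_flight_seats"),
    ("where is my order", "order_status"),
    ("track my package", "order_status"),
]

def vectorize(text):
    """Bag-of-words term-frequency vector for a query."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify(query, threshold=0.3):
    """Return the best-matching intent, or None when nothing is close
    enough -- i.e., fall back rather than guess."""
    qv = vectorize(query)
    best_intent, best_score = None, 0.0
    for example, intent in TRAINING_DATA:
        score = cosine(qv, vectorize(example))
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

Note that the fallback path (`None`) matters as much as the matches: a query outside the curated intent inventory should route to a human or a search experience, not a wrong answer.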

We developed HumanFirst because we saw a massive opportunity to provide the data engineering capabilities companies need to build and improve their NLU at scale, so I'm obviously tempted to say the gap we're seeing exists because teams lacked the appropriate tooling until now 🙂

I’m curious to hear from those of you building conversational AI today: do you see any trends or changes in your way of approaching the roadmap that confirm or invalidate these assumptions?

HumanFirst is like Excel, for Natural Language Data. A complete productivity suite to transform natural language into business insights and AI training data.
