Articles
May 25, 2023 · 2 MIN READ

Prior To Chatbot Deployment, It Is Essential that Intents are Ground-Truthed To Ensure Accuracy




COBUS GREYLING

Target levels of intent recognition are reached far sooner by ground-truthing intents before deployment than by adopting a corrective strategy after launch.

Intro

Each organisation has a benchmark percentage of successful intent recognition for its chatbot or voicebot. This percentage, together with the percentage of none-intents (utterances that match no intent), is typically part of the main dashboard and tracked metrics.

Firstly, it needs to be noted that there will always be a certain percentage of none-intents.

Something to keep in mind: a digital assistant is linked to an organisation and addresses a finite set of products and services. Out-of-domain queries will therefore occur and may register as none-intents.

For instance, at a large mobile operator, our aim was to limit the percentage of none-intents to < 10%.
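A target like this is easy to monitor from conversation logs. The sketch below is a minimal, hypothetical example of tracking the none-intent rate; the log structure and intent names are made up for illustration.

```python
# Hypothetical example: tracking the none-intent rate against a target.
def none_intent_rate(turns):
    """Fraction of user turns that resolved to no intent at all."""
    misses = sum(1 for t in turns if t["intent"] is None)
    return misses / len(turns)

# Illustrative conversation log; field names are assumptions.
turns = [
    {"utterance": "upgrade my plan", "intent": "upgrade_plan"},
    {"utterance": "what is the meaning of life", "intent": None},
    {"utterance": "check my balance", "intent": "check_balance"},
    {"utterance": "reset my password", "intent": "reset_password"},
]

rate = none_intent_rate(turns)
print(f"none-intent rate: {rate:.0%}")  # 25% here; the target above is < 10%
```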

The Challenge

Classifying text and creating labels is a standard procedure in the AI world. The challenge with digital assistants, however, is that classification of user utterances cannot be an asynchronous process; it needs to be synchronous.

Live conversations need to be classified (assigned to intents) in real time, as the conversation unfolds.

Hence the chatbot needs its classifications (intents) preloaded, with a good sense of what the ambit of user conversations might entail.
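A toy sketch of this synchronous setup follows. The intent inventory is preloaded before deployment, and each incoming utterance is classified the moment it arrives; the intent names and keyword sets are invented for illustration, and a production system would use a trained NLU model rather than keyword overlap.

```python
# Preloaded intent inventory (hypothetical intents and keywords).
PRELOADED_INTENTS = {
    "check_balance": {"balance", "account", "remaining"},
    "upgrade_plan": {"upgrade", "plan", "contract"},
    "cancel_service": {"cancel", "terminate", "stop"},
}

def classify(utterance):
    """Synchronously assign an utterance to the best preloaded intent,
    or return None when nothing matches (a none-intent)."""
    tokens = set(utterance.lower().split())
    best_intent, best_overlap = None, 0
    for intent, keywords in PRELOADED_INTENTS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent

print(classify("I want to upgrade my plan"))  # upgrade_plan
print(classify("tell me a joke"))             # None
```

The key point is structural: the classifier runs inline in the conversation loop, so the full set of intents must exist before the first live message arrives.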

Ground-Truthed Intents

Intent classification is best built from a corpus of text data. The text data should ideally be real customer conversations or utterances; transcribed audio also works.

This data is then grouped into semantically similar clusters; each cluster constitutes an intent and can be assigned a label (also referred to as an intent name).

These labelled intents can be considered ground truth for intent coverage. This is also an effective way of solving for the long tail of the intent distribution.
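The grouping step can be sketched as follows. Real tooling uses sentence embeddings in latent space; here a greedy pass with Jaccard similarity over word sets stands in, purely to illustrate how utterances fall into clusters that become labelled intents. The utterances and the 0.3 threshold are made up.

```python
def jaccard(a, b):
    """Similarity of two token sets: overlap divided by union."""
    return len(a & b) / len(a | b)

def cluster(utterances, threshold=0.3):
    """Greedily group utterances: join the first cluster whose seed
    utterance is similar enough, otherwise start a new cluster."""
    clusters = []  # each cluster is a list of (utterance, token_set)
    for utt in utterances:
        tokens = set(utt.lower().split())
        for c in clusters:
            if jaccard(tokens, c[0][1]) >= threshold:
                c.append((utt, tokens))
                break
        else:
            clusters.append([(utt, tokens)])
    return [[utt for utt, _ in c] for c in clusters]

utts = [
    "reset my password",
    "I forgot my password",
    "cancel my subscription",
    "please cancel my subscription today",
]
for group in cluster(utts):
    print(group)  # two clusters: password-reset and cancellation
```

Each resulting cluster would then be reviewed by a human and given an intent name, which is what makes it ground truth rather than a raw clustering artefact.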

Subsequently, a machine-learning process can apply a "weak supervision" approach in which new text data is automatically assigned to the ground-truthed intents.

Considering the image above, key elements of data labelling are:

  • Human-In-The-Loop methodology
  • Accelerated AI-Assisted latent space
  • Intelligent intent detection and management at scale
  • Intent splitting, merging, hierarchical or nested intents, deactivation of intents
  • Detecting intent confusion and disambiguation
  • Setting intent granularity and cluster sizes

New utterances which are not related to an existing intent are clustered into separate groupings and marked as new, hence constituting a new intent.
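The weak-supervision assignment and the new-intent fallback can be sketched together. New utterances are auto-assigned to the nearest ground-truthed intent, and anything below a similarity threshold is set aside as a candidate new intent for human review. The intent names, example utterances, and 0.25 threshold are illustrative; a real pipeline would score similarity with embeddings.

```python
# Hypothetical ground-truthed intents with their example utterances.
GROUND_TRUTH = {
    "reset_password": ["reset my password", "forgot my password"],
    "cancel_subscription": ["cancel my subscription"],
}

def overlap_score(tokens, examples):
    """Best Jaccard similarity against an intent's example utterances."""
    scores = []
    for ex in examples:
        ex_tokens = set(ex.lower().split())
        scores.append(len(tokens & ex_tokens) / len(tokens | ex_tokens))
    return max(scores)

def assign(utterance, threshold=0.25):
    """Weakly label an utterance with its nearest ground-truthed intent,
    or flag it as a candidate new intent when nothing is close enough."""
    tokens = set(utterance.lower().split())
    best = max(GROUND_TRUTH, key=lambda i: overlap_score(tokens, GROUND_TRUTH[i]))
    if overlap_score(tokens, GROUND_TRUTH[best]) >= threshold:
        return best
    return "NEW_INTENT_CANDIDATE"

print(assign("I need to reset my password"))  # reset_password
print(assign("where is my parcel"))           # NEW_INTENT_CANDIDATE
```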

This process can also be referred to as Intent Driven Design & Development.

Reactive Approach

Unfortunately, most chatbot implementations do not follow a data-centric approach to NLU design: intents are deduced from business requirements rather than real-world customer conversations.

Added to this, training data is often synthetically produced or simply thought up.

Subsequent to the chatbot launch, a catch-up process ensues in which the focus is placed on none-intents.

This reactive approach misplaces the focus on none-intents (the conversations customers do not want to have) instead of where it belongs: establishing ground-truthed intents, the conversations customers do want to have.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language: NLU design, evaluation and optimisation; data-centric prompt tuning; and LLM observability, evaluation and fine-tuning.
