Targeted levels of intent recognition are reached faster by verifying intents before deployment than by adopting a corrective strategy after the fact.
Each organisation has a benchmark percentage of successful intent recognition for its chatbot or voicebot. The percentage of successfully recognised intents, or the percentage of none-intents, is often part of the main dashboard and tracked metrics.
Firstly, it needs to be noted that there will always be a certain percentage of none-intents.
Something to keep in mind: digital assistants are linked to an organisation and address a finite number of products and services. Hence out-of-domain queries will occur and might register as none-intents.
For instance, at a large mobile operator, our aim was to limit the percentage of none-intents to less than 10%.
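The tracked metric itself is straightforward to compute from a log of classified utterances. A minimal sketch, assuming a hypothetical log where each entry is the intent assigned to one utterance and `"none"` marks a none-intent:

```python
# Hypothetical log of classified utterances; "none" marks a none-intent.
log = ["check_balance", "none", "cancel_contract", "check_balance", "none"]

# The dashboard metric: share of conversations that fell outside known intents.
none_rate = log.count("none") / len(log)
print(f"none-intent rate: {none_rate:.0%}")  # 40% here, versus a < 10% target
```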
Classifying text and creating labels are standard procedures in the AI world. The challenge with digital assistants, though, is that the classification of user utterances cannot be an asynchronous process; it needs to be synchronous.
The live conversations need to be classified (assigned to intents) in real-time as the conversation unfolds.
Hence the chatbot needs to have the classifications/intents preloaded, having a good sense of what the ambit of user conversations might entail.
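To make this concrete, here is a minimal sketch of synchronous classification against preloaded intents. The intent names, example utterances, and similarity threshold are all hypothetical; a production system would use trained embeddings rather than the simple token-count cosine similarity used here, but the shape is the same: intents are loaded up front, and each live utterance is scored against them in real time, falling back to a none-intent when nothing matches well enough.

```python
from collections import Counter
from math import sqrt

# Hypothetical preloaded intents: each maps to a few example utterances.
INTENTS = {
    "check_balance": ["what is my balance", "how much data do I have left"],
    "cancel_contract": ["I want to cancel my contract", "terminate my subscription"],
}

def _vec(text):
    """Bag-of-words token counts (stand-in for a real embedding)."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(utterance, threshold=0.3):
    """Synchronously assign an utterance to the best preloaded intent,
    or to 'none' when nothing clears the similarity threshold."""
    u = _vec(utterance)
    best_intent, best_score = "none", 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = _cosine(u, _vec(example))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else "none"
```

An out-of-domain query such as "tell me a joke" scores below the threshold against every example and registers as `"none"`, which is exactly the none-intent behaviour described above.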
Intent classification is best performed using a corpus of text data. The text data should ideally be customer conversations, or utterances; it can also be transcribed audio.
This data is then grouped into semantically similar clusters; each cluster constitutes an intent and can be assigned a label (also referred to as an intent name).
These labelled intents can be considered ground truth in terms of intent coverage. This is also an effective way of solving for the long tail of the intent distribution.
Subsequently, a machine-learning process can be used in a “weak supervision” approach, where new text data is automatically assigned to the ground-truthed intents.
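The two steps above can be sketched together: cluster a corpus of utterances into candidate intents, then weakly label new text against the ground-truthed clusters. Everything here is illustrative, with the same token-count cosine similarity standing in for real embeddings, a greedy single-pass clustering standing in for a proper clustering algorithm, and hypothetical thresholds:

```python
from collections import Counter
from math import sqrt

def vec(text):
    """Bag-of-words token counts (stand-in for a real embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(utterances, threshold=0.4):
    """Greedy single-pass clustering: an utterance joins the first cluster
    whose seed it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (seed_vector, member_utterances)
    for u in utterances:
        v = vec(u)
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(u)
                break
        else:
            clusters.append((v, [u]))
    return clusters

def weak_label(utterance, labelled_clusters, threshold=0.4):
    """Weak supervision: assign new text to the closest ground-truthed
    intent, or flag it as a candidate new intent."""
    v = vec(utterance)
    best, best_score = None, 0.0
    for label, seed in labelled_clusters.items():
        score = cosine(v, seed)
        if score > best_score:
            best, best_score = label, score
    return best if best_score >= threshold else "candidate_new_intent"
```

Note that `weak_label` also covers the discovery case described later: an utterance that matches no ground-truthed intent is flagged as a candidate new intent rather than forced into an existing one.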
Considering the image above, key elements of data labelling are:
- Human-In-The-Loop methodology
- Accelerated AI-Assisted latent space
- Intelligent intent detection and management at scale
- Intent splitting, merging, hierarchical or nested intents, deactivation of intents
- Detecting intent confusion and disambiguation
- Setting intent granularity and cluster sizes
New utterances which are not related to any existing intent are clustered into separate groupings and marked as new, and hence constitute new intents.
This process can also be referred to as Intent Driven Design & Development.
Added to this, training data is often synthetically produced or thought up.
Subsequent to the chatbot launch, a catch-up process ensues in which the focus is placed on none-intents.
This negative approach misplaces the focus on none-intents (the conversations customers do not want to have) instead of placing it where it belongs: on establishing ground-truthed intents, the conversations customers do want to have.
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language: NLU design, evaluation & optimisation, data-centric prompt tuning, and LLM observability, evaluation & fine-tuning.