Tutorials

Accelerating Data Analysis with HumanFirst and Google Cloud

Alex Dubois · January 24, 2024 · 4 min read

How to use HumanFirst with CCAI-generated data to accelerate data analysis.

The last article in this series explored a workflow to streamline topic modeling with HumanFirst and Google Cloud. After importing CCAI-generated data stored in BigQuery to HumanFirst, we engineered a prompt to extract the main call driver from each conversation, testing the prompt first on a subset of data and checking the source conversations to ensure proper performance. We then applied the prompt across hundreds of conversations to arrive quickly at a high-level overview of key customer issues. 

With an easy way to group and label the conversations by call driver, we now have organized groups of data on key customer issues that we can mine for a deeper analysis. 

At this stage, many teams would export the labeled data to a dashboard or quantitative analysis tool to look for trends, such as the average refund amount or the rate at which return queries are resolved successfully. But quantitative dashboards depend on uniform inputs. LLMs are stochastic and generative; they might express the same idea in different words from run to run. Teams risk losing time to programming and formatting if they jump straight from generated outputs to quantitative analytics.

The HumanFirst platform enables a faster, more foolproof qualitative analysis: prompts are used to explore the data within the platform itself. What follows is a full exploration of that workflow.

Accelerating Text Data Analysis

Use prompts to analyze contact center data at scale to speed up time-to-insight.

To learn more from our now-organized datasets, we can create a new prompt. Working on ‘refund’ data, we might try something like this:
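The prompt itself appears as an image in the original post; a hypothetical reconstruction (the four questions below are illustrative assumptions, not HumanFirst's exact wording) might read:

```text
For the conversation below, answer each question in one short phrase:
1. Was the customer satisfied by the end of the call? (Yes/No)
2. What was the reason for the refund request?
3. What was the refund amount?
4. Was the refund issued on this call? (Yes/No)

Conversation:
{conversation}
```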

The LLM will analyze each conversation and return answers to the questions listed. By clicking on a conversation, we can quickly understand its content across those four metrics.

If we ungroup the generated output by conversation and cluster by similarity instead, we can view the data along a new dimension. We'll have one group of conversations in which the customer was satisfied and another in which they weren't. Similarly, we'll see groups for different refund reasons and different refund amounts.

Because the cluster function groups by similarity, it accounts for all of the formatting variability inherent to LLM analysis. Whether the model answers 'YES,' 'yes,' or 'Yes' to the question of customer satisfaction, semantic similarity treats all of those answers the same.
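HumanFirst's cluster function presumably relies on semantic embeddings; as a rough stand-in, the stdlib-only Python sketch below groups generated answers by simple string similarity to illustrate why 'YES', 'yes', and 'Yes.' land in the same bucket (the function and thresholds here are illustrative assumptions, not HumanFirst's implementation):

```python
from difflib import SequenceMatcher

def cluster_answers(answers, threshold=0.8):
    """Greedy clustering: each answer joins the first cluster whose
    representative is similar enough, otherwise it starts a new cluster."""
    clusters = []  # list of (representative, members)
    for answer in answers:
        norm = answer.strip(" .").casefold()  # tame trivial formatting noise
        for rep, members in clusters:
            if SequenceMatcher(None, norm, rep).ratio() >= threshold:
                members.append(answer)
                break
        else:
            clusters.append((norm, [answer]))
    return clusters

# Variant spellings an LLM might emit for the same satisfaction answer.
generated = ["YES", "yes", "Yes.", "NO", "no"]
groups = cluster_answers(generated)
# Two semantic groups survive the formatting variability: one 'yes', one 'no'.
```

A real system would compare embedding vectors rather than character sequences, so paraphrases ("the customer was happy") would also cluster together, but the grouping logic is the same shape.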

We could choose to isolate all the conversations in which the customer wasn't satisfied in order to examine gaps in agent protocols. We can curate that group, move it to the stash, create a new label ('unsatisfied_refund'), and run prompts against this new subset of data to query it further.
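In code terms, that curation step amounts to filtering on the generated satisfaction answer and attaching the new label. HumanFirst does this through its UI; the minimal Python sketch below, with a hypothetical record shape, shows the equivalent operation:

```python
# Hypothetical per-conversation records carrying the generated answers.
conversations = [
    {"id": "c1", "satisfied": "Yes", "reason": "late delivery"},
    {"id": "c2", "satisfied": "no",  "reason": "damaged item"},
    {"id": "c3", "satisfied": "YES", "reason": "wrong size"},
]

# Normalize the generated answer, then stash the unsatisfied subset.
stash = [c for c in conversations
         if c["satisfied"].strip(" .").casefold() == "no"]
for convo in stash:
    convo.setdefault("labels", []).append("unsatisfied_refund")
# 'stash' is the labeled subset we can now query with further prompts.
```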

In a single, non-technical session, we've accelerated and expanded the analysis of an important customer issue. We spent no time preparing the technology to deliver those insights; all of our engagement was directly with the data. This agile, task-specific, and non-technical workflow will help teams keep pace with the daily influx of new data and insights. Working from stronger data foundations will help companies improve their prioritization decisions, reduce costs, and fix real problems faster than they otherwise could.

The next article in this series dives deeper into using prompts to assess successful and unsuccessful conversations to improve agent performance. Stay tuned!

HumanFirst + Google Cloud

HumanFirst is now available on Google Cloud Marketplace. Want to learn more about using HumanFirst and Google Cloud for your specific business need? Book a demo or reach out to our team!

HumanFirst is a data-centric productivity platform designed to help companies find and solve problems with AI-powered workflows that combine prompt and data engineering. Experiment with raw data, surface insights, and build reliable solutions with speed, accuracy, and trust.

