HumanFirst enables you to derive insights from call center transcripts, chat logs, customer reviews, surveys, and qualitative feedback.
While the number of conversational AI platforms has exploded over the last few years, the tooling to build, curate, and improve the AI training data that goes into these platforms has not kept pace.
We saw teams turning to Excel, or building (and maintaining) their own tooling and processes to do this work. Both of these alternatives lead to inefficiency, frustration, and high cost, and delay projects' time to market.
Our vision at HumanFirst is to make the entire process of discovering, training, and improving intents from raw natural language data productized, and doable regardless of skill set. We productize and maintain the most advanced data pipeline and platform to address this gap in the ML/AI tooling ecosystem.
HumanFirst is trusted today by hundreds of small and medium-sized customers, and used to accelerate development by enterprise companies like Five9, Desjardins, Chime and T-Mobile.
No! We don't replace your existing chatbot or NLU platform.
HumanFirst is a hyper efficient tool for building and maintaining the data that powers your chatbot or NLU.
For example, HumanFirst will help you find and create training data from your unlabeled data to make your intents better. But it doesn't stop there!
It also allows you to organize and curate this data to make it really easy to re-use across different projects, without resorting to Excel.
Finally, your unlabeled data (and especially voice-of-the-customer data) can inform business decisions and development priorities. HumanFirst allows you to explore your data with machine learning-powered workflows that make this easy, regardless of technical skill.
Our pricing combines a per-user seat fee with a maximum quantity of data processed by the system. Some additional features are added on top of the base per-user fee and are billed once for the entire organization.
All plans include a 7-day free trial, during which you can try out the product without any limitations and cancel at any time, for any reason.
Your credit card will not be charged before the end of this period.
No, each user of the platform must have their own seat.
We provide discounts for customers who need to process large volumes of data, or who commit to yearly subscriptions. We also provide discounts on our regular pricing for early-stage companies and startups, as well as for non-profits.
Please contact us if any of these scenarios apply to you!
Every plan allows you to create unlimited workspaces and intents within HumanFirst, and perform as much labeling, semantic searching, clustering, and disambiguation of your data as you need.
Your plan determines how many data points can be managed at any given time in HumanFirst. Your usage is an aggregate of all data points across workspaces.
Data points are single utterances, whether labeled or unlabeled.
In the case of conversational data, every input counts as a single data point.
Labeling utterances doesn't affect your usage, since you're not creating new utterances.
Deleting datasets frees up data points.
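The accounting above can be sketched in a few lines. This is an illustrative example with made-up numbers and a hypothetical workspace structure, not HumanFirst's actual data model:

```python
# Illustrative sketch (hypothetical numbers): usage is the total count of
# utterances, labeled or unlabeled, aggregated across all workspaces.
workspaces = {
    "support-bot": {"labeled": 1200, "unlabeled": 8800},
    "voice-of-customer": {"labeled": 300, "unlabeled": 4700},
}

usage = sum(c["labeled"] + c["unlabeled"] for c in workspaces.values())
print(usage)  # 15000

# Labeling moves an utterance between categories without changing the total;
# only importing new data (or deleting a dataset) changes usage.
workspaces["voice-of-customer"]["labeled"] += 100
workspaces["voice-of-customer"]["unlabeled"] -= 100
assert sum(c["labeled"] + c["unlabeled"] for c in workspaces.values()) == usage
```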
The advanced capabilities add-on includes features for tracking revisions of your data, and for incorporating trained NLU models and results into the HumanFirst experience.
With our NLU features, you can test the performance of your data in real time, directly within HumanFirst, against our own NLU and third-party providers (Rasa is supported for now, with Dialogflow and Watson available soon).
Contact us if you'd like to connect another third party NLU.
We can provide a fully managed, on-premise version in the cloud region of your choice (Google Cloud Platform is preferred). The on-premise version can also run in a fully air-gapped environment, without any connection to our servers.
We provide custom training, development and data engineering services for enterprise customers who commit to a minimum bank of 100 hours. Please contact us to learn more.
Within an intent, you can sort training phrases by confusion and get an actionable confusion matrix showing which intent(s) it is confused with. This unlocks a disambiguation workflow to easily re-assign confused phrases to the correct intent. The data used for this operation can be sourced from an automated cross-validation test with the NLU engine of your choice.
The results of cross-validation runs are available through our API.
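The underlying idea can be shown with a generic sketch (this is not HumanFirst's API; the intent names and predictions are made up). Each cross-validation prediction is a (true intent, predicted intent) pair, and tallying them yields the confusion matrix that drives the disambiguation workflow:

```python
from collections import Counter, defaultdict

# Illustrative cross-validation results: (true_intent, predicted_intent) pairs.
cv_results = [
    ("billing", "billing"),
    ("billing", "cancel_account"),
    ("billing", "cancel_account"),
    ("cancel_account", "cancel_account"),
    ("cancel_account", "billing"),
]

# Tally predictions per true intent to form a confusion matrix.
confusion = defaultdict(Counter)
for true_intent, predicted in cv_results:
    confusion[true_intent][predicted] += 1

# Intents that "billing" is confused with, excluding correct predictions:
confused_with = {p: n for p, n in confusion["billing"].items() if p != "billing"}
print(confused_with)  # {'cancel_account': 2}
```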
Yes. Training phrases can be annotated with entities, and this entity information is carried over when exporting a workspace in a compatible format.
A sequence of actions performed between revisions can be exported; this includes created, edited, moved, and deleted intents, along with any related training phrases and the author of each change. These actions can be viewed online, or exported through our command-line tool.
Intent data can be exported from the web UI, from our command-line tool, or via an HTTPS API call.
Our command-line tool includes a mechanism to extract conversation data (in Rasa's event log format), which our backend natively supports. A Rasa deployment (with a custom NLU pipeline) can be used in the product, and will power confusion metrics (through cross-validation) and active learning metrics (sorting by entropy, uncertainty, etc.) when dealing with unlabeled data.
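To illustrate the kind of data involved, here is a simplified sketch of pulling user utterances out of a Rasa-style event log. The JSON shown is a minimal subset of Rasa's tracker event fields with invented content; a real log contains many more event types and fields:

```python
import json

# Simplified, illustrative Rasa-style event log (invented data; only a
# subset of the fields Rasa actually emits).
event_log = json.loads("""
[
  {"event": "user", "text": "I want to cancel my subscription",
   "parse_data": {"intent": {"name": "cancel_account", "confidence": 0.91}}},
  {"event": "bot", "text": "I can help with that."},
  {"event": "user", "text": "yes please",
   "parse_data": {"intent": {"name": "affirm", "confidence": 0.97}}}
]
""")

# Keep only user turns; each one becomes a single data point (utterance).
utterances = [
    (e["text"], e["parse_data"]["intent"]["name"])
    for e in event_log
    if e.get("event") == "user"
]
print(utterances)
```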
Through our command-line tool, users can easily script imports and exports from external version control systems. We recommend that customers use our CLI from their typical continuous integration workflows.
We do not support redaction within our platform, but we refer customers to open-source solutions like Microsoft Presidio to redact their data before importing it.
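As a toy illustration of the pre-import redaction step, the sketch below replaces two PII patterns with placeholder tags using plain regular expressions. A production pipeline should use a purpose-built tool such as Presidio, which detects far more entity types than these two invented patterns:

```python
import re

# Toy redaction patterns (illustrative only; not Presidio's recognizers).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a placeholder tag like <EMAIL>."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Call me at 555-123-4567 or email jane.doe@example.com"))
# Call me at <PHONE> or email <EMAIL>
```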