HumanFirst enables you to derive insights from call center transcripts, chat logs, customer reviews, surveys, and other qualitative feedback.
While the last few years have seen an explosion in conversational AI platforms (e.g. Cognigy, Rasa), NLU API providers (e.g. DialogFlow, Watson, Nuance), and large-scale language models (e.g. OpenAI's GPT-3, Cohere), the tooling to explore, label, and curate the data fed into these platforms has not kept pace.
We saw teams turning to Excel, or building (and maintaining) their own tooling and processes to do this work. Both alternatives lead to inefficiency, frustration, and high costs, and delay projects' time to market.
Our vision at HumanFirst is to make the entire process of building NLU-ready datasets achievable regardless of skill set. We productize and maintain the most advanced data pipeline and platform to address this gap in the ML/AI tooling ecosystem.
HumanFirst is trusted today by hundreds of small, medium and enterprise customers.
No, HumanFirst is a general-purpose data management layer that integrates with tools like DialogFlow.
It provides a hyper-efficient environment for building and maintaining the data that powers NLU. For example, HumanFirst helps you find and create training data from your unlabeled data to improve your intents, and lets you organize and curate this data so it is easy to reuse across projects.
Finally, your unlabeled data can inform business decisions and development priorities: HumanFirst lets you explore your data with machine-learning-powered workflows that make this easy, regardless of technical skill.
Our pricing combines a per-user seat fee with a maximum quantity of data processed by the system. Some additional features are added on top of the base per-user fee and are billed once for the entire organization.
All plans include a 7-day free trial, during which you can try out the product without any limitations and cancel at any time, for any reason.
Your credit card will not be charged before the end of this period.
No, each user of the platform must have their own seat.
We provide discounts for customers who need to process large volumes of data, or who commit to yearly subscriptions. We also provide discounts on our regular pricing for early-stage companies and startups, as well as for non-profits.
Please contact us if any of these scenarios apply to you!
Every plan allows you to create unlimited workspaces and intents within HumanFirst, and to perform unlimited labeling, semantic search, clustering, and disambiguation of your data.
Your plan determines how many data points can be managed at any given time in HumanFirst. Your usage is an aggregate of all data points across workspaces.
Data points are single utterances, whether labeled or unlabeled.
In the case of conversational data, every input counts as a single data point.
Labeling utterances doesn't affect your usage, since you're not creating new utterances.
Deleting datasets frees up data points.
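To make the accounting concrete, here is a minimal sketch of how usage aggregates across workspaces. The workspace names and counts are hypothetical, not real HumanFirst output:

```python
# Illustrative sketch of data-point usage accounting.
# Workspace names and counts below are hypothetical.
workspaces = {
    "support-bot": {"labeled": 1200, "unlabeled": 8500},
    "sales-bot": {"labeled": 300, "unlabeled": 4000},
}

# Every utterance, labeled or unlabeled, is one data point,
# and usage is the aggregate across all workspaces.
usage = sum(w["labeled"] + w["unlabeled"] for w in workspaces.values())
print(usage)  # 14000

# Labeling moves utterances between categories without creating new
# data points; deleting a dataset frees its data points.
del workspaces["sales-bot"]
usage = sum(w["labeled"] + w["unlabeled"] for w in workspaces.values())
print(usage)  # 9700
```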
The advanced capabilities add-on includes features for tracking revisions of your data, and for incorporating trained NLU models and results into the HumanFirst experience.
With our NLU features, you can test the performance of your data in real time, directly within HumanFirst, against our own NLU and third-party providers (DialogFlow and Rasa are supported, with Watson available soon).
Contact us if you'd like to connect another third party NLU.
We can provide a fully managed, cloud-based on-premise version in the region of your choice (Google Cloud Platform is preferred). The on-premise version can also run in a fully air-gapped environment, without any connection to our servers.
We provide custom training, development and data engineering services for enterprise customers who commit to a minimum bank of 100 hours. Please contact us to learn more.
Within an intent, you can sort training phrases by confusion and get an actionable confusion matrix showing which intent(s) it is confused with. This unlocks a disambiguation workflow for easily re-assigning the confused phrases to the correct intent. The data used for this operation can be sourced from an automated cross-validation test with the NLU engine of your choice.
The results of cross-validation runs are available through our API.
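Conceptually, the data behind this workflow resembles a standard confusion matrix built from cross-validation predictions. Here is a minimal sketch in plain Python; the intent names and predictions are hypothetical, and this is not HumanFirst's actual API or data format:

```python
from collections import Counter

# Hypothetical cross-validation results: (true intent, predicted intent)
# for each held-out training phrase.
results = [
    ("billing", "billing"),
    ("billing", "refunds"),
    ("billing", "refunds"),
    ("refunds", "refunds"),
    ("refunds", "billing"),
]

# Count how often each true intent is predicted as each intent.
confusion = Counter(results)

def confused_with(intent):
    """List the intents this intent's phrases were mistaken for,
    most-confused first -- the basis of a disambiguation workflow."""
    return sorted(
        ((pred, n) for (true, pred), n in confusion.items()
         if true == intent and pred != intent),
        key=lambda item: -item[1],
    )

print(confused_with("billing"))  # [('refunds', 2)]
```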
Yes. Training phrases can be annotated with entities, and this entity information is carried over when exporting a workspace in a compatible format.
The sequence of actions performed between revisions can be exported; this includes created, edited, moved, and deleted intents, along with any related training phrases and the author of each change. These actions can be viewed online or exported through our command-line tool.
Intent data can be exported from the web UI, from our command-line tool, or via an HTTPS API call.
Through our command-line tool, users can easily script imports and exports from external version control systems. We recommend that customers use our CLI from their typical continuous integration workflows.
We do not support redaction within our platform, but we refer customers to open-source solutions like Microsoft Presidio to redact their data before importing it.