Features
Data Insights
Explore your unstructured data and power business and AI intelligence.
NLU Design
Train, evaluate and continuously optimize custom NLU models using unstructured data.
NLG Design
Guarantee prompt performance and observe LLM input/output at scale (beta).
Resources
Blog
FAQs
Docs
Slack Community
GitHub
Partners
Google Cloud
Google Cloud and HumanFirst make it possible to design, test, and launch scalable AI prompts and models you can trust, using your unstructured data.
About
Company
Careers
LLM fine-tuning (coming soon)
Prepare labeled data to fine-tune LLMs

At the intersection of NLU, LLMs and natural language data

RAG Evaluation

Retrieval Augmented Generation (RAG) is a very popular framework, or class, of LLM application. The basic principle of RAG is to leverage external data sources to give LLMs contextual reference. I have recently written much on different RAG approaches and pipelines. But how can we evaluate, measure and quantify the performance of a RAG pipeline?

COBUS GREYLING
5 min read
Articles
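The retrieval step of a RAG pipeline can be scored with classic information-retrieval metrics against chunks labelled as relevant; a minimal sketch in Python (the helper name and chunk IDs are illustrative, not from the article):

```python
def retrieval_metrics(retrieved_ids, relevant_ids):
    """Precision and recall of one retrieval step against labelled relevant chunks."""
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: 2 of the 3 retrieved chunks are actually relevant.
p, r = retrieval_metrics(["c1", "c2", "c7"], ["c1", "c2", "c4"])
```

Averaging these scores over a labelled test set gives a first quantitative view of the retriever before the generation step is evaluated.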

Rivet Is An Open-Source Visual AI Programming Environment

Rivet is suited for building complex agents with LLM Prompts, and it was Open Sourced recently.

COBUS GREYLING
5 min read
Articles

What Is The Future Of Prompt Engineering?

The skill of Prompt Engineering has been touted as the ultimate skill of the future. But will prompt engineering be around in the near future? In this article I attempt to decompose what the future LLM interface might look like…considering it will be conversational.

COBUS GREYLING
6 min read
Articles

LLM Drift

A recent study coined the term LLM Drift: definite changes in LLM responses and behaviour over a relatively short period of time.

COBUS GREYLING
4 min read
Articles

LangChain Hub

A few days ago LangChain launched the LangChain Hub…

COBUS GREYLING
5 min read
Articles

Language Model Cascading & Probabilistic Programming Language

The term Language Model Cascading (LMC) was coined in July 2022, which seems like a lifetime ago considering the speed at which the LLM narrative arc develops…

COBUS GREYLING
4 min read
Articles

Comparing LLM Performance Against Prompt Techniques & Domain Specific Datasets

This study from August 2023 considers 10 different prompt techniques, over six LLMs and six data types.

COBUS GREYLING
4 min read
Articles

Does Submitting Long Context Solve All LLM Contextual Reference Challenges?

Large Language Models (LLMs) are known to hallucinate. Hallucination is when an LLM generates a highly succinct, highly plausible, but factually incorrect answer. Hallucination can be negated by injecting prompts with contextually relevant data which the LLM can reference.

COBUS GREYLING
4 min read
Articles

How Do Large Language Models Use Long Contexts?

And how to manage the performance and cost of large context input to LLMs.

COBUS GREYLING
4 min read
Articles

RAG & LLM Context Size

In this article I consider the growing context windows of various Large Language Models (LLMs), to what extent they can be used, and how a principle like RAG applies.

COBUS GREYLING
Articles

Two LLM Based Autonomous Agents Debate Each Other

This working code example using the LangChain framework illustrates how two agents can debate each other after each agent has been assigned a persona and an objective. The agents have access to tools which they can leverage for their response.

COBUS GREYLING
4 min read
Articles

GPTBot

OpenAI’s Web Crawler

COBUS GREYLING
3 min read
Articles

Iterative Prompting Pre-Trained Large Language Models

As opposed to model fine-tuning, prompt engineering is a lightweight alternative that ensures the prompt holds contextual information via an iterative process.

COBUS GREYLING
3 min read
Articles

OpenAI Discontinued Their AI Classifier For Identifying AI-Written Text

A while ago I took human- and AI-generated text from various sources, including LLMs, and submitted it to the OpenAI Classifier. The objective was to gauge the classifier’s ability to detect the origin of text content.

COBUS GREYLING
7 min read
Articles

Emerging Large Language Model (LLM) Application Architecture

Due to the highly unstructured nature of Large Language Models (LLMs), shifts are taking place in thinking and in the market on how to implement LLMs.

COBUS GREYLING
4 min read
Articles

12 Prompt Engineering Techniques

Prompt Engineering can be described as an art form: creating input requests for Large Language Models (LLMs) that will lead to an envisaged output. Here are twelve different techniques for crafting a single prompt or a sequence of prompts.

COBUS GREYLING
7 min read
Articles

Prompt Tuning, Hard Prompts & Soft Prompts

Prompt Engineering is the method of accessing Large Language Models (LLMs), hence LLM-based implementations like Pipelines, Agents, Prompt Chaining & more are all premised on some form of Prompt Engineering.

COBUS GREYLING
6 min read
Articles

Plan-And-Solve Prompting

The notion of fine-tuning a Large Language Model (LLM) for very specific generative use-cases is in most instances not feasible. However, due to the flexibility of LLMs, variations in Prompt Engineering can yield astounding results. This article covers a new prompt method which improves LLM results in accuracy and completeness.

COBUS GREYLING
5 min read
Articles

These Are The Updates To ChainForge

ChainForge is an IDE for prompt engineering and a number of important improvements were made to the tool.

COBUS GREYLING
5 min read
Articles

Eight Prompt Engineering Implementations

In essence the discipline of Prompt Engineering is very simple and accessible. But as the LLM landscape develops, prompts are becoming programmable and incorporated into more complex Generative AI structures.

COBUS GREYLING
6 min read
Articles

Flowise Now Has Custom Tools With OpenAI Function Calling

In the latest Flowise version, Custom Tools are introduced together with OpenAI Function Calling. In this article I cover a few practical implementations.

COBUS GREYLING
5 min read
Articles

Prompt Chaining & Large Language Models

What are the underlying requirements driving the need for prompt chaining? What defines prompt chaining and what are the essentials of a robust prompt chaining development tool?

COBUS GREYLING
5 min read
Articles
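The core mechanic of prompt chaining, feeding one prompt's output into the next prompt's input, can be sketched in a few lines (the stub LLM and template strings are hypothetical placeholders for a real LLM API call):

```python
def run_chain(llm, templates, user_input):
    """Each step's prompt is built from the previous step's output."""
    text = user_input
    for template in templates:
        text = llm(template.format(input=text))
    return text

# Stub LLM so the sketch runs offline; a real chain would call an LLM API here.
stub_llm = lambda prompt: f"<{prompt}>"
result = run_chain(
    stub_llm,
    ["Summarise: {input}", "Translate to French: {input}"],
    "hello",
)
```

A robust chaining tool adds what a bare loop lacks: per-step inspection, error handling, and branching between steps.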

Six GPT Best Practices For Improved Results

Here are six best practices to improve your prompt engineering results. When interacting with LLMs, you must have a vision of what you want to achieve and mimic the initiation of that vision. The process of mimicking is referred to as prompt design, prompt engineering or casting.

COBUS GREYLING
5 min read
Articles

Retrieval Augmented Generation (RAG) Safeguards Against LLM Hallucination

A contextual reference increases LLM response accuracy and negates hallucination. In this article are a few practical examples to illustrate how explicit and relevant context should be part of prompt engineering.

COBUS GREYLING
4 min read
Articles
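The practice of injecting explicit, relevant context into the prompt can be sketched as a simple prompt builder (the instruction wording and helper name are illustrative assumptions, not taken from the article):

```python
def build_grounded_prompt(question, context_chunks):
    """Inject retrieved, relevant context so the model answers from it, not from memory."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "When was the product launched?",
    ["The product launched in March 2022.", "It supports three languages."],
)
```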

OpenAI GPT Chat Completions Accounts For 97% Of Usage

COBUS GREYLING
6 min read
Articles

Build Your Own ChatGPT or HuggingChat

By making use of Haystack and Open Assistant, you are able to create a HuggingChat- or ChatGPT-like application.

COBUS GREYLING
5 min read
Articles

OpenAI Researched The Labor Market Impact Of GPT-4

This research produced a list of occupations and their level of exposure with the advent of LLMs and Generative AI.

COBUS GREYLING
4 min read
Articles

ChatGPT APIs & Managing Conversation Context Memory

Currently, ChatGPT is powered by the most advanced OpenAI language models. While OpenAI has made the APIs available to these models, it does not inherently manage conversation context & memory.

COBUS GREYLING
4 min read
Articles
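Because the API is stateless, the caller must resend the relevant history with every request; one common approach is to trim the message list to a budget, sketched below (the word-count token approximation and function name are assumptions for illustration):

```python
def trim_history(messages, max_tokens=3000):
    """Keep the system message plus the most recent turns that fit the budget.

    Tokens are approximated by whitespace word count for illustration;
    a real implementation would use the model's tokenizer.
    """
    system = [m for m in messages if m["role"] == "system"][:1]
    turns = [m for m in messages if m["role"] != "system"]
    used = sum(len(m["content"].split()) for m in system)
    kept = []
    for m in reversed(turns):  # newest turns first
        cost = len(m["content"].split())
        if used + cost > max_tokens:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```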

Practical Examples of OpenAI Function Calling

Three use-cases for OpenAI Function Calling with practical code examples.

COBUS GREYLING
6 min read
Articles
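The general shape of OpenAI Function Calling is that the model returns a function name plus JSON-encoded arguments, which your code must parse and dispatch to a local function; a minimal sketch (the get_weather tool and its return value are hypothetical):

```python
import json

# Hypothetical local tool; the name and signature are illustrative.
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(function_call):
    """Route a model-produced function_call payload to a local Python function."""
    # The model returns arguments as a JSON string, not a dict.
    args = json.loads(function_call["arguments"])
    return TOOLS[function_call["name"]](**args)

reply = dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'})
```

The tool's result is typically sent back to the model in a follow-up message so it can compose the final answer.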

LLM Prompt Injection Attacks & Testing Vulnerabilities With ChainForge

Using the ChainForge IDE to batch test and measure prompt injection detection.

COBUS GREYLING
4 min read
Articles

Agents

Autonomous Agents in the context of Large Language Models

COBUS GREYLING
3 MIN READ

Prompt Chaining

There has been a recent surge in Visual Programming tools which enable developers to chain large language model prompts into an application, thus creating a conversational user interface.

COBUS GREYLING
5 MIN READ

Intent Creation & Extraction With Large Language Models

In previous articles, I argued that a data-centric approach should be taken when engineering training data for Natural Language Understanding (NLU). Building on this, this article will discuss the importance of creating and using intents when working with Large Language Models (LLMs).

COBUS GREYLING
4 MIN READ

Prior To Chatbot Deployment, It Is Essential That Intents Are Ground-Truthed To Ensure Accuracy

Targeted levels of intent recognition can be reached quickly by verifying intents before deployment, rather than adopting a corrective strategy after the fact.

COBUS GREYLING
2 MIN READ
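Verifying intents against ground-truthed labels before deployment can start with something as simple as a match-rate check; a minimal sketch (the labels and helper name are illustrative):

```python
def intent_accuracy(predicted, ground_truth):
    """Fraction of utterances whose predicted intent matches the verified label."""
    if len(predicted) != len(ground_truth):
        raise ValueError("lists must be the same length")
    hits = sum(p == g for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)

# One utterance of three is misclassified in this toy example.
score = intent_accuracy(["greet", "refund", "bye"], ["greet", "refund", "help"])
```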

Prompt Drift & Chaining

The notion of creating workflows (chains) which leverage Large Language Models (LLMs) is necessary and needed. But there are a few considerations, one of which is Prompt Drift.

COBUS GREYLING
2 MIN READ

Generative AI & The New Category of LLM Powered Applications

New approaches and tools built upon the strength of Generative AI are emerging to create engaging conversational experiences.

COBUS GREYLING
4 MIN READ

The Anatomy Of Large Language Model (LLM) Powered Conversational Applications

In order to ensure the successful deployment of any application built on LLM API calls, it is essential to generate true business value from it.

COBUS GREYLING
4 MIN READ

Google Cloud Vertex AI & Generative AI

Google launched Vertex AI on 18 May 2021 at Google I/O, and it seems like the product has fared well considering all the recent advances in LLMs and Generative AI.

COBUS GREYLING
3 MIN READ

LLMs will not be taking the place of traditional chatbot NLU in the near future.

NLU pipelines are well-honed and excel at extremely precise tweaking of intents and entities, with no significant expense and rapid iteration cycles.

COBUS GREYLING
3 MIN READ

Five Advantages of NLU

I gained an appreciation for the power of Natural Language Understanding (NLU) engines while experimenting with the predictive and classification capabilities of Large Language Models (LLMs).

COBUS GREYLING
2 MIN READ

Prompt Engineering, OpenAI & Modes

What role can prompt engineering play in preventing LLM hallucination, and what constitutes a good LLM prompt? Furthermore, how are OpenAI's models impacting this?

COBUS GREYLING
4 MIN READ

NLU & NLG Should Go Hand-In-Hand

Traditional NLU Can Be Leveraged By Following A Hybrid NLU & NLG Approach

COBUS GREYLING
3 MIN READ

How To Create A Custom Fine-Tuned Prediction Model Using Base GPT-3 models

LLMs can be divided into two categories: generative & predictive. The generative capabilities of LLMs have been the subject of much attention and discussion, and rightly so – they are incredibly impressive.

COBUS GREYLING
7 MIN READ

Chat Markup Language (ChatML) Is Important For A Number Of Reasons

Here I will discuss why ChatML, introduced alongside OpenAI's ChatGPT and Whisper APIs on 1 March 2023, is an important development that should not be overlooked.

COBUS GREYLING
3 MIN READ

Users of the ChatGPT API Will Need To Keep Track Of Context

ChatGPT is currently powered by gpt-3.5-turbo-0301, the most advanced OpenAI language model. Although OpenAI has made the API for this model accessible, it does not automatically manage conversation context...

COBUS GREYLING
3 MIN READ

Example Code & Implementation Considerations For GPT 3.5 Turbo, ChatML & Whisper

A while ago OpenAI released the API for the LLM gpt-3.5-turbo, the same model used in ChatGPT. Additionally, the Whisper speech-to-text large-v2 model is available through an API for transcription.

COBUS GREYLING
4 MIN READ

OpenAI Mode Specific Models

OpenAI has implemented modes in their playground and development interface, each one having its own dedicated Large Language Model (LLM).

COBUS GREYLING
3 MIN READ

HumanFirst & Cohere QuickStart Guide

To start using the HumanFirst [https://www.humanfirst.ai] / Cohere [https://www.cohere.ai] integration you will need a Cohere [https://cohere.ai] API key...in order to get your

COBUS GREYLING
3 MIN READ
Articles

The OpenAI GPT-3.5 Turbo Model Has A 16k Context Window

OpenAI has unveiled a new model, dubbed "gpt-3.5-turbo-16k," and I was able to submit a 14-page document to the model for summarisation.

COBUS GREYLING
5 min read
Articles

Resources
  • Blog
  • Docs
  • APIs
  • Academy
  • FAQs
  • GitHub
Connect
  • Book a demo
  • LinkedIn
  • Twitter
  • Slack
Company
  • About HumanFirst
  • Careers
  • Press & Media
  • Contact us