
Users of the ChatGPT API Will Need To Keep Track Of Context

May 5, 2023 · 3 min read


COBUS GREYLING

ChatGPT is currently powered by gpt-3.5-turbo-0301, one of OpenAI's most advanced language models. Although OpenAI has made the API for this model accessible, the API does not automatically manage conversation context.

GPT-3.5-turbo-0301 is effective at single-turn tasks that do not require any conversational history.

Therefore, a ChatGPT implementation should be able to keep track of conversation context and logical connections in order to respond appropriately.

The ChatGPT models hold no memory of past requests; all relevant information must be supplied as part of the conversation.
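As a concrete sketch, the `messages` payload of the Chat Completions API carries the whole dialog on every request. The conversation below is illustrative (it mirrors the follow-up-question pattern from OpenAI's documentation); the actual API call is shown commented out, since it requires an API key.

```python
# Each API request must carry the entire conversation so far;
# the model sees only what is in this list.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the 2020 World Series?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the 2020 World Series."},
    # The follow-up question only makes sense because the
    # previous turns are re-submitted along with it.
    {"role": "user", "content": "Where was it played?"},
]

# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0301",
#     messages=messages,
# )
```

If the two earlier turns were omitted, "Where was it played?" would have no referent, and the model could not answer correctly.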

The submitted ChatML document must include the conversation history in order to maintain context and manage dialog state. This allows the model to answer contextual questions by leveraging prior dialog turns.

OpenAI explicitly indicates that its models retain no memory of prior queries; all necessary information must therefore be provided within the conversation itself.

Remember, if the conversation exceeds the model's token limit, it must be condensed. This can be done by creating a rolling log that stores the last n dialog turns for re-submission.
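The rolling log described above can be sketched as a small helper. This is a minimal illustration that trims by turn count, as the text suggests; a production version would trim by token count instead (for example with the tiktoken library), since the model's limit is expressed in tokens.

```python
def rolling_window(messages, n_turns=4):
    """Keep the system message plus the last n_turns dialog turns.

    A minimal sketch of the rolling log described above: older
    user/assistant turns are dropped so the re-submitted
    conversation stays within the model's token limit.
    """
    system = [m for m in messages if m["role"] == "system"]
    dialog = [m for m in messages if m["role"] != "system"]
    return system + dialog[-n_turns:]
```

Each new request then re-submits `rolling_window(conversation)` rather than the full, ever-growing history. The trade-off is that context older than the window is lost to the model.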

Below are a few practical examples. You submit a sequence of messages, and the model returns a text output.

In a different example, a question relating to motor sport is submitted to the gpt-3.5-turbo-0301 model as a ChatML document.

The model gives the correct response to the last question.

Note that when a follow-up question is asked with no context provided in the ChatML document, the context is lost, and the ChatGPT API states that fact.

What is the correct way to ask a follow-up question, providing the necessary contextual reference to the system?

With the prior turns included in the ChatML document, the model gives the correct response.

Finally

OpenAI states that gpt-3.5-turbo-0301 does not pay strong attention to system messages, so important instructions are often better placed in a user message, in addition to careful context management.
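One way to act on this is to fold the instructions into the user turn itself rather than relying on the system role. The helper below is a hypothetical sketch of that pattern, not an official API construct; the instruction wording is an assumption.

```python
def with_instructions(question: str, instructions: str) -> dict:
    """Prepend instructions to the user message itself, since
    gpt-3.5-turbo-0301 pays less attention to the system message."""
    return {"role": "user", "content": f"{instructions}\n\n{question}"}

# The instruction travels inside the user message, where this
# model is more likely to follow it.
turn = with_instructions(
    "Where was the 2020 World Series played?",
    "Answer in one short sentence.",
)
```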

If the model's generated output is not satisfactory, it is necessary to try different approaches to improve it.

This could include making the instructions clearer, specifying the answer format, asking the model to work through the steps sequentially (chain-of-thought prompting), or even fine-tuning a base GPT-3 model (the GPT-3.5-turbo models cannot be fine-tuned).
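These approaches can be illustrated side by side with a hypothetical support-ticket prompt; the wording and the `{ticket_text}` placeholder are assumptions for illustration, not a prescribed template.

```python
# A vague prompt, likely to produce unpredictable output:
vague = "Summarize this support ticket."

# The same task with clearer instructions, an explicit answer
# format, and a chain-of-thought nudge:
improved = (
    "Summarize the support ticket below.\n"
    "Format your answer as one line 'Issue: ...' "
    "and one line 'Action: ...'.\n"
    "Work through the ticket step by step "
    "before writing the summary.\n\n"
    "Ticket: {ticket_text}"
)
```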

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
