Articles

Six GPT Best Practices For Improved Results

COBUS GREYLING
July 26, 2023 · 5 min read

Here are six best practices to improve your prompt engineering results. When interacting with LLMs, you should have a clear vision of the output you want to achieve and demonstrate that vision in the prompt itself. This process of demonstration is referred to as prompt design, prompt engineering, or casting.

Here Are Six Strategies For Better Results

Write Detailed Prompts

To ensure a relevant response, make sure to include any important details or context in your requests. Failing to do so leaves the burden on the model to guess what you truly intend.

As far as possible, OpenAI advises users to provide detailed input to the LLM when performing prompt engineering. For instance, users should specify if they require longer answers or brief replies.

Users should also state whether the LLM responses need to be simplified or are intended for experts. The best approach is to demonstrate the required response to the LLM.
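
For instance, here is a sketch of a vague request versus a detailed one; the meeting-notes task and the wording are purely illustrative:

```python
# A vague request leaves length, audience, and format for the model to guess.
vague_prompt = "Summarize the meeting notes."

# A detailed request spells out audience, length, and format up front.
detailed_prompt = (
    "Summarize the meeting notes below in one paragraph of roughly 80 words "
    "for an executive audience. Then list each decision taken as a bullet "
    "point, followed by the open action items and their owners.\n\n"
    "Meeting notes:\n"
    "{notes}"
)
```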

Describe To The Model The Persona It Should Adopt

In the OpenAI playground, for example, the persona is defined in the system message. This determines the style of the LLM's responses.
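
A minimal sketch of the same idea using the openai Python package; the model name, persona, and question are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        # The system message defines the persona, which sets the response style.
        {
            "role": "system",
            "content": "You are a patient teacher who explains technical "
                       "concepts to complete beginners, using simple analogies.",
        },
        {"role": "user", "content": "What is a large language model?"},
    ],
)
print(response.choices[0].message.content)
```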

Clearly Segment Prompts

A well-engineered prompt should have three components: context, data, and continuation.

The context is set first, describing to the generation model what the objective is.

The data is the material the model learns from in context.

The continuation description instructs the generative model on how to continue: it tells the LLM how to use the context and data, whether to summarise, extract keywords, or hold a conversation over a few dialog turns.

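A minimal sketch with the three elements labelled in a single prompt; the email-routing task is an assumption for illustration:

```python
prompt = (
    # Context: the objective for the generation model.
    "You are given a customer email and must route it to the right team.\n\n"
    # Data: the material the model works from.
    "Email: 'Hi, I was charged twice for my subscription this month.'\n\n"
    # Continuation: how the model should use the context and data.
    "Classify the email as Billing, Technical, or Other, and extract the key words:"
)
```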

With the advent of ChatML, users are required to segment prompts by role, as seen in the example below:
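
A sketch of such a segmented request; the model name and message contents are illustrative:

```python
request = {
    "model": "gpt-3.5-turbo",  # the model is defined first
    "messages": [
        # system: a description of the assistant's behaviour
        {"role": "system", "content": "You are a helpful assistant that summarises support tickets."},
        # user: the content the model must act on
        {"role": "user", "content": "Summarise: 'My March invoice never arrived and billing has not replied.'"},
        # assistant: the model's reply (or a supplied example of one)
        {"role": "assistant", "content": "The customer is missing their March invoice and awaits a billing response."},
    ],
}
```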

You can see the model is defined and, within messages, each role is assigned: the system role carries a description of the assistant's behaviour, the user role carries the user's content, and the assistant role carries the model's reply.

Decompose The Sequence Of Steps To Complete The Task

This can also be referred to as chain-of-thought prompting, with the aim of soliciting chain-of-thought reasoning from the LLM.

In essence, chain-of-thought reasoning can be achieved by creating intermediate reasoning steps and incorporating them in the prompt.
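
A sketch of such a prompt, with one worked example supplying the intermediate steps; the arithmetic questions are illustrative:

```python
cot_prompt = (
    # Worked example: the answer spells out the intermediate reasoning steps.
    "Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?\n"
    "A: The cafeteria started with 23 apples and used 20, leaving 23 - 20 = 3. "
    "They bought 6 more, so 3 + 6 = 9. The answer is 9.\n\n"
    # New question: the model is expected to imitate the step-by-step reasoning.
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A:"
)
```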


Eliciting complex reasoning from the LLM in this way improves prompt results significantly.

Provide Examples via Few-Shot Training

The example below shows how a number of examples are given via a few-shot approach before the model is asked for the final answer:
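
A minimal few-shot sketch along those lines; the sentiment task and the reviews are assumptions for illustration:

```python
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    # Labelled examples supplied before the final question.
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment: Positive\n\n"
    "Review: It stopped working after a week and support never replied.\n"
    "Sentiment: Negative\n\n"
    # The final item the model must label.
    "Review: Setup took five minutes and it just works.\n"
    "Sentiment:"
)
```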

Provide The Output Length

You can request the model to generate outputs with a specific target length. This can be specified in terms of the count of words, sentences, paragraphs, or bullet points.

However, asking the model to generate an exact number of words is not very precise.

The model is more accurate in producing outputs with an exact number of paragraphs or bullet points.
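
Two illustrative length-controlled requests reflecting that difference; the summarisation task is an assumption:

```python
# Approximate: word counts are followed only loosely by the model.
word_count_prompt = "Summarise the article below in about 50 words:\n\n{article}"

# More reliable: structural counts such as bullet points are honoured better.
bullet_prompt = "Summarise the article below as exactly 3 bullet points:\n\n{article}"
```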

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
