Articles

Six GPT Best Practices For Improved Results

COBUS GREYLING
July 26, 2023
·
5 min read

Here are six best practices to improve your prompt engineering results. When interacting with LLMs, you must have a clear vision of what you want to achieve and express that vision in the way you initiate the interaction. This process is referred to as prompt design, prompt engineering, or casting.

Here Are Six Strategies For Better Results

Write Detailed Prompts

To ensure a relevant response, make sure to include any important details or context in your requests. Failing to do so leaves the burden on the model to guess what you truly intend.

OpenAI advises users to provide as much detail as possible in their input when performing prompt engineering. For instance, users should specify whether they require longer answers or brief replies.

Also state whether the LLM responses need to be simplified or whether they are intended for experts. The best approach is to demonstrate the required response to the LLM.
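As a rough sketch of the difference this makes (the task, wording, and model name are illustrative assumptions, using the pre-1.0 openai Python SDK), compare a vague request with a detailed one:

import openai  # assumes the OPENAI_API_KEY environment variable is set

# Vague: the model has to guess the audience, length, and format.
vague_prompt = "Summarize the meeting notes."

# Detailed: audience, length, and format are all spelled out.
detailed_prompt = (
    "Summarize the meeting notes below in 3 bullet points, "
    "in plain language suitable for a non-technical executive, "
    "keeping each bullet under 20 words.\n\n"
    "MEETING NOTES:\n{notes}"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": detailed_prompt.format(notes="...")}],
)
print(response["choices"][0]["message"]["content"])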

Describe To The Model The Persona It Should Adopt

Within the OpenAI playground, the persona is defined in the system message. This determines the style of the LLM's responses.
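As a minimal sketch of the same idea via the API (the persona text and model name are illustrative assumptions), the persona goes into the system message:

import openai

messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly airline customer-support agent. "
            "Answer concisely, in a reassuring tone, and avoid jargon."
        ),
    },
    {"role": "user", "content": "My flight was cancelled. What are my options?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])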

Clearly Segment Prompts

A well-engineered prompt should have three components: context, data, and continuation.

The context needs to be set, and this describes to the generation model what the objectives are.

The data will be used for the model to learn from.

And the continuation description instructs the generative model on how to continue. The continuation statement is used to inform the LLM on how to use the context and data. It can be used to summarise, extract key words, or have a conversation with a few dialog turns.

With the advent of ChatML, users are required to segment prompts in this way. The model is specified, and within the messages the system role is defined with a description, the user role with its content, and the assistant role with the model's reply.
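Here is a minimal sketch of such a segmented, ChatML-style prompt; the task and wording are illustrative assumptions:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Context: what the objective is.
        {"role": "system",
         "content": "You extract the key topics from customer support transcripts."},
        # Data plus continuation: the text to work on and how to continue.
        {"role": "user",
         "content": "Transcript:\n<transcript text here>\n\n"
                    "List the five main topics as short bullet points."},
    ],
)
print(response["choices"][0]["message"]["content"])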

Decompose The Sequence Of Steps To Complete The Task

This can also be referred to as chain-of-thought prompting, with the aim of eliciting step-by-step reasoning from the LLM.

In essence, chain-of-thought reasoning can be achieved by creating intermediate reasoning steps and incorporating them in the prompt.

Soliciting this kind of complex reasoning from the LLM improves the prompt results significantly.
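As a minimal sketch, a chain-of-thought prompt can include one worked example that spells out its intermediate reasoning steps before posing the new question (the tennis-ball example below is the widely cited one from the chain-of-thought literature):

# The first Q/A pair spells out its intermediate reasoning steps, nudging the
# model to reason step by step on the new question. Sent as the user message.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""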

Provide Examples via Few-Shot Training

With a few-shot approach, a number of examples are given before the model is asked for the final answer.
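A minimal sentiment-classification sketch (the labels and review sentences are illustrative assumptions) could look like this:

# Three labelled examples, then the item the model should complete.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped charging after two weeks."
Sentiment: Negative

Review: "Setup was painless and support answered within minutes."
Sentiment:"""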

Provide The Output Length

You can request the model to generate outputs with a specific target length. This can be specified in terms of the count of words, sentences, paragraphs, or bullet points.

However, asking the model to generate an exact number of words is not very precise.

The model is more accurate in producing outputs with an exact number of paragraphs or bullet points.
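As a minimal sketch (the wording is an illustrative assumption), ask for a fixed count of bullet points or paragraphs rather than an exact word count:

# Counting bullet points is honoured more reliably than counting words.
length_prompt = (
    "Summarize the product announcement below in exactly 3 bullet points, "
    "each a single sentence:\n\n<announcement text here>"
)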

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
