Articles

Iterative Prompting Pre-Trained Large Language Models

COBUS GREYLING
August 21, 2023
·
3 min read

As opposed to model fine-tuning, prompt engineering is a lightweight alternative that ensures the prompt holds contextual information via an iterative process.

As I always say, the complexity of any implementation needs to be accommodated somewhere.

In the case of LLMs, the model can be fine-tuned, or advanced prompting can be used in an autonomous agent or via prompt-chaining.

In the latter two cases, the complexity is absorbed by prompt engineering rather than by LLM fine-tuning.
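As a loose illustration of the prompt-chaining case, here is a minimal sketch in which the output of one prompt feeds the next, so the complexity lives in the prompts rather than in fine-tuning. The `call_llm` helper is a hypothetical stand-in for whichever LLM API is used; it is not from the paper.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError

def answer_via_prompt_chain(question: str) -> str:
    # Step 1: have the LLM break the question into sub-questions.
    sub_questions = call_llm(
        "List the sub-questions needed to answer:\n" + question
    )
    # Step 2: the second prompt in the chain carries the first step's
    # output as context, so all the implementation complexity sits here.
    return call_llm(
        f"Question: {question}\n"
        f"Sub-questions:\n{sub_questions}\n"
        "Answer the original question using the breakdown above."
    )
```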

The objective of contextual iterative prompting is to absorb the complexity demanded by a specific implementation in the prompts themselves, rather than offloading it to model fine-tuning.

Contextual iterative prompting is not a new approach; what this paper considers is the creation and automation of an iterative Context-Aware Prompter.

At each step (each dialog turn) the Prompter learns to process the query and previously gathered evidence, and composes a prompt which steers the LLM to recall the next piece of knowledge.
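A minimal sketch of that loop follows, assuming the same hypothetical `call_llm` helper. Note that the paper's Prompter is a learned component; `compose_prompt` below is a plain function used purely for illustration.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    raise NotImplementedError

def compose_prompt(query: str, evidence: list[str]) -> str:
    """Stand-in Prompter: combine the query with previously recalled knowledge."""
    recalled = "\n".join(f"- {fact}" for fact in evidence) or "- (none yet)"
    return (
        f"Question: {query}\n"
        f"Knowledge recalled so far:\n{recalled}\n"
        "State the next fact needed to answer the question."
    )

def iterative_prompting(query: str, max_steps: int = 3) -> list[str]:
    evidence: list[str] = []
    for _ in range(max_steps):             # one pass per step / dialog turn
        prompt = compose_prompt(query, evidence)
        evidence.append(call_llm(prompt))  # LLM recalls the next piece of knowledge
    return evidence                        # e.g. [C1, C2, ...] used to answer Q
```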

This process is reminiscent of soft prompts and prompt tuning.

The paper claims that their proposed Context-Aware Prompter Design outperforms existing prompting methods by notable margins.

The approach shepherds the LLM to recall a series of stored knowledge (e.g., C1 and C2) that is required for the multi-step inference (e.g., answering Q), analogous to how humans develop a “chain of thought” for complex decision making.

The automated process establishes a contextual chain of thought and mitigates the generation of irrelevant facts and hallucination by dynamically synthesising prompts based on the current step's context.

The paper also confirms the now-accepted approach to prompting, which includes the following (sketched in code after the list):

  1. Prompts need to be contextual, including previous conversation context and dialog turns.
  2. Some kind of automation will have to be implemented to collate, and in some instances summarise previous dialog turns to be included in the prompt.
  3. Supplementary data, acting as a contextual reference for the LLM, needs to be selected, curated and truncated to an efficient length for each prompt at inference.
  4. The prompt needs to be composed in such a way that it does not unduly increase inference time.
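To make the list concrete, here is a minimal sketch of such prompt assembly: recent dialog turns are kept verbatim, older turns are compacted (a real system would summarise them, for example with another LLM call), and supplementary reference data is truncated to a rough token budget so the prompt stays efficient at inference. The function and the word-count token estimate are illustrative assumptions, not the paper's implementation.

```python
def estimate_tokens(text: str) -> int:
    """Crude word-count proxy; a real system would use the model's tokenizer."""
    return len(text.split())

def build_prompt(query: str, turns: list[str], reference_docs: list[str],
                 budget: int = 1500) -> str:
    # 1. Contextual: keep the last few dialog turns verbatim and compact
    #    older turns into one line (stand-in for proper summarisation).
    recent, older = turns[-4:], turns[:-4]
    history = (["Earlier turns (compacted): " + " | ".join(older)] if older else []) + recent

    # 2. Supplementary data: add curated reference passages until the token
    #    budget is hit, keeping the prompt short enough for fast inference.
    used = estimate_tokens(query) + sum(estimate_tokens(t) for t in history)
    selected = []
    for doc in reference_docs:
        cost = estimate_tokens(doc)
        if used + cost > budget:
            break
        selected.append(doc)
        used += cost

    # 3. Compose the final prompt: conversation context, references, then the query.
    return "\n".join(
        ["Conversation so far:", *history,
         "Reference material:", *selected,
         f"User question: {query}"]
    )
```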

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
