Eight Prompt Engineering Implementations

COBUS GREYLING
August 2, 2023 · 6 min read

In essence, the discipline of Prompt Engineering is simple and accessible. But as the LLM landscape develops, prompts are becoming programmable and are being incorporated into more complex Generative AI structures.

The implementation of Prompt Engineering within Large Language Model applications can be divided into eight broad categories:

  1. Static Prompts
  2. Prompt Composition
  3. Prompt Templates
  4. Contextual Prompts
  5. Prompt Chaining
  6. Prompt Pipelines (Retrieval Augmented Generations)
  7. Autonomous Agents
  8. Prompt Tuning / Soft Prompts

Static Prompts

The generative capabilities of LLMs are one of their key features. Prompt Engineering is the avenue through which data is presented to the model, and it dictates how the LLM acts on that data.

Prompts can follow a zero-shot, one-shot or few-shot learning approach. The generative capabilities of LLMs are greatly enhanced by a one-shot or few-shot approach, where example data is included in the prompt.

A powerful technique is to show the LLM, via examples within the prompt, how to approach the task and how to format the output.

A static prompt is just that: plain text with no templating, injection or external information.
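As a minimal sketch (the prompt text and the commented-out model call are illustrative, not from the original article), a static few-shot prompt is simply a fixed string sent to the model as-is:

    # A static prompt: fixed text, no placeholders or runtime injection.
    # The few-shot examples are hard-coded into the prompt itself.
    static_prompt = (
        "Classify the sentiment of the sentence as Positive or Negative.\n\n"
        "Sentence: I loved the quick response from support.\n"
        "Sentiment: Positive\n\n"
        "Sentence: The app crashes every time I open it.\n"
        "Sentiment: Negative\n\n"
        "Sentence: The new dashboard is intuitive and fast.\n"
        "Sentiment:"
    )

    # The exact client call depends on the LLM provider being used, e.g.:
    # response = llm.complete(static_prompt)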

Read more here.

Prompt Templates

The next step from static prompts is prompt templating.

Prompt templating is where a static prompt is converted into a template with key values being replaced with placeholders. The placeholders are replaced with application values/variables at runtime.

Some refer to templating as entity injection or prompt injection.

In the template example from DUST you can see the placeholders ${EXAMPLES:question}, ${EXAMPLES:answer} and ${QUESTIONS:question}; these placeholders are replaced with values at runtime.

Prompt templating allows prompts to be stored, re-used, shared, and incorporated programmatically into applications.
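A rough sketch of the same idea in plain Python (this is not the DUST syntax itself, just an illustration of named placeholders filled at runtime):

    # A prompt template: the static prompt becomes a reusable pattern
    # with placeholders that are filled with application values at runtime.
    template = (
        "Answer the question using the example as a guide.\n\n"
        "Example question: {example_question}\n"
        "Example answer: {example_answer}\n\n"
        "Question: {question}\n"
        "Answer:"
    )

    prompt = template.format(
        example_question="What is the capital of France?",
        example_answer="Paris",
        question="What is the capital of Japan?",
    )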

Read more here.

Prompt Composition

A next step is to have a library of prompt templates which are combined at run-time to create a more advanced prompt. While prompt composition adds flexibility and programmability, it also introduces quite a bit of complexity.

The notebook in the article below illustrates how a contextual prompt is composed from different templates, each with its own placeholders for variable injection. Hence parts of a prompt can be reused.
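A hypothetical sketch of composition: several smaller templates are kept in a library and assembled into a single prompt at run-time (the fragment names and contents below are invented for illustration):

    # A small library of reusable prompt fragments, each with its own placeholders.
    persona_template = "You are a {role} who answers in a {tone} tone.\n"
    context_template = "Use only the following context:\n{context}\n"
    task_template = "Question: {question}\nAnswer:"

    # Composition: the fragments are combined into one advanced prompt at run-time.
    prompt = (
        persona_template.format(role="support assistant", tone="concise")
        + context_template.format(context="Refunds are processed within 5 business days.")
        + task_template.format(question="How long do refunds take?")
    )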

Read more here.

Contextual Prompts

Even for LLMs, context is very important for increasing accuracy and reducing hallucination. From the examples in the article below it is clear that a little context can go a long way in improving the accuracy of engineered prompts.

Contextual prompting mitigates LLM hallucination to a large extent.

Contextual prompting provides a frame of reference to the LLM when a response is generated.
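A minimal sketch of a contextual prompt, where a reference passage is placed in the prompt so the model answers from it rather than from memory (the passage and wording are illustrative):

    # Contextual prompt: a reference passage is supplied alongside the question,
    # giving the model a frame of reference and reducing hallucination.
    context = (
        "HumanFirst is a data-centric productivity platform for exploring, "
        "labelling and improving conversational data."
    )

    contextual_prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context: {context}\n\n"
        "Question: What kind of platform is HumanFirst?\n"
        "Answer:"
    )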

Read more here.

Prompt Chaining

Prompt Chaining, also referred to as Large Language Model (LLM) Chaining, is the notion of creating a chain consisting of a series of model calls. These calls follow one another, with the output of one node in the chain serving as the input to the next.

Each chain node is intended to target a small, well-scoped sub-task; hence one or more LLMs are used to address multiple sequenced sub-components of a task.

In essence, prompt chaining leverages a key principle in prompt engineering known as chain-of-thought prompting.

The principle of chain-of-thought prompting is used not only in chaining, but also in Agents and Prompt Engineering in general.

Chain of thought prompting is the notion of decomposing a complex task into refined smaller tasks, building up to the final answer.
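A rough two-node chain, assuming a generic call_llm(prompt) helper (hypothetical, not a specific library's API): the output of the first call becomes the input of the second.

    def call_llm(prompt: str) -> str:
        """Placeholder for a call to any LLM provider's completion API."""
        raise NotImplementedError

    def summarise_then_translate(document: str) -> str:
        # Node 1: a small, well-scoped sub-task -- summarise the document.
        summary = call_llm(f"Summarise the following text in two sentences:\n\n{document}")

        # Node 2: the output of node 1 serves as the input of node 2 -- translate the summary.
        translation = call_llm(f"Translate the following summary into French:\n\n{summary}")
        return translation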

Read more here.

Prompt Pipelines

In Machine Learning, a pipeline can be described as an end-to-end construct which orchestrates a flow of events and data.

The pipeline is kicked off or initiated by a trigger; based on certain events and parameters, a flow is followed which results in an output.

In the case of a prompt pipeline, the flow is in most cases initiated by a user request. The request is directed to a specific prompt template.

Prompt Pipelines can also be described as an intelligent extension to prompt templates.

The variables or placeholders in the pre-defined prompt template are populated (a step also known as prompt injection) with the question from the user and with the knowledge retrieved from the knowledge store.

This process can be highly dynamic and autonomous, crafting the desired prompt for the given scenario and including contextual information that serves as few-shot examples.
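A compressed sketch of such a pipeline, assuming hypothetical search_knowledge_store and call_llm helpers: the user request triggers retrieval, the question and the retrieved knowledge populate a template, and the resulting prompt is sent to the model.

    def search_knowledge_store(query: str) -> str:
        """Placeholder for vector or keyword search over a knowledge store."""
        raise NotImplementedError

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM completion call."""
        raise NotImplementedError

    RAG_TEMPLATE = (
        "Answer the question using only the knowledge below.\n\n"
        "Knowledge:\n{knowledge}\n\n"
        "Question: {question}\n"
        "Answer:"
    )

    def answer(question: str) -> str:
        # The user request initiates the pipeline and drives the retrieval step.
        knowledge = search_knowledge_store(question)
        # The template placeholders are populated with the question and the retrieved knowledge.
        prompt = RAG_TEMPLATE.format(knowledge=knowledge, question=question)
        return call_llm(prompt)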

Read more here.

Autonomous Agents

With LLM-related operations there is an obvious need for automation. Currently this automation takes the form of what are called agents.

Prompt Chaining executes a predetermined, fixed sequence of actions.

The attraction of Agents is that they do not follow a predetermined sequence of events; Agents can maintain a high level of autonomy.

Agents have access to a set of tools, and any request which falls within the ambit of these tools can be addressed by the agent. The execution pipeline lends autonomy to the Agent, and a number of iterations might be required until the Agent reaches the Final Answer.
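A highly simplified agent loop (illustrative only; pick_action and the tools are hypothetical stand-ins): the agent is not limited to a fixed sequence, but repeatedly picks a tool until it can produce a Final Answer.

    def pick_action(request: str, scratchpad: list[str]) -> tuple[str, str]:
        """Placeholder: ask the LLM which tool to use next, or 'final' with an answer."""
        raise NotImplementedError

    # The set of tools available to the agent (stand-in implementations).
    TOOLS = {
        "search": lambda query: "search results for: " + query,
        "lookup_order": lambda order_id: "status of order " + order_id,
    }

    def run_agent(request: str, max_iterations: int = 5) -> str:
        scratchpad: list[str] = []
        for _ in range(max_iterations):
            # The sequence of actions is not predetermined: the LLM decides each step.
            action, argument = pick_action(request, scratchpad)
            if action == "final":
                return argument  # the Final Answer
            observation = TOOLS[action](argument)
            scratchpad.append(f"{action}({argument}) -> {observation}")
        return "No final answer within the iteration limit."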

Read more here.

Prompt Tuning / Soft Prompts

Soft prompts are created during the process of prompt tuning.

Unlike hard prompts, soft prompts cannot be viewed and edited as text. A soft prompt consists of an embedding, a string of numbers, that derives knowledge from the larger model.

A clear disadvantage is the lack of interpretability of soft prompts: the AI discovers prompts relevant to a specific task but cannot explain why it chose those embeddings. Like deep learning models themselves, soft prompts are opaque.

Soft prompts act as a substitute for additional training data. Researchers recently estimated that a good language classifier prompt is worth hundreds to thousands of extra data points.
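A conceptual sketch in PyTorch, under the assumption of a frozen base model whose input embeddings we can intercept: the soft prompt is a small trainable embedding matrix prepended to the input embeddings, and during prompt tuning only those vectors receive gradient updates.

    import torch

    class SoftPrompt(torch.nn.Module):
        """A trainable 'prompt' of n_tokens embedding vectors, not readable as text."""
        def __init__(self, n_tokens: int, embedding_dim: int):
            super().__init__()
            self.prompt = torch.nn.Parameter(torch.randn(n_tokens, embedding_dim) * 0.02)

        def forward(self, input_embeddings: torch.Tensor) -> torch.Tensor:
            # Prepend the soft prompt to every sequence in the batch.
            batch_size = input_embeddings.shape[0]
            prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
            return torch.cat([prompt, input_embeddings], dim=1)

    # During prompt tuning, the base model's weights stay frozen and only
    # soft_prompt.prompt is optimised, e.g.:
    # soft_prompt = SoftPrompt(n_tokens=20, embedding_dim=768)
    # optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)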

Read more here.

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
