Agents | June 9, 2023 | 3 MIN READ


Autonomous Agents in the context of Large Language Models

As Large Language Model (LLM) implementations expand in scope, a number of requirements arise:

  1. The capacity to program LLMs and create reusable prompts.
  2. Seamless incorporation of prompts into larger applications.
  3. The ability to sequence LLM interactions into chains for larger applications.
  4. Automation of chain-of-thought prompting via autonomous agents.
  5. Scalable prompt pipelines that collect relevant data from various sources.
  6. The ability to constitute a prompt from user input and submit it to an LLM.
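The first two requirements can be illustrated with a minimal, framework-free sketch of a reusable prompt template; the template text and variable names below are hypothetical, not taken from any library:

```python
# A minimal sketch of a reusable prompt template, with no framework
# dependencies. The template text and variable names are illustrative.
REVIEW_TEMPLATE = "Summarise the following customer review in one sentence:\n{review}"

def build_prompt(template: str, **variables: str) -> str:
    """Fill a reusable template with values, producing a prompt string."""
    return template.format(**variables)

prompt = build_prompt(REVIEW_TEMPLATE, review="Great battery life, weak camera.")
```

Frameworks like LangChain wrap exactly this idea in a `PromptTemplate` abstraction, so the same template can be reused across an application.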
“Any sufficiently advanced technology is indistinguishable from magic.”
- Arthur C. Clarke

With LLM-related operations there is an obvious need for automation, and this automation is taking the form of agents.

Prompt chaining is the execution of a predetermined, fixed sequence of actions.
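A prompt chain can be sketched as a fixed pipeline where each step's output feeds the next step's prompt. The stand-in LLM and the two steps below are illustrative, not a real framework API:

```python
from typing import Callable, List

# Stand-in for a real LLM call; in practice each step would call a model.
def fake_llm(prompt: str) -> str:
    return f"RESPONSE({prompt})"

def run_chain(steps: List[Callable[[str], str]], user_input: str) -> str:
    """Execute a predetermined sequence of steps, each receiving the
    previous step's output. No branching, no autonomy."""
    text = user_input
    for step in steps:
        text = step(text)
    return text

steps = [
    lambda t: fake_llm(f"Extract the question from: {t}"),
    lambda t: fake_llm(f"Answer concisely: {t}"),
]
result = run_chain(steps, "What is prompt chaining?")
```

The defining property is that the sequence is fixed in advance, which is exactly what distinguishes a chain from an agent.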

Agents do not follow a predetermined sequence of events and can maintain a high level of autonomy.

Agents have access to a set of tools, and any request that falls within the ambit of these tools can be addressed by the agent.

The execution pipeline lends autonomy to the agent; a number of iterations might be required before the agent reaches its Final Answer.

Actions executed by the agent involve:

  1. Using a tool
  2. Observing its output
  3. Cycling to another tool
  4. Returning output to the user
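The four steps above can be sketched as a loop. The tools and the rule-based "reasoning" below are mock stand-ins for what the LLM would decide at each iteration:

```python
# Mock agent loop: use a tool -> observe its output -> possibly cycle to
# another tool -> return output to the user. The tools and the decision
# logic are illustrative stand-ins for LLM reasoning.
def search_tool(query: str) -> str:
    return "LangChain is an LLM application framework."

def calculator_tool(expr: str) -> str:
    return str(eval(expr))  # illustration only; never eval untrusted input

TOOLS = {"search": search_tool, "calculator": calculator_tool}

def run_agent(question: str, max_iterations: int = 5) -> str:
    observation = ""
    for _ in range(max_iterations):
        # A real agent asks the LLM to pick the next action; a rule stands in.
        if "calculate" in question and not observation:
            observation = TOOLS["calculator"]("2 + 2")
        elif not observation:
            observation = TOOLS["search"](question)
        else:
            return f"Final Answer: {observation}"  # observation suffices
    return "Final Answer: (iteration limit reached)"

answer = run_agent("What is LangChain?")
```

Note the iteration cap: because the number of cycles is not predetermined, production agent loops bound the iterations to guarantee termination.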
“Men have become the tools of their tools.”
- Henry David Thoreau

The diagram below shows how different action types are accessed and cycled through.

There is an observation, thought and eventually a final answer. The diagram shows how another action type might be invoked in cases where the final answer is not reached.

The output snippet below the diagram shows how the agent executes and how the chain is created in an autonomous fashion.

Taking LangChain as a reference, Agents have three concepts:

Tools

As shown earlier in the article, a number of tools can be used; a tool can be seen as a function that performs a specific duty.

Tools include Google Search, Database lookup, Python REPL, or even invoking existing chains.

Within the LangChain framework, the interface for a tool is a function that is expected to have:

  1. A string as input, and
  2. A string as output.
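In the classic LangChain API a tool is registered with a name, a string-in/string-out function, and a description the agent uses to decide when to invoke it. The sketch below mimics that interface without the LangChain dependency; the tool name and lookup data are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """Mimics the string-in/string-out tool interface described above."""
    name: str
    func: Callable[[str], str]
    description: str  # the agent's LLM reads this to decide when to use the tool

def lookup_capital(country: str) -> str:
    capitals = {"France": "Paris", "Japan": "Tokyo"}
    return capitals.get(country, "unknown")

capital_tool = Tool(
    name="CapitalLookup",
    func=lookup_capital,
    description="Returns the capital city of a country. Input: a country name.",
)
output = capital_tool.func("France")
```

The description field matters more than it looks: it is the only signal the agent's LLM has for routing a request to the right tool.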

LLM

This is the language model powering the agent. Below is an example of how the LLM is defined within the agent:
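The code snippet from the original post is not preserved here. In the classic LangChain API this is typically a single line such as `llm = OpenAI(temperature=0)` (which requires an API key, and whose import path has changed across LangChain versions). The runnable stand-in below sketches the same role without external dependencies:

```python
# Stand-in LLM exposing the minimal interface an agent needs: a temperature
# setting and a prompt -> completion callable. Purely illustrative.
class StubLLM:
    def __init__(self, temperature: float = 0.0):
        self.temperature = temperature  # 0.0 = most deterministic output

    def __call__(self, prompt: str) -> str:
        return f"completion for: {prompt}"

llm = StubLLM(temperature=0.0)
reply = llm("Which tool should be used next?")
```

Temperature 0 is the common choice for agents, since tool-selection steps benefit from deterministic output.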

Agent Types

Agents use an LLM to determine which actions to take and in what order. The agent creates a chain-of-thought sequence on the fly by decomposing the user request.

Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. — Source

Agents are effective even in cases where the question is ambiguous and demands a multi-hop approach. This can be considered an automated process of decomposing a complex question or instruction into a chain-of-thought process.

The image below illustrates how the question is decomposed and answered in a piecemeal chain-of-thought process:

Below is a list of agent types within the LangChain environment. Read more here for a full description of agent types.
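The list from the original post is an image. From memory, the classic `langchain.agents.AgentType` enum includes string identifiers such as the following; names have changed across LangChain versions, so verify them against the current documentation:

```python
# String identifiers used by the classic langchain.agents.AgentType enum.
# Listed from memory as an assumption; verify against the LangChain docs.
AGENT_TYPES = [
    "zero-shot-react-description",        # general-purpose ReAct agent
    "react-docstore",                     # ReAct over a document store
    "self-ask-with-search",               # decomposes into follow-up questions
    "conversational-react-description",   # ReAct with conversation memory
]
```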


Considering the image below, the only change made to the code was the AgentType; with the exact same configuration otherwise, the change in response is clearly visible.

For complete working code examples of LangChain Agents, read more here.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
