
Autonomous Agents in the context of Large Language Models
With Large Language Model (LLM) implementations expanding in scope, a number of requirements arise:
- The capacity to program LLMs and create reusable prompts.
- Seamless incorporation of prompts into larger applications.
- Sequencing of LLM interactions into chains for larger applications.
- Automation of chain-of-thought prompting via autonomous agents.
- Scalable prompt pipelines that collect relevant data from various sources, constitute a prompt based on user input, and submit that prompt to an LLM.
“Any sufficiently advanced technology is indistinguishable from magic.”
- Arthur C. Clarke
With LLM-related operations there is an obvious need for automation, and this automation is taking the form of agents.
Prompt Chaining is the execution of a predetermined, fixed sequence of actions.
Agents do not follow a predetermined sequence of events and can maintain a high level of autonomy.
Agents have access to a set of tools and any request which falls within the ambit of these tools can be addressed by the agent.
The Execution pipeline lends autonomy to the Agent and a number of iterations might be required until the Agent reaches the Final Answer.
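To make the distinction concrete, below is a minimal sketch using the classic LangChain Python API. The prompts, tool names and question are illustrative assumptions, and an OpenAI API key plus a SerpAPI key are assumed to be configured in the environment.
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.agents import initialize_agent, load_tools, AgentType

llm = OpenAI(temperature=0)

# Prompt chaining: a fixed, predetermined sequence of steps.
outline = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a one-line outline for an article about {topic}."))
expand = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Expand this outline into a short paragraph:\n{outline}"))
fixed_chain = SimpleSequentialChain(chains=[outline, expand])
fixed_chain.run("autonomous agents")

# Agent: no predetermined sequence; the LLM decides which tool to use next
# and iterates until it reaches a Final Answer.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("In what year was the Eiffel Tower completed, and what is that year divided by 2?")
```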

Actions executed by the agent involve the following, sketched as a simple loop after this list:
- Using a tool
- Observing its output
- Cycling to another tool
- Returning output to the user
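The snippet below is a conceptual sketch of that cycle, not LangChain's actual implementation; the `plan` helper and its fields (`tool`, `tool_input`, `is_final_answer`, `answer`) are hypothetical stand-ins for the LLM's decision step.
```python
def run_agent(llm, tools, question, max_iterations=5):
    """Conceptual agent loop: decide, act, observe, repeat until a Final Answer."""
    scratchpad = ""  # accumulated thoughts, actions and observations
    for _ in range(max_iterations):
        # The LLM decides on the next step, given the question and everything observed so far.
        decision = llm.plan(question, scratchpad, tools)  # hypothetical helper
        if decision.is_final_answer:
            return decision.answer  # returning output to the user
        observation = tools[decision.tool].run(decision.tool_input)  # using a tool
        # Observing the output, then cycling to the next decision.
        scratchpad += f"\nAction: {decision.tool}\nObservation: {observation}"
    return "Agent stopped: iteration limit reached without a Final Answer."
```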
“Men have become the tools of their tools.”
- Henry David Thoreau
The diagram below shows how different action types are accessed and cycled through.
There is an observation, a thought and eventually a final answer. The diagram also shows how another action type might be invoked in cases where the final answer has not yet been reached.
The output snippet below the diagram shows how the agent executes and how the chain is created in an autonomous fashion.

Taking LangChain as a reference, Agents have three concepts:
Tools
As shown earlier in the article, there are a number of tools which can be used; a tool can be seen as a function that performs a specific duty.
Tools include Google Search, Database lookup, Python REPL, or even invoking existing chains.
Within the LangChain framework, the interface for a tool is a function that is expected to have (see the sketch after this list):
- A string as input, and
- A string as output.
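A minimal sketch of such a tool, assuming the classic LangChain Python API and a configured SerpAPI key; the tool name and description are illustrative.
```python
from langchain.agents import Tool
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()  # assumes a SERPAPI_API_KEY is configured

# A tool is essentially a named function: a string query in, a string result out.
search_tool = Tool(
    name="Google Search",
    func=search.run,  # str -> str
    description="Useful for answering questions about current events.",
)
```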
LLM
This is the language model powering the agent. Below is an example of how the LLM is defined within the agent:
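A minimal sketch, assuming the OpenAI LLM wrapper and an OPENAI_API_KEY in the environment; the model name and temperature are assumptions.
```python
from langchain.llms import OpenAI

# The LLM that powers the agent's reasoning; temperature 0 keeps its decisions deterministic.
llm = OpenAI(temperature=0, model_name="text-davinci-003")
```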

Agent Types
Agents use an LLM to determine which actions to take and in what order. The agent creates a chain-of-thought sequence on the fly by decomposing the user request.
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. — Source
Agents are effective even in cases where the question is ambiguous and demands a multi-hop approach. This can be considered an automated process of decomposing a complex question or instruction into a chain-of-thought process.
The image below illustrates the decomposition of the question well, and how the question is answered in a piecemeal chain-of-thought process:
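A sketch of such a multi-hop run using the self-ask-with-search agent type; the question is illustrative, and a SerpAPI key is assumed to be configured.
```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.utilities import SerpAPIWrapper

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()

# The self-ask-with-search agent expects a single tool named "Intermediate Answer".
tools = [Tool(
    name="Intermediate Answer",
    func=search.run,
    description="Useful for looking up the facts needed to answer follow-up questions.",
)]

# The agent decomposes the question into follow-up questions and answers them one by one.
agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
agent.run("What is the hometown of the reigning men's US Open champion?")
```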

Below is a list of agent types within the LangChain environment. Read more here for a full description of agent types.

Considering the image below, the only change made to the code was the AgentType. The change in response is clearly visible in the image: the exact same configuration is used, with only a different AgentType.
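A sketch of what such a comparison might look like in code; the question is illustrative, and note that the conversational agent type additionally expects a memory object.
```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools, AgentType
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
question = "In what year was the Eiffel Tower completed, and what is the square root of that year?"

# Run 1: zero-shot ReAct agent.
agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run(question)

# Run 2: identical configuration, only the AgentType is changed
# (plus the memory a conversational agent expects).
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(tools, llm,
                         agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
                         memory=memory, verbose=True)
agent.run(question)
```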

For complete working code examples of LangChain Agents, read more here.
I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.