
Articles

Flowise Now Has Custom Tools With OpenAI Function Calling

COBUS GREYLING
July 31, 2023 · 5 min read

The latest Flowise version introduces Custom Tools together with OpenAI Function Calling. In this article I cover a few practical implementations.

The latest release of Flowise supports OpenAI Function Calling, whereby the LLM returns structured data which is then executed via Custom Tools.

The OpenAI LLM decides when to use the tool, and structures the user input into the data needed for the API call.
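To make this concrete: the model does not call the API itself, it returns a structured function call which Flowise then executes. A rough sketch of the 2023-era function-calling payload, using the tool and parameter names from the example that follows, looks like this:

{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "Get_Weather",
    "arguments": "{ \"Latitude\": 52.52, \"Longitude\": 13.41 }"
  }
}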

Here is a practical step-by-step working example:

For this example we want to call the Open-Meteo weather API and have OpenAI function calling process the user input and format it into the parameters the API expects.

Below is the weather API we will be calling by making use of OpenAI function calling:

https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true

Considering the image below, on the left-hand side of the FlowiseAI IDE, there is a new menu option called Tools.

From here, a new tool can be created by clicking on the blue Create New button.

As seen below, there is an existing tool called Get_Weather.

Considering the image below, when a new tool is created, the tool can be named [1], and a description [2] must be given for the tool.

The tool description will help the Agent to understand which tool to select for each user request.

The parameters can then be defined via the IDE [3]. OpenAI Function Calling will take the user input and assign it to the different parameters listed in the output schema.

In the example [3], two parameters or variables are defined: Latitude and Longitude.

The type is defined as number, and the description is used by OpenAI to decide which entity from the user input should be associated with which variable.

Hence Flowise facilitates the translation of the user input into a JSON document that matches this graphical output schema.
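For reference, the graphical output schema in this example corresponds roughly to the following JSON Schema; the property names and descriptions are simply the ones used in this walkthrough:

{
  "type": "object",
  "properties": {
    "Latitude": {
      "type": "number",
      "description": "Latitude of the location the user is asking about"
    },
    "Longitude": {
      "type": "number",
      "description": "Longitude of the location the user is asking about"
    }
  },
  "required": ["Latitude", "Longitude"]
}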

When scrolling down in the tool window, the Edit Tool section is visible. Here JavaScript code can be entered; this is the function to execute when the tool is used. The properties specified in the output schema can be used directly as variables here.

In our example, Latitude and Longitude are defined in the output schema. These values are accessible in the JavaScript simply by referencing the variables $Latitude and $Longitude.
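As a minimal sketch, and assuming a Flowise version where fetch is available inside custom tool code, the Get_Weather function body could look something like this:

// Build the Open-Meteo request from the values extracted by OpenAI function calling
const url = `https://api.open-meteo.com/v1/forecast?latitude=${$Latitude}&longitude=${$Longitude}&current_weather=true`;

// Call the weather API and return the raw response as a string;
// the LLM turns it back into a conversational answer
const response = await fetch(url);
const data = await response.json();
return JSON.stringify(data);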

From the Marketplaces tab, the OpenAI Agent can be selected; this is an agent that uses OpenAI's Function Calling functionality to pick the tool and the variables to call it with.

For simplicity I deleted the Serper component from the flow.

As seen below, the Custom Tool component has a dropdown from which the newly defined Get_Weather tool can be selected. The dropdown is populated with the names of all the tools defined within the IDE.

Below is a conversation with the agent; there are a few interesting points to note.

Firstly, the ease of integration between the conversational/LLM layer and the integration/API layer is astounding. The process of structuring unstructured conversational data and performing entity extraction is completely automated.

Secondly, even though the API response is cryptic, it is transformed from structured data into unstructured conversational data.

Thirdly, follow-up questions can be asked based on the response of the API.

The user question is: What is the weather currently at latitude 52.52 and longitude 13.41?

The API response is:

{"latitude":52.52,"longitude":13.419998,"generationtime_ms":0.225067138671875,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":38.0,"current_weather":{"temperature":22.7,"windspeed":4.7,"winddirection":86.0,"weathercode":1,"is_day":0,"time":"2023-06-25T20:00"}}

And yet the chatbot conversational response is:

The current weather at latitude 52.52 and longitude 13.41 is 25.5°C with a windspeed of 8.1 m/s coming from the south (167°). It is currently daytime.

A follow-up question is asked: What is the elevation at latitude 52.52 and longitude 13.41?

With the chatbot response: The elevation at latitude 52.52 and longitude 13.41 is 38.0 meters.

This information is embedded within the API response, and follow-up questions are answered by the Agent implicitly from the API response.

A follow-up question is then asked which is related to the API response, but whose answer is not embedded in the API response: What big city is located there?

The response of the Agent is astoundingly accurate: The latitude 52.52 and longitude 13.41 corresponds to the city of Berlin, Germany.

However, here it is clear that the line between API-related and LLM-related responses is blurred.

Finally, the way in which tools can be segregated and designed within Flowise makes for easy creation and management of tools. Subsequently, tools can be used and reused within multiple chat flows.

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
