Articles

Flowise Now Has Custom Tools With OpenAI Function Calling

COBUS GREYLING
July 31, 2023 · 5 min read

The latest Flowise version introduces Custom Tools together with OpenAI Function Calling. In this article I cover a few practical implementations.

The latest release of Flowise supports OpenAI Function Calling, whereby the LLM returns structured data which is then executed via Custom Tools.

The OpenAI LLM decides when to use a tool, and structures the user input into the data required for the API call.
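
To make the mechanism concrete, below is an illustrative sketch of the structured output the model returns when it decides a tool should be used. The tool and parameter names here are assumptions matching the Get_Weather example built later in this article; in Flowise the function name is derived from the tool's name.

{
  "function_call": {
    "name": "get_weather",
    "arguments": "{ \"Latitude\": 52.52, \"Longitude\": 13.41 }"
  }
}

Flowise parses the arguments string and passes the values into the tool's code.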

Here is a practical step-by-step working example:

For this example we want to call the Open-Meteo weather API and have the OpenAI function calling feature process the user input and format it into the right structure.

Below is the weather endpoint we will be calling by making use of OpenAI function calling:

https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true
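
Before wiring this into Flowise, the endpoint can be tested on its own. Here is a minimal sketch in JavaScript, assuming Node 18+ (where fetch is available globally) and an ES module context so top-level await is allowed:

const url = "https://api.open-meteo.com/v1/forecast" +
  "?latitude=52.52&longitude=13.41&current_weather=true";

// Fetch the forecast and log only the current_weather block
const response = await fetch(url);
const data = await response.json();
console.log(data.current_weather);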

Considering the image below, on the left-hand side of the FlowiseAI IDE, there is a new menu option called Tools.

From here, a new tool can be created by clicking on the blue Create New button.

As seen below, there is an existing tool called Get_Weather.

Considering the image below, when a new tool is created, the tool can be named [1], and a description [2] must be given for the tool.

The tool description helps the Agent understand which tool to select for each user request.

The parameters can then be defined via the IDE [3]. OpenAI Function Calling will take the user input and assign it to the different parameters listed in the output schema.

In the example [3], two parameters or variables are defined: Latitude and Longitude.

The type is set to number, and the description is used by OpenAI to determine which entity from the user input should be assigned to which variable.

Hence Flowise facilitates the translation of the user input into a JSON document conforming to this graphical output schema.
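
Under the hood, a function definition along these lines is what the OpenAI function calling API receives; the description wording below is an assumption based on the example:

{
  "name": "get_weather",
  "description": "Get the current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "Latitude": {
        "type": "number",
        "description": "The latitude of the location"
      },
      "Longitude": {
        "type": "number",
        "description": "The longitude of the location"
      }
    },
    "required": ["Latitude", "Longitude"]
  }
}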

When scrolling down in the tools window, the Edit Tool section is visible. Here JavaScript code can be entered; this is the code that executes when the tool is used. The properties specified in the output schema can be used directly as variables here.

In our example, Latitude and Longitude are defined in the output schema. These values are accessible in the JavaScript simply by referencing the variables $Latitude and $Longitude.
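
As a sketch of what the Edit Tool code could look like for the Get_Weather tool (the exact body is an assumption; the key points are that the schema properties arrive as $-prefixed variables and that the returned string is what the agent receives):

// Build the Open-Meteo URL from the schema properties injected by Flowise
const url = `https://api.open-meteo.com/v1/forecast?latitude=${$Latitude}&longitude=${$Longitude}&current_weather=true`;

// Call the endpoint and hand the raw JSON back to the agent as a string
const response = await fetch(url);
const json = await response.json();
return JSON.stringify(json);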

From the Marketplaces tab, the OpenAI Agent can be selected; this is an agent that uses OpenAI's Function Calling functionality to pick the tool to call and the variables to pass.

For simplicity I deleted the Serper component from the flow.

As seen below, the Custom Tool component has a dropdown from which the newly defined tool Get_Weather can be selected. The dropdown is populated with the names of all the tools defined within the IDE.

Below is a conversation with the agent; there are a few interesting points to note.

Firstly, the ease of integration between the conversational/LLM layer and the integration/API layer is astounding. The process of structuring unstructured conversational data and performing entity extraction is completely automated.

Secondly, even though the API response is cryptic, it is transformed from structured data into natural, conversational language.

Thirdly, follow-up questions can be asked based on the response of the API.

The user question is: What is the weather currently at latitude 52.52 and longitude 13.41?

The API response is:

{"latitude":52.52,"longitude":13.419998,"generationtime_ms":0.225067138671875,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":38.0,"current_weather":{"temperature":22.7,"windspeed":4.7,"winddirection":86.0,"weathercode":1,"is_day":0,"time":"2023-06-25T20:00"}}

And yet the chatbot conversational response is:

The current weather at latitude 52.52 and longitude 13.41 is 22.7°C with a windspeed of 4.7 km/h coming from the east (86°). It is currently nighttime.

A follow-up question is asked: What is the elevation at latitude 52.52 and longitude 13.41?

With the chatbot response: The elevation at latitude 52.52 and longitude 13.41 is 38.0 meters.

This information is embedded within the API response, and follow-up questions are answered by the Agent implicitly from the API response.

A follow-up question is then asked which is related to the API response, but whose answer is not embedded in the API response: What big city is located there?

The response of the Agent is astoundingly accurate: The latitude 52.52 and longitude 13.41 corresponds to the city of Berlin, Germany.

However, here it is clear that the line between API-related and LLM-related responses is blurred.

Finally, the way in which tools can be segregated and designed within Flowise makes for easy creation and management of tools. Subsequently, tools can be used and reused within multiple chat flows.

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI and language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
