LLMs can be divided into two categories: generative and predictive.

The generative capabilities of LLMs have been the subject of much attention and discussion, and rightly so – they are incredibly impressive and often require only zero- or few-shot learning.

The increasing popularity of Prompt Engineering has further highlighted the importance of generative tasks.

The image below shows the most common generative tasks from a Conversational AI Development Framework perspective, along with the predictive tasks.

The importance of correctly predicting an intent with a Large Language Model (LLM) is paramount, as the actions taken by a chatbot are based on this result.

To achieve this, both generative and predictive LLMs can be fine-tuned to create a custom model. OpenAI's GPT-3 Ada is an example of an LLM that can be fine-tuned to classify text into one of two classes, as seen in the image below.

As fine-tuning of LLMs becomes more commonplace, it will pave the way for mass adoption of LLMs in more formal and enterprise settings.

We are ready to begin!

The code below will allow us to access the training data from scikit-learn. The command listed displays the various categories of data that have been archived from the original 20 newsgroups website.
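A minimal sketch of this step, assuming the standard scikit-learn loader for the 20 newsgroups corpus:

```python
from sklearn.datasets import fetch_20newsgroups

# Download (or load from cache) the training split of the 20 newsgroups corpus
newsgroups_train = fetch_20newsgroups(subset="train")

# target_names lists the 20 archived newsgroup categories
for name in newsgroups_train.target_names:
    print(name)
```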

These are the 20 categories available; of these, we will make use of rec.autos and rec.motorcycles.

Below is the code to fetch the two categories we are interested in and assign the data to vehicles_dataset.
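A sketch of this step, restricting the fetch to the two vehicle-related categories:

```python
from sklearn.datasets import fetch_20newsgroups

categories = ["rec.autos", "rec.motorcycles"]

# Fetch only the two categories of interest from the training split
vehicles_dataset = fetch_20newsgroups(subset="train", categories=categories)

print(vehicles_dataset.target_names)
```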

Below, a record of the dataset is printed:
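A sketch of printing a single record, re-fetching the subset so the snippet stands alone:

```python
from sklearn.datasets import fetch_20newsgroups

vehicles_dataset = fetch_20newsgroups(
    subset="train", categories=["rec.autos", "rec.motorcycles"]
)

# Print the raw text of the first record along with its category label
print(vehicles_dataset.data[0])
print(vehicles_dataset.target_names[vehicles_dataset.target[0]])
```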

The result shows that the data is disorganised, and each entry may well contain ambiguity or inaccuracy.

We can now determine how many records and examples we have for autos and motorcycles.
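One way to sketch the counting step, mapping the numeric targets back to category names:

```python
from collections import Counter

from sklearn.datasets import fetch_20newsgroups

vehicles_dataset = fetch_20newsgroups(
    subset="train", categories=["rec.autos", "rec.motorcycles"]
)

# Count how many records belong to each of the two categories
counts = Counter(
    vehicles_dataset.target_names[label] for label in vehicles_dataset.target
)
print(counts)
```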

The printed result:

The next step is converting the data into the JSONL format defined by OpenAI here. Below is an example of the format.
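At the time of writing, OpenAI's (legacy) fine-tuning format was JSON Lines, with one prompt/completion pair per line; a sketch of what each line looks like:

```json
{"prompt": "<text of the newsgroup post>", "completion": " autos"}
{"prompt": "<text of the newsgroup post>", "completion": " motorcycles"}
```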

The code to convert that data…

Lastly, the data frame is converted to a JSONL file named vehicles.jsonl:
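A self-contained sketch of the conversion and export, using two hand-written records in place of the fetched dataset. The prompt/completion field names follow the legacy OpenAI fine-tuning format; the leading space in each completion follows OpenAI's tokenisation guidance at the time:

```python
import pandas as pd

# Illustrative stand-in records; in the article these come from vehicles_dataset
records = [
    {"prompt": "My car won't start in cold weather", "completion": " autos"},
    {"prompt": "How do I countersteer through a corner?", "completion": " motorcycles"},
]
df = pd.DataFrame(records)

# Write one JSON object per line (JSONL), as expected by the OpenAI tooling
df.to_json("vehicles.jsonl", orient="records", lines=True)

with open("vehicles.jsonl") as f:
    print(f.read())
```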

Now the OpenAI utility can be used to analyse the JSONL file.
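A sketch of the invocation, using the data-preparation tool that shipped with the (now legacy) OpenAI CLI:

```shell
# Analyse the JSONL file and suggest corrections (legacy OpenAI CLI)
openai tools fine_tunes.prepare_data -f vehicles.jsonl
```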

With the result of the analysis displayed below…

Now we can start the training process; from this point on, an OpenAI API key is required.

The command to start the fine-tuning is a single line, with the foundation GPT-3 model defined at the end. In this case it is ada. I wanted to make use of davinci, but its cost is considerably higher than that of ada, which is one of the original base GPT-3 models.
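A sketch of the command via the legacy OpenAI CLI; the `vehicles_prepared.jsonl` filename assumes the prepare_data tool wrote its output with a `_prepared` suffix, and the key value is a placeholder:

```shell
# Requires an OpenAI API key in the environment
export OPENAI_API_KEY="sk-..."

# Start a fine-tune on the prepared file, with ada as the base model
openai api fine_tunes.create -t vehicles_prepared.jsonl -m ada
```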

The output from the training process.

And lastly, the model is queried with an arbitrary sentence: So how do I steer when my hands aren't on the bars?
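A sketch of the query using the legacy (pre-1.0) openai Python client; the model name `ada:ft-...` is a placeholder for the identifier returned by the fine-tune job, and the trailing separator on the prompt is an assumption about how the training prompts were terminated:

```python
import openai  # legacy (pre-1.0) client

openai.api_key = "sk-..."  # placeholder API key

response = openai.Completion.create(
    model="ada:ft-...",  # placeholder fine-tuned model name
    prompt="So how do I steer when my hands aren't on the bars? ->",
    max_tokens=1,
    temperature=0,
)
print(response["choices"][0]["text"])
```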

The correct answer, motorcycles, is given.

Another example with the sentence: Is countersteering like benchracing only with a taller seat, so your feet aren't on the floor?

And again the correct result, motorcycles, is given.

As production implementations of LLMs become more widespread, more emphasis will be placed on fine-tuning them to maximise performance.

Nevertheless, the importance of fine-tuning LLMs is currently not being fully recognised.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces and more.
