This study from August 2023 considers 10 different prompt techniques across six LLMs and six QA datasets.
The study compared 10 different zero-shot prompt reasoning strategies across six LLMs (davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-XXL & Cohere command-xlarge) on six QA datasets ranging from scientific to medical domains.
Some notable findings were:
- As is visible in the graphed data below, some models are optimised for specific prompting strategies and data domains.
- Chain-of-Thought (CoT) reasoning strategies yield gains across domains and LLMs.
- GPT-4 has the best performance across data domains and prompt techniques.
The header image depicts the overall performance of each of the six LLMs used in the study.
The image below shows the 10 prompt techniques used in the study, with an example of each prompt and the score each technique achieved. The scores shown here are specifically for the GPT-4 model.
The prompt template structure used in the study is shown below:
- The {instruction} is placed before the question and answer choices.
- The {question} is the multiple-choice question the model is expected to answer.
- The {answer_choices} are the options provided for the multiple-choice question.
- The {cot_trigger} is placed after the question and answer choices.
{instruction}
{question}
{answer_choices}
{cot_trigger}
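As a concrete illustration, here is a minimal Python sketch of filling this template for a single multiple-choice question. The instruction text, question and answer choices are invented for the example, and the CoT trigger shown is the well-known Kojima et al. phrase ("Let's think step by step."); this is not the paper's own code.

```python
# Minimal sketch: assemble a zero-shot CoT prompt from the
# {instruction}{question}{answer_choices}{cot_trigger} template.
# The instruction, question and choices below are illustrative only.

def build_prompt(instruction: str, question: str,
                 answer_choices: list[str], cot_trigger: str) -> str:
    """Join the four template slots into a single prompt string."""
    labelled = "\n".join(f"{label}. {choice}"
                         for label, choice in zip("ABCD", answer_choices))
    return f"{instruction}\n{question}\n{labelled}\n{cot_trigger}"

prompt = build_prompt(
    instruction="Answer the following multiple-choice question.",
    question="Which planet is closest to the sun?",
    answer_choices=["Venus", "Mercury", "Earth", "Mars"],
    cot_trigger="Let's think step by step.",
)
print(prompt)
```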
The image below depicts the performance of the various prompting techniques (vertical) against the LLMs (horizontal).
Something I found interesting is that Google's Flan-T5-XXL model does not follow the trend of improved performance with the Zhou prompting technique.
The Cohere model also seems to show a significant degradation in performance with the Kojima prompting technique.
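For reference, the trigger phrases behind those two strategies are, to my understanding, "Let's think step by step." (Kojima) and "Let's work this out in a step by step way to be sure we have the right answer." (Zhou). The sketch below shows how swapping the {cot_trigger} changes the prompting strategy; complete() is a hypothetical placeholder for whatever completion API a given model is served through, not an API from the study.

```python
# Sketch: the same base prompt sent with different CoT trigger phrases.
# complete() is a hypothetical placeholder for a model-specific API call.

COT_TRIGGERS = {
    "kojima": "Let's think step by step.",
    "zhou": ("Let's work this out in a step by step way "
             "to be sure we have the right answer."),
}

def complete(model: str, prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion request."""
    raise NotImplementedError("wire this up to your model provider")

def run_strategy(model: str, base_prompt: str, strategy: str) -> str:
    """Append the chosen trigger phrase and query the model."""
    return complete(model, f"{base_prompt}\n{COT_TRIGGERS[strategy]}")
```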
The table below, taken from the paper, shows the six datasets with a description of each.
Also shown is the performance of each LLM on the six datasets. The toughest datasets for the LLMs to navigate were MedQA, MedMCQA and arguably OpenBookQA.
Throughout the study it is evident that GPT-4's performance is stellar. Also noticeable is Google's good performance on OpenBookQA.
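The reported figures are, as far as I can tell, accuracies over the multiple-choice questions. A minimal sketch of how such a score is computed is shown below; the gold and predicted answer letters are invented for illustration.

```python
# Sketch: multiple-choice accuracy for one (model, strategy, dataset) cell.
# The gold and predicted answer letters below are illustrative only.

def accuracy(gold: list[str], predicted: list[str]) -> float:
    """Fraction of questions where the predicted letter matches the gold letter."""
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

print(accuracy(["B", "A", "D", "C"], ["B", "C", "D", "C"]))  # 0.75
```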
I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language, ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.