LangChain Hub

Articles · September 21, 2023 · 5 min read


A few days ago, LangChain launched the LangChain Hub…

The LangChain Hub (Hub) is really an extension of the LangSmith studio environment and lives within the LangSmith web UI. LangSmith comprises three sub-environments: a project area, a data management area, and now the Hub, as seen in the image below:

This new development feels like a very natural extension and progression of LangSmith. The Hub is focused on prompt engineering, and serves as a studio environment for uploading, browsing, pulling, testing and managing prompts.
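In code, pulling a prompt from the Hub follows a handle-based push/pull pattern. As a minimal sketch of that workflow, using a hypothetical in-memory registry (and a made-up handle) rather than the real hosted Hub API:

```python
# Hypothetical sketch of the Hub's push/pull pattern: prompts are stored
# and retrieved by an "owner/name" handle. The real Hub is a hosted
# service; this in-memory registry only illustrates the workflow.

class PromptRegistry:
    def __init__(self):
        self._prompts = {}

    def push(self, handle: str, template: str) -> None:
        """Upload (or overwrite) a prompt template under a handle."""
        self._prompts[handle] = template

    def pull(self, handle: str) -> str:
        """Retrieve a prompt template by its handle."""
        return self._prompts[handle]

hub = PromptRegistry()
hub.push("example-user/summarise", "Summarise the following text:\n\n{text}")

prompt = hub.pull("example-user/summarise")
print(prompt.format(text="LangChain launched the LangChain Hub."))
```

The point is that the prompt lives centrally under a shared handle, so any application (or teammate) can pull the same, versionable artefact instead of copy-pasting prompt strings around.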

LangSmith is very useful in inspecting and observing the behaviour of Generative Apps. The data component is also useful for benchmarking and testing.

This foray into a Prompt hub helps encode and aggregate best practices for different approaches to Prompt Engineering. The vision of LangChain is also one where Gen Apps will become LLM agnostic and different models will be used, or model migration will take place.

The Hub can help with testing, benchmarking and model migration.
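One way to picture the model-migration angle: because a Hub prompt is decoupled from any one model, the same prompt can be run against several LLMs and the outputs compared side by side. A minimal sketch, with two hypothetical stub functions standing in for real model endpoints:

```python
# Sketch of prompt-level benchmarking across models. The two "models"
# here are hypothetical stubs standing in for real LLM API calls; the
# point is that one shared prompt template is reused unchanged for both.

PROMPT = "Answer in one word. Question: {question}"

def model_a(prompt: str) -> str:
    # Stub: pretend this calls model A's API.
    return f"[model-a] {prompt}"

def model_b(prompt: str) -> str:
    # Stub: pretend this calls model B's API.
    return f"[model-b] {prompt}"

def benchmark(question: str) -> dict:
    rendered = PROMPT.format(question=question)
    return {name: fn(rendered)
            for name, fn in [("model-a", model_a), ("model-b", model_b)]}

results = benchmark("What is the capital of France?")
for name, output in results.items():
    print(name, "->", output)
```

Swapping in a new model then means adding one entry to the benchmark table, not rewriting the prompt.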

The matrix below shows the basic structure into which the Prompts are categorised; four main categories: Use Cases, Type, Language and Models. The main categories and sub-categories are sure to grow in time.
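That categorisation can be pictured as metadata attached to each prompt, which the Hub UI then filters on. A minimal sketch with hypothetical prompt entries and field values, mirroring the four main axes:

```python
# Sketch of the Hub's category matrix as prompt metadata. The entries
# and field values here are hypothetical; the four axes (use case,
# type, language, model) mirror the main categories described above.

prompts = [
    {"name": "qa-over-docs", "use_case": "QA", "type": "chat",
     "language": "English", "model": "gpt-4"},
    {"name": "summariser", "use_case": "Summarization", "type": "chat",
     "language": "English", "model": "claude-2"},
    {"name": "sql-agent", "use_case": "Agents", "type": "chat",
     "language": "English", "model": "gpt-4"},
]

def filter_prompts(**criteria):
    """Return prompts matching every given category filter."""
    return [p for p in prompts
            if all(p.get(k) == v for k, v in criteria.items())]

for p in filter_prompts(model="gpt-4"):
    print(p["name"])
```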

In the image below a prompt is visible, with the detail of the prompt and how popular the prompt is in terms of likes, views, downloads, etc. Most importantly, the prompt can be tested within the LangSmith playground area.

For each prompt there is background information, a chat template and how to use the prompt as an object in LangChain. Notice on the right panel the prompt details.
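Using the prompt "as an object" typically means a chat template: a list of role-tagged message templates that renders into concrete messages at call time. As a minimal local sketch of that object pattern, assuming a hypothetical `ChatTemplate` class in place of LangChain's own:

```python
# Sketch of a chat prompt template as an object: a list of
# (role, template) pairs that renders into concrete messages.
# Hypothetical class; LangChain's real chat prompt templates
# play this role for prompts pulled from the Hub.

class ChatTemplate:
    def __init__(self, messages):
        self.messages = messages  # list of (role, template) pairs

    def format_messages(self, **kwargs):
        """Render every message template with the given variables."""
        return [(role, tmpl.format(**kwargs))
                for role, tmpl in self.messages]

template = ChatTemplate([
    ("system", "You are a helpful assistant that answers about {topic}."),
    ("human", "{question}"),
])

for role, content in template.format_messages(
        topic="LangChain", question="What is the Hub?"):
    print(f"{role}: {content}")
```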

Lastly, there is the playground for testing the prompt, with the input on the left, the output and execution in the middle, and the model settings on the right.

LangSmith currently covers the aspects of data collection and performance management from a cost and latency perspective.

It is also possible to observe and inspect chains in detail for each node of chained prompts.

This functionality is backed by data management, using the data to run tests, benchmarks, etc. And now the Hub assists in the building of Generative Apps.

Some closing remarks:

  • It does seem that the future will be one where Generative Apps become more model (LLM) agnostic and model migration takes place, with models becoming a utility.
  • Blue oceans are turning into red oceans very fast, and a myriad of applications and products are under threat due to developments like the expansion of LangSmith.
  • The ecosystem is still very nascent and major changes are bound to happen. It does seem that application developers do not want to offload functionality to LLMs in a black-box approach, and hence prefer to place the complexity in the prompt/pipeline phase rather than simply leveraging LLM context windows and model capabilities.
  • LangChain, together with Haystack, is taking the lead in how the Generative App landscape is unfolding, from an open-source perspective.
  • It will not be surprising if LangSmith sprawls into some form of application development (and not only application management) with pipeline and prompt chaining options. A graphic approach to prompt testing, like ChainForge, will also make sense.
  • Having a tool which assists with chunking data for vector store / semantic search / RAG implementations, and being able to test and tweak chunks, will be of great value.
  • Added to the main categories of Use Cases, Type, Language and Models could be a new category of prompt techniques, considering techniques like those shown in the image below.
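On the chunking point above, fixed-size chunking with overlap is the usual first pass when preparing documents for a vector store / RAG pipeline; being able to tweak the size and overlap is exactly where such a tool would help. A minimal sketch, where the chunk size and overlap values are arbitrary illustrations, not recommendations:

```python
# Sketch of fixed-size chunking with overlap for vector store / RAG
# pipelines. Adjacent chunks share `overlap` characters so that a
# sentence cut at a boundary still appears whole in one chunk.

def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list:
    """Split text into chunks of `chunk_size` characters, each
    overlapping the previous chunk by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "LangChain Hub collects prompts for building Generative Apps. " * 5
chunks = chunk_text(doc, chunk_size=80, overlap=16)
print(len(chunks), "chunks; first chunk:", chunks[0][:40])
```

A chunk-testing tool would then let you vary `chunk_size` and `overlap` and inspect how retrieval quality changes.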

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development Frameworks, Data-Centric latent spaces & more.
