About LangChain
LangChain is a popular, rapidly evolving framework for managing and interacting with large language models (LLMs): its features include support for memory, vector-based similarity search, an advanced prompt-templating abstraction, and much more.
LangChain comes in Python and JavaScript implementations; this section targets the Python one.
Info
Most of the examples in this section can run straight away as Colab notebooks, provided you have checked the prerequisites.
If you prefer to run in local Jupyter, set up the LangChain Python environment first.
Note
With the exception of the items marked as [Preview] below, all other components are in the latest LangChain release and can be used straight away after a `pip install langchain`. To experiment with the preview elements, however, the local environment setup installs our preview fork of LangChain throughout. If you are not interested in the preview components, a plain `pip install langchain` is all you need.
Available components
CassIO seamlessly integrates with LangChain, offering Cassandra-specific tools for many tasks. Almost all of the following examples can run as Colab notebooks straight away (check out the icon at the top of each page):
- A memory module for LLMs that uses Cassandra for storage;
- ... that can be used to "remember" the recent exchanges in a chat interaction;
- ... including keeping a summary of the whole past conversation.
- A facility for caching LLM responses on Cassandra, thereby saving on latency and tokens where possible (both the memory module and this cache are sketched in the example after this list).
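A minimal sketch of the two items above, assuming a locally reachable Cassandra cluster, an existing keyspace, and the `OPENAI_API_KEY` environment variable set (the keyspace and table names are illustrative):

```python
from cassandra.cluster import Cluster

from langchain.cache import CassandraCache
from langchain.globals import set_llm_cache
from langchain.llms import OpenAI
from langchain.memory import CassandraChatMessageHistory, ConversationBufferMemory

# Connect to Cassandra (adjust contact points / auth to your setup):
session = Cluster(["127.0.0.1"]).connect()
keyspace = "my_keyspace"  # illustrative; must exist already

# Chat memory persisted in a Cassandra table, keyed by a session id:
message_history = CassandraChatMessageHistory(
    session_id="user-123",
    session=session,
    keyspace=keyspace,
    table_name="chat_history",
)
memory = ConversationBufferMemory(chat_memory=message_history)

# Exact-match caching of LLM responses, also stored on Cassandra:
set_llm_cache(CassandraCache(session=session, keyspace=keyspace))

llm = OpenAI()
print(llm.predict("Tell me a one-line joke"))
# An identical call repeated later is served from the cache, saving tokens.
```

The `memory` object can then be plugged into a chain (e.g. a `ConversationChain`); the linked pages give complete walkthroughs.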
Additionally, the "Vector Search" capabilities being added to Cassandra / Astra DB enable another set of "semantically aware" tools:
- A cache of LLM responses that is insensitive to the exact way a request is phrased;
- A "semantic index" that can store a knowledge base and retrieve its relevant parts to buil the best answer to a given question ("QA use case");
- ... with support for metadata filtering to narrow down vector similarity queries;
- ... whose usage can be adapted to suit many specific needs;
- ... and that can be configured to retrieve pieces of information as diverse as possible, to maximize the information actually flowing into the answer.
- A "semantic memory" element for inclusion in LLM chat interactions, that can retrieve relevant past exchanges even if occurred in the far past.
Lastly, there is a set of components for zero-boilerplate prompt templating that uses Cassandra as the source of data. Note: these components are still in preview as of December 10th, 2023 and require installation from the fork; a rough illustration of the underlying idea follows the list.
- [Preview] Automatic injection of data from Cassandra into a prompt;
- [Preview] Automatic injection of data from a Feast feature store (e.g. backed by Cassandra) into a prompt;
- [Preview] A detour to look at the very engine powering "database-bound prompt templates" (useful for developing templates that automatically extract data from custom sources).
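The preview fork's actual API is not reproduced here (it may change; refer to the pages above once the fork is installed). Purely to illustrate the idea of a "database-bound" prompt, here is a rough plain-LangChain equivalent, with a hypothetical `users` table and made-up column names:

```python
from cassandra.cluster import Cluster
from langchain.prompts import PromptTemplate

# Hypothetical table: users(user_id PRIMARY KEY, user_name, color)
session = Cluster(["127.0.0.1"]).connect("my_keyspace")

prompt = PromptTemplate.from_template(
    "You are assisting {user_name}, whose favorite color is {color}.\n"
    "Question: {question}"
)

def render_prompt(user_id: str, question: str) -> str:
    # Fetch by hand the row that a database-bound template binds automatically:
    row = session.execute(
        "SELECT user_name, color FROM users WHERE user_id = %s",
        (user_id,),
    ).one()
    return prompt.format(user_name=row.user_name, color=row.color, question=question)
```

The preview components remove exactly this boilerplate: the template itself knows which rows to read for a given set of keys.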
This list will grow over time as new needs are addressed and the current extensions are refined.