Caching LLM responses¶
This notebook demonstrates how to use Cassandra for a basic prompt/response cache.
Such a cache prevents running an LLM invocation more than once for the very same prompt, thus saving on latency and token usage. Cache retrieval is based on an exact match of the prompt string, as will be shown.
from langchain.cache import CassandraCache
A database connection is needed. (If on a Colab, the only supported option is the cloud service Astra DB.)
# Ensure loading of database credentials into environment variables:
import os
from dotenv import load_dotenv
load_dotenv("../../../.env")
import cassio
Select your choice of database by editing this cell, if needed:
database_mode = "cassandra" # "cassandra" / "astra_db"
if database_mode == "astra_db":
    cassio.init(
        database_id=os.environ["ASTRA_DB_ID"],
        token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
        keyspace=os.environ.get("ASTRA_DB_KEYSPACE"),  # this is optional
    )
if database_mode == "cassandra":
    from cqlsession import getCassandraCQLSession, getCassandraCQLKeyspace
    cassio.init(
        session=getCassandraCQLSession(),
        keyspace=getCassandraCQLKeyspace(),
    )
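(The cqlsession module above is a small helper local to the example repository. Purely as an illustration, a minimal version could look like the sketch below, which assumes a reachable Cassandra cluster and reads illustrative CASSANDRA_CONTACT_POINTS / CASSANDRA_KEYSPACE environment variables; the actual helper may differ.)
# Illustrative sketch of a possible `cqlsession.py` (not the repository's actual helper):
import os
from cassandra.cluster import Cluster

def getCassandraCQLSession():
    # Connect using contact points from an (assumed) environment variable:
    contact_points = os.environ.get("CASSANDRA_CONTACT_POINTS", "127.0.0.1").split(",")
    return Cluster(contact_points).connect()

def getCassandraCQLKeyspace():
    # Keyspace name from an (assumed) environment variable:
    return os.environ["CASSANDRA_KEYSPACE"]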
Create a CassandraCache and configure it globally for LangChain:
import langchain
langchain.llm_cache = CassandraCache(
    session=None,   # None = use the session set up by cassio.init()
    keyspace=None,  # None = use the keyspace set up by cassio.init()
)
langchain.llm_cache.clear()
Below is the logic to instantiate the LLM of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider
llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)
if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    print('LLM from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
%%time
SPIDER_QUESTION_FORM_1 = "How many eyes do spiders have?"
# The first time, it is not yet in cache, so it should take longer
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 17.8 ms, sys: 1.74 ms, total: 19.5 ms
Wall time: 459 ms
'\n\nSpiders typically have eight eyes.'
%%time
# This time we expect a much shorter answer time
llm(SPIDER_QUESTION_FORM_1)
CPU times: user 1.73 ms, sys: 634 µs, total: 2.36 ms
Wall time: 2.46 ms
'\n\nSpiders typically have eight eyes.'
%%time
SPIDER_QUESTION_FORM_2 = "How many eyes do spiders generally have?"
# This will again take 1-2 seconds, being a different string
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 5.27 ms, sys: 3 ms, total: 8.26 ms
Wall time: 644 ms
'\n\nSpiders typically have eight eyes, arranged in two rows of four.'
Caching and Chat Models¶
The CassandraCache supports caching within chat-oriented LangChain abstractions such as ChatOpenAI as well:
(warning: the following is demonstrated with OpenAI only for the time being)
from langchain.chat_models import ChatOpenAI
chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0)
%%time
print(chat_llm.predict("Are there spiders with wings?"))
No, there are no spiders with wings. Spiders belong to the class Arachnida, which includes creatures with eight legs and no wings. They rely on their silk-producing abilities to create webs and catch prey, rather than flying.
CPU times: user 10.5 ms, sys: 2.37 ms, total: 12.9 ms
Wall time: 3.2 s
%%time
# Expect a much faster response:
print(chat_llm.predict("Are there spiders with wings?"))
No, there are no spiders with wings. Spiders belong to the class Arachnida, which includes creatures with eight legs and no wings. They rely on their silk-producing abilities to create webs and catch prey, rather than flying.
CPU times: user 4.38 ms, sys: 139 µs, total: 4.52 ms
Wall time: 4.3 ms
(Actually, every object that inherits from the LangChain Generation class can be seamlessly stored in and retrieved from this cache.)
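As a minimal sketch of what this means, assuming the standard LangChain cache interface of lookup(prompt, llm_string) and update(prompt, llm_string, generations), you can store and retrieve Generation objects directly (the prompt and llm_string below are made-up placeholders; in normal usage LangChain computes the llm_string for you):
from langchain.schema import Generation

# Store a Generation under an illustrative (prompt, llm_string) pair ...
langchain.llm_cache.update(
    "some prompt", "some llm string", [Generation(text="a cached answer")]
)
# ... and fetch it back with an exact-match lookup:
langchain.llm_cache.lookup("some prompt", "some llm string")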
Stale entry control¶
Time-To-Live (TTL)¶
You can configure a time-to-live (TTL) property on the cache, so that cached entries are automatically evicted after a certain amount of time.
Setting langchain.llm_cache to the following will make entries expire after one hour (supplying a custom table name is also demonstrated):
cacheWithTTL = CassandraCache(
    session=None,
    keyspace=None,
    table_name="langchain_llm_cache",
    ttl_seconds=3600,
)
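To put the TTL-equipped cache to use, you would then make it the global LangChain cache, just as was done at the beginning of this notebook:
langchain.llm_cache = cacheWithTTL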
Manual cache eviction¶
Alternatively, you can invalidate cached entries one at a time; to do so, you'll need to provide the very LLM the entry is associated with:
%%time
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 3.04 ms, sys: 0 ns, total: 3.04 ms
Wall time: 3.2 ms
'\n\nSpiders typically have eight eyes, arranged in two rows of four.'
langchain.llm_cache.delete_through_llm(SPIDER_QUESTION_FORM_2, llm)
%%time
llm(SPIDER_QUESTION_FORM_2)
CPU times: user 10.3 ms, sys: 824 µs, total: 11.1 ms
Wall time: 1.04 s
'\n\nSpiders typically have eight eyes, although some species may have as few as two or as many as twelve.'
Whole-cache deletion¶
As you might have seen at the beginning of this notebook, you can also clear the cache entirely, evicting all stored entries for all models at once:
langchain.llm_cache.clear()