Conversation Buffer Memory
The "base memory class" seen in the previous example is now put to use in a higher-level abstraction provided by LangChain:
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
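Before wiring in a real database, it may help to see what a buffer memory does conceptually. The following is a plain-Python sketch, not LangChain's actual classes: the memory wraps a chat history, stores every exchange, and replays the full transcript into the next prompt.

```python
# Plain-Python sketch of the buffer-memory idea (illustrative only,
# not LangChain's real classes).
class SketchChatHistory:
    def __init__(self):
        self.messages = []  # list of (role, text) pairs, oldest first

    def add_user_message(self, text):
        self.messages.append(("Human", text))

    def add_ai_message(self, text):
        self.messages.append(("AI", text))


class SketchBufferMemory:
    def __init__(self, chat_memory):
        self.chat_memory = chat_memory

    @property
    def buffer(self):
        # The whole conversation, concatenated verbatim: this is what
        # gets prepended to the next prompt.
        return "\n".join(
            f"{role}: {text}" for role, text in self.chat_memory.messages
        )


history = SketchChatHistory()
memory = SketchBufferMemory(chat_memory=history)
history.add_user_message("Hello, how can I roast an apple?")
history.add_ai_message("Preheat the oven to 375 F...")
print(memory.buffer)
```

The real `ConversationBufferMemory` follows the same pattern, with the chat history swapped out for a database-backed one.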
A database connection is needed. (If on a Colab, the only supported option is the cloud service Astra DB.)
In [2]:
# Ensure loading of database credentials into environment variables:
import os
from dotenv import load_dotenv
load_dotenv("../../../.env")
import cassio
Select your choice of database by editing this cell, if needed:
In [3]:
database_mode = "cassandra" # "cassandra" / "astra_db"
In [4]:
if database_mode == "astra_db":
    cassio.init(
        database_id=os.environ["ASTRA_DB_ID"],
        token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
        keyspace=os.environ.get("ASTRA_DB_KEYSPACE"),  # this is optional
    )
In [5]:
if database_mode == "cassandra":
    from cqlsession import getCassandraCQLSession, getCassandraCQLKeyspace
    cassio.init(
        session=getCassandraCQLSession(),
        keyspace=getCassandraCQLKeyspace(),
    )
In [6]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=None,
    keyspace=None,
    ttl_seconds=3600,
)
message_history.clear()
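The `ttl_seconds=3600` argument means each stored message expires one hour after being written (Cassandra applies a per-row TTL server-side), so stale conversations clean themselves up. A small plain-Python illustration of the expiry arithmetic, using the one-hour figure from the cell above:

```python
from datetime import datetime, timedelta

# Illustration only: Cassandra enforces the TTL server-side; this just
# shows the expiry arithmetic implied by ttl_seconds=3600.
ttl_seconds = 3600

def is_expired(written_at, now, ttl=ttl_seconds):
    return now >= written_at + timedelta(seconds=ttl)

t0 = datetime(2023, 7, 1, 12, 0, 0)
print(is_expired(t0, t0 + timedelta(minutes=30)))  # False: row still readable
print(is_expired(t0, t0 + timedelta(minutes=90)))  # True: row already purged
```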
Use in a ConversationChain
Create a Memory
The Cassandra-backed message history is plugged into a standard ConversationBufferMemory:
In [7]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
Language model
Below is the logic to instantiate the LLM of choice. We chose to leave it in the notebooks for clarity.
In [8]:
import os
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)

if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    llm = AzureOpenAI(temperature=0, model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
                      engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'])
    print('LLM from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
Create a chain
As the conversation proceeds, a growing history of past exchanges finds its way automatically into the prompt that the LLM receives:
In [9]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
In [10]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[10]:
' Hi there! Roasting an apple is a great way to bring out the flavor of the fruit. To roast an apple, preheat your oven to 375 degrees Fahrenheit. Cut the apple into thin slices, and spread them on a greased baking sheet. Sprinkle the apple slices with cinnamon, nutmeg, and brown sugar. Bake the apples in the oven for about 25 minutes, or until the edges are golden brown. Enjoy!'
In [11]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to bring out the flavor of the fruit. To roast an apple, preheat your oven to 375 degrees Fahrenheit. Cut the apple into thin slices, and spread them on a greased baking sheet. Sprinkle the apple slices with cinnamon, nutmeg, and brown sugar. Bake the apples in the oven for about 25 minutes, or until the edges are golden brown. Enjoy!
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[11]:
" Yes, you can definitely roast an apple over a bonfire! Start by spearing the apple onto a roasting stick. Hold the stick a few inches above the fire, and turn the apple every few minutes until it's evenly cooked. Once the apple looks and smells done, you can take it off the fire and enjoy!"
In [12]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to bring out the flavor of the fruit. To roast an apple, preheat your oven to 375 degrees Fahrenheit. Cut the apple into thin slices, and spread them on a greased baking sheet. Sprinkle the apple slices with cinnamon, nutmeg, and brown sugar. Bake the apples in the oven for about 25 minutes, or until the edges are golden brown. Enjoy!
Human: Can I do it on a bonfire?
AI: Yes, you can definitely roast an apple over a bonfire! Start by spearing the apple onto a roasting stick. Hold the stick a few inches above the fire, and turn the apple every few minutes until it's evenly cooked. Once the apple looks and smells done, you can take it off the fire and enjoy!
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[12]:
' Unfortunately, microwaving an apple would not bring out the same flavor as roasting it in an oven or over a bonfire. The texture could also be affected. I would not recommend microwaving an apple.'
In [13]:
message_history.messages
Out[13]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=' Hi there! Roasting an apple is a great way to bring out the flavor of the fruit. To roast an apple, preheat your oven to 375 degrees Fahrenheit. Cut the apple into thin slices, and spread them on a greased baking sheet. Sprinkle the apple slices with cinnamon, nutmeg, and brown sugar. Bake the apples in the oven for about 25 minutes, or until the edges are golden brown. Enjoy!', additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=" Yes, you can definitely roast an apple over a bonfire! Start by spearing the apple onto a roasting stick. Hold the stick a few inches above the fire, and turn the apple every few minutes until it's evenly cooked. Once the apple looks and smells done, you can take it off the fire and enjoy!", additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=' Unfortunately, microwaving an apple would not bring out the same flavor as roasting it in an oven or over a bonfire. The texture could also be affected. I would not recommend microwaving an apple.', additional_kwargs={}, example=False)]
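As the listing shows, every exchange so far sits in the database under the chosen `session_id`. A plain-dict sketch of that keying idea (illustrative only, not the real Cassandra-backed class): a new history object created with the same `session_id` sees the earlier messages.

```python
# Plain-dict stand-in for the database table (illustrative only):
# histories are keyed by session_id, so state survives re-instantiation.
_store = {}

class SketchSessionHistory:
    def __init__(self, session_id):
        self.session_id = session_id
        _store.setdefault(session_id, [])

    @property
    def messages(self):
        return _store[self.session_id]

    def add_message(self, text):
        _store[self.session_id].append(text)

    def clear(self):
        _store[self.session_id] = []

h1 = SketchSessionHistory("conversation-0123")
h1.add_message("Hello, how can I roast an apple?")
h2 = SketchSessionHistory("conversation-0123")  # same id => same stored state
print(h2.messages)
```

This is why the cells above call `clear()` right after construction: the session id may already hold messages from a previous run.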
Manually tinkering with the prompt
You can craft your own prompt (through a PromptTemplate object) and still take advantage of the chat-memory handling by LangChain:
In [14]:
from langchain import LLMChain, PromptTemplate
In [15]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template
)
In [16]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=None,
    keyspace=None,
)
f_message_history.clear()
In [17]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
In [18]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [19]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI:

> Finished chain.
Out[19]:
" Springs are a great time of year! The birds are singing, the flowers are blooming, and it's the perfect season for a good old fashioned bouncing around!"
In [20]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI: Springs are a great time of year! The birds are singing, the flowers are blooming, and it's the perfect season for a good old fashioned bouncing around!
Human: Er ... I mean the other type actually.
AI:

> Finished chain.
Out[20]:
' Oh, you mean the metal kind? Well, you can never go wrong with a good spring! They provide the perfect amount of tension and support for a wide variety of applications.'