Vector Similarity Search QA Quickstart¶
Set up a simple Question-Answering system with LangChain and CassIO, using Cassandra / Astra DB as the Vector Database.
NOTE: this uses Cassandra's "Vector Similarity Search" capability. Make sure you are connecting to a vector-enabled database for this demo.
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)
from langchain.docstore.document import Document
from langchain.document_loaders import TextLoader
The following line imports the Cassandra flavor of a LangChain vector store:
from langchain.vectorstores.cassandra import Cassandra
A database connection is needed. (If you are running this on Colab, the only supported option is the cloud service Astra DB.)
# Ensure loading of database credentials into environment variables:
import os
from dotenv import load_dotenv
load_dotenv("../../../.env")
import cassio
Select your choice of database by editing this cell, if needed:
database_mode = "cassandra" # "cassandra" / "astra_db"
if database_mode == "astra_db":
    cassio.init(
        database_id=os.environ["ASTRA_DB_ID"],
        token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
        keyspace=os.environ.get("ASTRA_DB_KEYSPACE"),  # this is optional
    )

if database_mode == "cassandra":
    from cqlsession import getCassandraCQLSession, getCassandraCQLKeyspace
    cassio.init(
        session=getCassandraCQLSession(),
        keyspace=getCassandraCQLKeyspace(),
    )
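Whichever mode you chose, cassio now holds a global session and keyspace that the vector store will pick up later. If you want, you can verify the connection right away (these are the same cassio.config helpers used in the inspection cell further below):
# Optional sanity check: resolve the session/keyspace registered by cassio.init()
print(cassio.config.resolve_session())
print(cassio.config.resolve_keyspace())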
Both an LLM and an embedding function are required.
Below is the logic to instantiate the LLM and embeddings of choice. We chose to leave it in the notebooks for clarity.
import os
from llm_choice import suggestLLMProvider
llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI', 'Azure_OpenAI' ... manually if you have credentials)
if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings
    llm = VertexAI()
    myEmbedding = VertexAIEmbeddings()
    print('LLM+embeddings from Vertex AI')
elif llmProvider == 'OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'open_ai'
    from langchain.llms import OpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = OpenAI(temperature=0)
    myEmbedding = OpenAIEmbeddings()
    print('LLM+embeddings from OpenAI')
elif llmProvider == 'Azure_OpenAI':
    os.environ['OPENAI_API_TYPE'] = 'azure'
    os.environ['OPENAI_API_VERSION'] = os.environ['AZURE_OPENAI_API_VERSION']
    os.environ['OPENAI_API_BASE'] = os.environ['AZURE_OPENAI_API_BASE']
    os.environ['OPENAI_API_KEY'] = os.environ['AZURE_OPENAI_API_KEY']
    from langchain.llms import AzureOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    llm = AzureOpenAI(
        temperature=0,
        model_name=os.environ['AZURE_OPENAI_LLM_MODEL'],
        engine=os.environ['AZURE_OPENAI_LLM_DEPLOYMENT'],
    )
    myEmbedding = OpenAIEmbeddings(
        model=os.environ['AZURE_OPENAI_EMBEDDINGS_MODEL'],
        deployment=os.environ['AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT'],
    )
    print('LLM+embeddings from Azure OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM+embeddings from OpenAI
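Before moving on, you can optionally exercise the embedding function on a short string, just to see the vector dimension it produces (a quick check, not needed for the rest of the demo; the sentence below is an arbitrary placeholder):
# Optional: embed a test string and inspect the resulting vector's dimension
sample_vector = myEmbedding.embed_query('This is a test sentence.')
print(f'Embedding dimension: {len(sample_vector)}')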
A minimal example¶
The following is a minimal usage of the Cassandra vector store. The store is created and filled in one go, and is then queried to retrieve relevant parts of the indexed text, which are stuffed into a prompt that is finally used to answer a question.
The following creates an "index creator", which knows about the type of vector store, the embedding to use, and how to preprocess the input text:
(Note: stores built with different embedding functions will need different tables. This is why we append the llmProvider name to the table name in the next cell.)
table_name = 'vs_test1_' + llmProvider
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Cassandra,
    embedding=myEmbedding,
    text_splitter=CharacterTextSplitter(
        chunk_size=400,
        chunk_overlap=0,
    ),
    vectorstore_kwargs={
        'session': None,   # None: fall back to the session set with cassio.init()
        'keyspace': None,  # None: fall back to the keyspace set with cassio.init()
        'table_name': table_name,
    },
)
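(For reference, the index creator will end up building the vector store roughly as sketched below; you do not need to run this yourself, it only shows what the vectorstore_kwargs above map to:)
# Equivalent manual construction of the store (a sketch, not needed for the demo):
my_store = Cassandra(
    embedding=myEmbedding,
    session=None,          # use the session registered by cassio.init()
    keyspace=None,         # use the keyspace registered by cassio.init()
    table_name=table_name,
)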
Load a local text (a short story by E. A. Poe will do):
loader = TextLoader('texts/amontillado.txt', encoding='utf8')
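(Optionally, you can peek at what was just loaded; this sketch simply prints how many documents the loader produced and the beginning of the text:)
# Optional: preview the loaded document(s)
docs_preview = loader.load()
print(f'{len(docs_preview)} document(s) loaded. First 80 characters:')
print(docs_preview[0].page_content[:80])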
This takes a few seconds to run, as it must calculate embedding vectors for a number of chunks of the input text:
# Note: certain LLM providers need a workaround to evaluate batch embeddings
# (as done in the next cell).
# As of 2023-06-29, Azure OpenAI would error with:
#     "InvalidRequestError: Too many inputs. The max number of inputs is 1"
if llmProvider == 'Azure_OpenAI':
    from langchain.indexes.vectorstore import VectorStoreIndexWrapper
    docs = loader.load()
    subdocs = index_creator.text_splitter.split_documents(docs)
    #
    print(f'subdocument {0} ...', end=' ')
    vs = index_creator.vectorstore_cls.from_documents(
        subdocs[:1],
        index_creator.embedding,
        **index_creator.vectorstore_kwargs,
    )
    print('done.')
    for sdi, sd in enumerate(subdocs[1:]):
        print(f'subdocument {sdi+1} ...', end=' ')
        vs.add_texts(texts=[sd.page_content], metadatas=[sd.metadata])
        print('done.')
    #
    index = VectorStoreIndexWrapper(vectorstore=vs)

if llmProvider != 'Azure_OpenAI':
    index = index_creator.from_loaders([loader])
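Either way, what you end up with is a VectorStoreIndexWrapper built on top of a Cassandra vector store; a one-line check along these lines will confirm it:
# Show the wrapper class and the underlying vector store class
print(type(index).__name__, '/', type(index.vectorstore).__name__)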
Check what's on DB¶
By way of demonstration, if you were to directly read the rows stored in your database table, this is what you would now find there (not that you'll ever have to, for LangChain and CassIO provide an abstraction on top of that):
c_session = cassio.config.resolve_session()
c_keyspace = cassio.config.resolve_keyspace()
cqlSelect = f'SELECT * FROM {c_keyspace}.{table_name} LIMIT 3;' # (Not a production-optimized query ...)
rows = c_session.execute(cqlSelect)
for row_i, row in enumerate(rows):
    print(f'\nRow {row_i}:')
    # depending on the cassIO version, the underlying Cassandra table can have a different structure ...
    try:
        # you are using the new cassIO 0.1.0+ : congratulations :)
        print(f'    row_id: {row.row_id}')
        print(f'    vector: {str(row.vector)[:64]} ...')
        print(f'    body_blob: {row.body_blob[:64]} ...')
        print(f'    metadata_s: {row.metadata_s}')
    except AttributeError:
        # please upgrade your cassIO to the latest version ...
        print(f'    document_id: {row.document_id}')
        print(f'    embedding_vector: {str(row.embedding_vector)[:64]} ...')
        print(f'    document: {row.document[:64]} ...')
        print(f'    metadata_blob: {row.metadata_blob}')

print('\n...')
Row 0:
    row_id: c38f50eb434e450ca6c8a8b2b582ca0d
    vector: [-0.009224753826856613, -0.01538837980479002, 0.0158158354461193 ...
    body_blob: He raised it to his lips with a leer. He paused and nodded to m ...
    metadata_s: {'source': 'texts/amontillado.txt'}

Row 1:
    row_id: bc4ff32a7f854eab89a9227152b2fdea
    vector: [-0.0034670669119805098, -0.01440946850925684, 0.027317104861140 ...
    body_blob: He had a weak point--this Fortunato--although in other regards h ...
    metadata_s: {'source': 'texts/amontillado.txt'}

Row 2:
    row_id: 29cb41c42bd94dfc936e4f3e68b73f42
    vector: [-0.01560208573937416, -0.0050873467698693275, 0.020469402894377 ...
    body_blob: I said to him--"My dear Fortunato, you are luckily met. How rem ...
    metadata_s: {'source': 'texts/amontillado.txt'}

...
Ask a question, get an answer¶
query = "Who is Luchesi?"
index.query(query, llm=llm)
' Luchesi is a connoisseur of wine who Fortunato believes can tell Amontillado from Sherry.'
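The index wrapper also offers a query_with_sources method, which returns the answer together with the metadata of the source documents it drew from; for example:
index.query_with_sources(query, llm=llm)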
Spawning a "retriever" from the index¶
You just saw how easily you can plug a Cassandra-backed Vector Index into a full question-answering LangChain pipeline.
But you can just as easily work at a slightly lower level: the following code spawns a VectorStoreRetriever from the index, for manual retrieval of documents related to a given query text. The results are instances of LangChain's Document class.
retriever = index.vectorstore.as_retriever(search_kwargs={
    'k': 2,
})

retriever.get_relevant_documents(
    "Check the motto of the Montresors"
)
[Document(page_content='"A huge human foot d\'or, in a field azure; the foot crushes a serpent\nrampant whose fangs are imbedded in the heel."\n\n"And the motto?"\n\n"_Nemo me impune lacessit_."\n\n"Good!" he said.', metadata={'source': 'texts/amontillado.txt'}), Document(page_content='He raised it to his lips with a leer. He paused and nodded to me\nfamiliarly, while his bells jingled.\n\n"I drink," he said, "to the buried that repose around us."\n\n"And I to your long life."\n\nHe again took my arm, and we proceeded.\n\n"These vaults," he said, "are extensive."\n\n"The Montresors," I replied, "were a great and numerous family."\n\n"I forget your arms."', metadata={'source': 'texts/amontillado.txt'})]
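If you also want a numeric score for each match, you can bypass the retriever and query the vector store directly, along these lines (the exact meaning of the score depends on the store's distance metric):
# Retrieve the top-2 matches together with their similarity scores
matches = index.vectorstore.similarity_search_with_score(
    "Check the motto of the Montresors",
    k=2,
)
for doc, score in matches:
    print(f'{score:.4f} - {doc.page_content[:60]} ...')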