
How to Build a Q&A LLM Application with LangChain and Gemini


In this tutorial, we will explore the integration of LangChain, a programming framework for using large language models (LLMs) in applications, with Google’s Gemini LLM to build a Q&A application based on a PDF.

Before going ahead with the tutorial, make sure you have an API key from Google AI Studio.

Step 1: Initializing the Environment

Create a Python virtual environment and install the required modules from the requirements.txt file.

python -m venv venv
source venv/bin/activate


Create the requirements.txt file with the below content:

pypdf
chromadb
google-generativeai
langchain-google-genai
langchain
langchain-community
langchain-text-splitters
jupyter


pip install -r requirements.txt


Set an environment variable so the code can pick up the API key implicitly.

export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
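Alternatively, you can set the key from within the notebook itself via the os module (replace the placeholder with your actual key):

import os

# Set the key in the process environment so the LangChain
# Google GenAI integrations can pick it up implicitly.
os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_KEY"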


Launch Jupyter Notebook and get started with the code.
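jupyter notebook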

Step 2: Import Modules

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_community.vectorstores import Chroma
import os

Step 3: Initialize the Models

This step initializes the LLM along with the embeddings model, which is responsible for converting text into lower-dimensional vectors.

llm = ChatGoogleGenerativeAI(model="gemini-pro")
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
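To verify that the API key and model are wired up correctly, you can send a quick test prompt; the invoke call returns a message object whose content attribute holds the generated text:

# Quick sanity check: the model should return a short greeting.
print(llm.invoke("Hello, Gemini!").content)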

Step 4: Load and Split the PDF

You can use any PDF of your choice, but for this tutorial we will load the employee handbook of a fictitious company.

The code below loads the PDF and splits it into chunks of 250 characters, with an overlap of 50 characters between each chunk.

loader = PyPDFLoader("handbook.pdf")

text_splitter = CharacterTextSplitter(
    separator=".",
    chunk_size=250,
    chunk_overlap=50,
    length_function=len,
    is_separator_regex=False,
)

pages = loader.load_and_split(text_splitter)
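At this point, pages is a list of LangChain Document objects. To sanity-check the split, you can inspect a chunk and its metadata:

# Inspect the result of the split.
print(f"Number of chunks: {len(pages)}")
print(pages[0].page_content)   # first chunk of text
print(pages[0].metadata)       # source file and page number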

Step 5: Initialize VectorDB and Configure Retriever

In the next step, we will convert each chunk of text into embeddings and store them in the Chroma vector database for retrieval. The parameter search_kwargs={"k": 5} defines the number of top matches to retrieve from a search.

vectordb = Chroma.from_documents(pages, embeddings)
retriever = vectordb.as_retriever(search_kwargs={"k": 5})
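In recent versions of LangChain, retrievers are runnables, so you can test the retriever in isolation before wiring it into a chain. The call below should return the five chunks closest to the query:

# Fetch the top 5 matching chunks for a sample query.
docs = retriever.invoke("personal leave")
for doc in docs:
    print(doc.page_content[:80])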

Step 6: Define the Retrieval Chain

In LangChain, components are assembled into a logical chain. In this scenario, the key components are the prompt, the LLM, and the retriever. Let’s create a chain from them.

template = """
You are a helpful AI assistant.
Answer based on the context provided. 
context: {context}
input: {input}
answer:
"""
prompt = PromptTemplate.from_template(template)
combine_docs_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)

Step 7: Invoke the Chain

Now it’s time to test whether all the pieces of the puzzle fit together. We will invoke the chain and check the response.

response = retrieval_chain.invoke({"input": "How do I apply for personal leave?"})
print(response)


As we can see, the answer key of the response dictionary contains the expected output from the chain. Let’s print it out.

response["answer"]


Below is the complete code, assembled from the steps above:

import os

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_community.vectorstores import Chroma

llm = ChatGoogleGenerativeAI(model="gemini-pro")
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")

loader = PyPDFLoader("handbook.pdf")

text_splitter = CharacterTextSplitter(
    separator=".",
    chunk_size=250,
    chunk_overlap=50,
    length_function=len,
    is_separator_regex=False,
)

pages = loader.load_and_split(text_splitter)

vectordb = Chroma.from_documents(pages, embeddings)
retriever = vectordb.as_retriever(search_kwargs={"k": 5})

template = """
You are a helpful AI assistant.
Answer based on the context provided.
context: {context}
input: {input}
answer:
"""
prompt = PromptTemplate.from_template(template)
combine_docs_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)

response = retrieval_chain.invoke({"input": "How do I apply for personal leave?"})
print(response["answer"])


