Examples with LangChain

Let's explore an example of integrating promptquality with a LangChain chain.

This example is pulled from the LangChain docs; most of the code is just the standard LangChain implementation of a simple chain.

If you are using Vertex AI through LangChain, concurrent requests to Vertex AI LLMs will fail to compute node outputs. Use a single worker for best results.
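
One way to enforce this is to cap concurrency in the run config when batching your inputs (a minimal sketch using LangChain's standard max_concurrency config option; chain, inputs, and prompt_handler refer to the objects built later in this guide):

# Limit the batch to one concurrent request so Vertex AI calls run serially
chain.batch(inputs, config={"callbacks": [prompt_handler], "max_concurrency": 1})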

Creating a simple chain with LangChain

First, let's build the components of our chain.

We want to ask a chat model a question about hallucinations, but we also want to give it the context needed to answer correctly, so naturally we set up a vector database and use retrieval-augmented generation (RAG). In this case, we'll pull the context from a Galileo blog post.

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from typing import List
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.document import Document

# Load text from webpage
loader = WebBaseLoader("https://www.rungalileo.io/blog/deep-dive-into-llm-hallucinations-across-generative-tasks")
data = loader.load()

# Split text into documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)

# Add text to vector db
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=splits, embedding=embedding)

# Create a retriever
retriever = vectordb.as_retriever()
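
Before wiring the retriever into a chain, you can query it directly to confirm it returns relevant chunks (a minimal sketch; the query string is just an illustration):

# Fetch the most relevant chunks for a sample question
docs = retriever.get_relevant_documents("What are hallucinations?")
print(len(docs), docs[0].page_content[:200])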

Now that we have the retriever, we can build our chain. The chain will:

  1. Take in a question.

  2. Feed that question to our retriever for some context.

  3. Fill out the prompt with the question and context.

  4. Feed the prompt to a chat model.

  5. Output the answer from the model.

def format_docs(docs: List[Document]) -> str:
    return "\n\n".join([d.page_content for d in docs])

template = """Answer the question based only on the following context:

{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
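
To sanity-check the chain before running a full experiment, you can invoke it on a single question (a minimal sketch; the question string is just an illustration):

# Ask one question to verify the chain end to end
answer = chain.invoke("What are hallucinations?")
print(answer)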

Integrating our chain with promptquality

Now, all we have to do to integrate with promptquality is add our callback. In just three lines of code, we can integrate promptquality into any existing LangChain experiment.

import promptquality as pq

# Create callback handler
prompt_handler = pq.GalileoPromptCallback(
    scorers=[pq.Scorers.latency, pq.Scorers.groundedness, pq.Scorers.factuality]
)

# Run your chain experiments across multiple inputs with the Galileo callback
inputs = [
    "What are hallucinations?",
    "What are intrinsic hallucinations?",
    "What are extrinsic hallucinations?"
]
chain.batch(inputs, config=dict(callbacks=[prompt_handler]))

# Publish the results of your run
prompt_handler.finish()

Adding Tools and Agents

More complex chains, including LangChain Tools and Agents, also integrate well with Galileo Evaluate.

Here's another example pulled from the LangChain docs.

Creating the tool from our retriever

First, we can take the retriever created above and convert it into a tool.

from langchain.tools.retriever import create_retriever_tool

# Create retriever tool
retriever_tool = create_retriever_tool(
    retriever,
    "hallucination_search",
    "Search for information about hallucinations. Use this tool for any questions about hallucinations.",
)
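
You can call the tool directly to confirm it surfaces the right context (a minimal sketch; the query string is just an illustration):

# Invoke the tool on its own with a sample query
print(retriever_tool.run("What are hallucinations?"))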

Now let's create a ReAct Agent that has access to this tool.

from langchain_openai import OpenAI
from langchain import hub
from langchain.agents import create_react_agent
from langchain.agents import AgentExecutor

llm = OpenAI()
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, [retriever_tool], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[retriever_tool])
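
Before adding the callback, you can run the agent on a single input to make sure it can reason its way to the tool (a minimal sketch):

# Run the agent once to verify it uses the retriever tool
result = agent_executor.invoke({"input": "What are hallucinations?"})
print(result["output"])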

Now that we have our agent, we're ready to integrate with promptquality in exactly the same way as above.

import promptquality as pq

# Create callback handler
prompt_handler = pq.GalileoPromptCallback(
    scorers=[pq.Scorers.latency, pq.Scorers.groundedness, pq.Scorers.factuality]
)

# Run your chain experiments across multiple inputs with the Galileo callback
inputs = [
    {"input": "What are hallucinations?"},
    {"input": "What are intrinsic hallucinations?"},
    {"input": "What are extrinsic hallucinations?"}
]
agent_executor.batch(inputs, config=dict(callbacks=[prompt_handler]))

# Publish the results of your run
prompt_handler.finish()
