Chain Sweeps

If you're building a multi-step workflow or chain (e.g. a RAG system or an Agent) and want to experiment with multiple combinations of parameters or versions at once, Chain Sweeps are your friend.

A Chain Sweep lets you execute multiple chains or workflows in bulk, iterating over different versions or parameters of your system.

First, you'll need to wrap your workflow or chain in a function. This function should take anything you want to experiment with as an argument (e.g. chunk size, embedding model, top_k).

import promptquality as pq
from promptquality import Scorers

# Login to Galileo.
pq.login("<YOUR_GALILEO_CONSOLE_URL>")

from langchain.chains import ConversationalRetrievalChain
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

documents = [Document(page_content=doc) for doc in source_documents]
questions = [...]

def rag_chain_executor(chunk_size: int, chunk_overlap: int, model_name: str) -> None:
    # Example of a RAG chain that uses the params in the function signature
    text_splitter = CharacterTextSplitter(
        chunk_size=chunk_size, chunk_overlap=chunk_overlap
    )
    texts = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings(openai_api_key="<OPENAI_API_KEY>")
    db = FAISS.from_documents(texts, embeddings)
    retriever = db.as_retriever()
    model = ChatOpenAI(openai_api_key="<OPENAI_API_KEY>", model_name=model_name)
    qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

    # Before running your chain, add the Galileo Prompt Callback on the invoke/run/batch step
    prompt_handler = pq.GalileoPromptCallback(
        scorers=[Scorers.sexist, Scorers.pii, Scorers.toxicity],
    )
    for question in questions:
        result = qa.invoke(
            {"question": question, "chat_history": []},
            config=dict(callbacks=[prompt_handler]),
        )

    # Call .finish() on your callback to upload your results to Galileo
    prompt_handler.finish()

Finally, call pq.sweep() with your chain's wrapper function and a dict containing all the different params you'd like to run your chain over:

        "chunk_size": [50, 100, 200],
        "chunk_overlap": [0, 25, 50],
        "model_name": ["gpt-3.5-turbo", "gpt-3.5-turbo-instruct", "gpt-4-0125-preview"],

See the PromptQuality Python Library Docs for the function docstrings.
