Adding Custom LLM APIs / Fine-Tuned LLMs

Showcases how to use Galileo with any LLM API or custom fine-tuned LLM that isn't supported out of the box.

Galileo comes pre-configured with dozens of LLM integrations across various platforms, including OpenAI, Azure OpenAI, SageMaker, and Bedrock.

However, if you're using an LLM service or custom model that Galileo doesn't support, you can still get everything Galileo has to offer by using custom loggers.

In this guide, we showcase how to query Anthropic's claude-3-sonnet model, which has no built-in Galileo integration, and then use Galileo's custom loggers to run deep evaluations and analysis on the results.

First, install the required libraries. In this example: Galileo's promptquality, LangChain, and the LangChain Anthropic integration.

pip install --upgrade promptquality langchain langchain-anthropic

Here's a simple code snippet showing how to query any LLM of your choice (in this case an Anthropic model) and log the results to Galileo.

import os
import promptquality as pq
from promptquality import NodeType, NodeRow
from langchain_anthropic import ChatAnthropic
from datetime import datetime
from uuid import uuid4

os.environ['GALILEO_CONSOLE_URL'] = "https://your.galileo.console.url"
os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic Key"

MY_PROJECT_NAME = "my-custom-logging-project"
MY_RUN_NAME = f'custom-logging-{datetime.now().strftime("%b %d %Y %H_%M_%S")}'

config = pq.login(os.environ['GALILEO_CONSOLE_URL'])

chat_model = ChatAnthropic(model="claude-3-sonnet-20240229")

query = "Tell me a joke about bears!"
response = chat_model.invoke(query)

root_id = uuid4()

# Log the LLM call as a single root node and submit the run to Galileo
root_node = NodeRow(
    node_id=root_id,
    chain_root_id=root_id,  # this node is the root of its own chain
    node_type=NodeType.llm,
    node_input=query,
    node_output=response.content,
)

pq.chain_run(rows=[root_node], project_name=MY_PROJECT_NAME, run_name=MY_RUN_NAME)

You should see output like the following:

👋 You have logged into 🔭 Galileo (https://your.galileo.console.url/) as
Processing complete!
Initial job complete, executing scorers asynchronously. Current status:
cost: Computing 🚧
toxicity: Computing 🚧
pii: Computing 🚧
latency: Done ✅
groundedness: Computing 🚧
🔭 View your prompt run on the Galileo console at: https://your.galileo.console.url/foo/bar/
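The single-node pattern above extends naturally to multi-step pipelines: each step becomes its own row, and all rows point back to the root via `chain_root_id`. As a minimal sketch of that linkage, the example below builds the rows as plain dicts so the structure is easy to inspect; the field names mirror `NodeRow`'s (`node_id`, `chain_root_id`, `node_type`, `node_input`, `node_output`), but the helper function and the exact shape of a retriever step are illustrative assumptions, not part of the promptquality API.

```python
from uuid import uuid4

def build_chain_rows(query: str, context: str, answer: str) -> list[dict]:
    """Sketch of the row layout for a two-step (retriever -> LLM) chain.

    Plain dicts stand in for promptquality NodeRow objects; this is an
    assumption-laden illustration of the chain_root_id linkage, not the
    library's API.
    """
    root_id = uuid4()  # one UUID identifies the whole chain
    return [
        # The chain node itself: node_id == chain_root_id marks it as the root.
        {"node_id": root_id, "chain_root_id": root_id,
         "node_type": "chain", "node_input": query, "node_output": answer},
        # Child steps reuse the root's UUID as chain_root_id.
        {"node_id": uuid4(), "chain_root_id": root_id,
         "node_type": "retriever", "node_input": query, "node_output": context},
        {"node_id": uuid4(), "chain_root_id": root_id,
         "node_type": "llm",
         "node_input": f"{context}\n\n{query}", "node_output": answer},
    ]

rows = build_chain_rows("Tell me a joke about bears!",
                        "<retrieved documents>", "<model answer>")
```

With real `NodeRow` objects in place of the dicts, the resulting list would be passed to `pq.chain_run` exactly as in the single-node example above.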

Example Notebook

