Getting started with Galileo Observe takes three steps:

  1. Create a project

     Go to your Galileo Console. Click the + icon on the top left and follow the steps to create your Observe project.

  2. Integrate Galileo into your code

     Galileo Observe can integrate via LangChain callbacks, our Python Logger, or RESTful APIs.

  3. Choose your Guardrail metrics

     Turn on the metrics you want to monitor your system with: select them from our Guardrail Metric store or register your own.


Authentication

To authenticate with Galileo, perform the following steps:

  1. Add your Galileo Console URL (GALILEO_CONSOLE_URL) to your environment variables:
import os

os.environ['GALILEO_CONSOLE_URL']="https://console.galileo.XYZ.com"
# Ask your Galileo admin for the URL if you don't know it
  2. Set your username and password OR your API key (recommended) in your environment variables:
import os

os.environ["GALILEO_USERNAME"]="Your Galileo registered email e.g. john@acme.com"
os.environ["GALILEO_PASSWORD"]="Your Galileo password"

# OR

os.environ["GALILEO_API_KEY"]="Your Galileo API key"

We recommend using an API key to authenticate. To create one, go to “API Keys” under your profile bubble.

Getting an API Key

To create an API key:

  1. Go to your Galileo Console settings

  2. Go to API Keys

  3. Select Create a new key


Logging via Python Logger

If you’re not using LangChain, you can use our Python Logger to log your data to Galileo.

First, create an ObserveWorkflows object with your existing project.

from galileo_observe import ObserveWorkflows

observe_logger = ObserveWorkflows(project_name="my_first_project")

Next, log your workflow.

from openai import OpenAI

client = OpenAI()

prompt = "Tell me a joke about Large Language Models"
model = "gpt-4o-mini"
temperature = 0.3

# Create your workflow to log to Galileo.
wf = observe_logger.add_workflow(input={"input": prompt}, name="CustomWorkflow")

# Initiate the chat call
chat_completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)
output_message = chat_completion.choices[0].message


# Log your LLM call step to Galileo.
wf.add_llm(
    input=[{"role": "user", "content": prompt}],
    output=output_message.model_dump(mode="json"),
    model=model,
    input_tokens=chat_completion.usage.prompt_tokens,
    output_tokens=chat_completion.usage.completion_tokens,
    total_tokens=chat_completion.usage.total_tokens,
    metadata={"env": "production"},
    name="ChatOpenAI",
)

# Conclude the workflow.
wf.conclude(output={"output": output_message.content})
# Log the workflow to Galileo.
observe_logger.upload_workflows()
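
Workflows aren't limited to a single LLM step. Below is a minimal sketch of a RAG-style workflow that logs a retrieval step before concluding. The add_retriever method and its parameters are an assumption based on Galileo's workflow-logging APIs, so check the galileo_observe reference for your version before relying on it.

from galileo_observe import ObserveWorkflows

observe_logger = ObserveWorkflows(project_name="my_first_project")

query = "What does Galileo Observe do?"
# Stand-in for the documents your retriever actually returns.
retrieved_docs = ["Galileo Observe monitors LLM apps in production."]

wf = observe_logger.add_workflow(input={"input": query}, name="RAGWorkflow")

# Hypothetical retriever step: logs the query and the retrieved documents.
# add_retriever and its parameters are assumptions; verify against your version.
wf.add_retriever(
    input=query,
    documents=retrieved_docs,
    name="Retriever",
)

# ...log your LLM step here with wf.add_llm(...), as in the example above...

wf.conclude(output={"output": "It monitors LLM apps in production."})
observe_logger.upload_workflows()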

Integrating with LangChain

We support integrating with both Python-based and TypeScript-based LangChain systems:

Integrating with your Python-based LangChain application is the easiest and recommended route. Just add GalileoObserveCallback(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation.

from galileo_observe import GalileoObserveCallback
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

monitor_handler = GalileoObserveCallback(project_name="YOUR_PROJECT_NAME")
chain.invoke({"foo": "bears"},
             config=dict(callbacks=[monitor_handler]))

The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
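
If you'd rather not pass the callback on every call, LangChain's with_config lets you bind it to the chain once so every subsequent invocation is logged. A minimal sketch, continuing the example above:

# Bind the Galileo callback to the chain itself.
logged_chain = chain.with_config(callbacks=[monitor_handler])

# Every invocation of logged_chain is now logged to Galileo.
logged_chain.invoke({"foo": "bears"})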

Logging through our REST APIs

If you are looking to log directly from the client (e.g. from JavaScript), you can log through our REST APIs. The Monitor API endpoints can be found in the Swagger docs for your environment. More instructions can be found here.
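
For orientation only, here is a rough Python sketch of what a REST call might look like. The endpoint path, payload shape, and auth header below are hypothetical placeholders, not the actual Monitor API contract; take the real routes and schemas from your environment's Swagger docs.

import os

import requests

console_url = os.environ["GALILEO_CONSOLE_URL"]
api_key = os.environ["GALILEO_API_KEY"]

# Hypothetical endpoint and payload: replace both with the route and
# schema documented in your environment's Swagger docs.
response = requests.post(
    f"{console_url}/api/observe/rows",
    headers={"Authorization": f"Bearer {api_key}"},  # auth scheme may differ
    json={
        "project_name": "my_first_project",
        "input": "Tell me a joke about Large Language Models",
        "output": "Why did the LLM cross the road? ...",
        "model": "gpt-4o-mini",
    },
)
response.raise_for_status()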


What’s next

Once you’ve integrated Galileo into your production app code, you can choose your Guardrail metrics.
