Getting Started

How to monitor your apps with Galileo Observe
Getting started with Galileo Observe involves three steps:
  1. Creating a project
  2. Integrating into your code
  3. Choosing and setting up your Guardrail metrics

Creating a Project

The first step is creating a project. From your Galileo Console, click on the big + icon on the top left of the screen and follow the steps to create your Observe project.

Integrating into your code

Galileo Observe can integrate via LangChain callbacks, our Python Logger, or our RESTful APIs.
To authenticate into Galileo, add your Galileo Console URL (GALILEO_CONSOLE_URL) to your environment variables. Additionally, set either your username (GALILEO_USERNAME) and password (GALILEO_PASSWORD) or your API key (GALILEO_API_KEY). These are used for authentication and to route traffic to the correct environment.
We recommend using an API key to authenticate. To create one, go to "API Keys" under your profile bubble.
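For example, the environment variables can be exported in your shell before starting your app (the URL and key values below are placeholders; substitute your environment's actual console URL and your own API key):

```shell
# Placeholder console URL -- replace with your environment's URL.
export GALILEO_CONSOLE_URL="https://console.your-galileo-domain.com"
# Recommended: authenticate with an API key rather than username/password.
export GALILEO_API_KEY="your-api-key"
```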


Integrating into your LangChain application is the easiest and recommended route: simply add MonitorHandler(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation.
```python
from llm_monitor import MonitorHandler
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

# Pass the handler in the callbacks of the chain invocation.
monitor_handler = MonitorHandler(project_name="YOUR_PROJECT_NAME")
chain.invoke({"foo": "bears"}, config={"callbacks": [monitor_handler]})
```
The MonitorHandler logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.

Python Logger

If you're not using LangChain, you can use our Python Logger to log your data to Galileo. Our Logger automatically batches and uploads your data in the background, ensuring the performance and reliability of your application is not compromised.
After importing llm_monitor and instantiating a monitor object, call log_prompt with your prompt, model, and model settings before your response is generated. Then call log_completion with the output, tokens (if available), and any metadata or tags you want to add.
```python
import openai
from llm_monitor import LLMMonitor

monitor = LLMMonitor(project_name="YOUR_PROJECT_NAME")

# Example prompt, model, and temperature
prompt = "Tell me a joke about Large Language Models"
model = "gpt-3.5-turbo"
temperature = 0.3

# Call log_prompt before response generation.
trace_id = monitor.log_prompt(prompt=prompt, model=model, temperature=temperature)

# Generic chat completion
chat_completion = openai.ChatCompletion.create(
    model=model, temperature=temperature,
    messages=[{"role": "user", "content": prompt}],
)

# Call log_completion after response generation.
monitor.log_completion(
    trace_id=trace_id,
    output_text=chat_completion["choices"][0]["message"]["content"],
    num_input_tokens=chat_completion["usage"]["prompt_tokens"],
    num_output_tokens=chat_completion["usage"]["completion_tokens"],
    num_total_tokens=chat_completion["usage"]["total_tokens"],
    user_metadata={"env": "production"},
    tags=["prod", "chat"],
)
```

Logging through our REST APIs

If you are looking to log directly from the client (e.g. from JavaScript), you can log through our REST APIs. The Monitor API endpoints can be found in the Swagger docs for your environment. More instructions can be found here.
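As a rough sketch only: the endpoint path, record fields, and header below are placeholders, not the real Monitor API schema; look up the exact routes and payload shape in the Swagger docs for your environment. A client-side logger might assemble and POST a record like this:

```python
import json
import os
import urllib.request

def build_record(prompt: str, output: str, model: str) -> dict:
    """Assemble one log record. Field names here are illustrative;
    the real schema is defined in your environment's Swagger docs."""
    return {"input_text": prompt, "output_text": output, "model": model}

record = build_record(
    "Tell me a joke about Large Language Models",
    "Why did the LLM cross the road? It predicted that token next.",
    "gpt-3.5-turbo",
)

# Placeholder route and auth header -- consult the Swagger docs for the
# real Monitor API endpoint and authentication scheme.
console_url = os.environ.get("GALILEO_CONSOLE_URL", "https://console.example.com")
req = urllib.request.Request(
    f"{console_url}/monitor/records",  # hypothetical path
    data=json.dumps({"records": [record]}).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment once the real route is filled in
```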
Once you've integrated Galileo into your production app code, you can choose your Guardrail metrics.