Quickstart
How to monitor your apps with Galileo Observe
Getting started with Galileo Observe is easy. It involves three steps:
1. Create a project.
2. Integrate Galileo in your code. Galileo Observe can integrate via LangChain callbacks, our Python Logger, or RESTful APIs.
3. Choose your Guardrail metrics. Turn on the metrics you want to monitor your system with: select from our Guardrail Metric store or register your own.
Authentication
To authenticate with Galileo, perform the following steps:
- Add your Galileo Console URL (GALILEO_CONSOLE_URL) to your environment variables.
import os
os.environ["GALILEO_CONSOLE_URL"] = "https://console.galileo.XYZ.com"
# Ask your Galileo admin for the URL if you don't know it.
- Set your username and password OR your API key (recommended) in your environment variables.
import os
os.environ["GALILEO_USERNAME"] = "Your Galileo registered email, e.g. john@acme.com"
os.environ["GALILEO_PASSWORD"] = "Your Galileo password"
# OR
os.environ["GALILEO_API_KEY"] = "Your Galileo API key"
We recommend using an API key to authenticate. To create one, go to “API Keys” under your profile bubble.
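If you prefer to keep credentials out of your source code and shell history, one common pattern (a general Python practice, not a Galileo requirement) is to store them in a local .env file and load it at startup with python-dotenv. A minimal sketch, assuming python-dotenv is installed:
# Your .env file would contain lines like:
#   GALILEO_CONSOLE_URL=https://console.galileo.XYZ.com
#   GALILEO_API_KEY=your-api-key
from dotenv import load_dotenv

load_dotenv()  # populates os.environ from .env before the Galileo client reads it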
Getting an API Key
To create an API key:
1. Go to your Galileo Console settings.
2. Go to API Keys.
3. Select Create a new key.
Logging via the Python Logger
If you’re not using LangChain, you can use our Python Logger to log your data to Galileo.
First, create an ObserveWorkflows object with your existing project.
from galileo_observe import ObserveWorkflows
observe_logger = ObserveWorkflows(project_name="my_first_project")
Next, log your workflow.
from openai import OpenAI

client = OpenAI()
prompt = "Tell me a joke about Large Language Models"
model = "gpt-4o-mini"
temperature = 0.3

# Create your workflow to log to Galileo.
wf = observe_logger.add_workflow(input={"input": prompt}, name="CustomWorkflow")

# Initiate the chat call.
chat_completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)
output_message = chat_completion.choices[0].message

# Log your LLM call step to Galileo.
wf.add_llm(
    input=[{"role": "user", "content": prompt}],
    output=output_message.model_dump(mode="json"),
    model=model,
    input_tokens=chat_completion.usage.prompt_tokens,
    output_tokens=chat_completion.usage.completion_tokens,
    total_tokens=chat_completion.usage.total_tokens,
    metadata={"env": "production"},
    name="ChatOpenAI",
)

# Conclude the workflow.
wf.conclude(output={"output": output_message.content})

# Log the workflow to Galileo.
observe_logger.upload_workflows()
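In a production app you will typically repeat this create/log/conclude/upload cycle for every request. As a rough illustration, here is a minimal helper built from exactly the calls shown above; the function name and its defaults are our own, not part of the SDK:
def monitored_chat(prompt: str, workflow_name: str = "CustomWorkflow") -> str:
    # Hypothetical convenience wrapper around the calls shown above.
    wf = observe_logger.add_workflow(input={"input": prompt}, name=workflow_name)
    chat_completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    message = chat_completion.choices[0].message
    wf.add_llm(
        input=[{"role": "user", "content": prompt}],
        output=message.model_dump(mode="json"),
        model=model,
        input_tokens=chat_completion.usage.prompt_tokens,
        output_tokens=chat_completion.usage.completion_tokens,
        total_tokens=chat_completion.usage.total_tokens,
        metadata={"env": "production"},
        name="ChatOpenAI",
    )
    wf.conclude(output={"output": message.content})
    observe_logger.upload_workflows()
    return message.content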
Integrating with LangChain
We support integrating into both Python-based and TypeScript-based LangChain systems.
Integrating into your Python-based LangChain application is the easiest and recommended route. Just add GalileoObserveCallback(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation.
from galileo_observe import GalileoObserveCallback
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

monitor_handler = GalileoObserveCallback(project_name="YOUR_PROJECT_NAME")
chain.invoke({"foo": "bears"}, config=dict(callbacks=[monitor_handler]))
The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
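If you would rather not pass the handler on every call, LangChain's runnable interface also lets you bind configuration once. A brief sketch (standard LangChain usage, not Galileo-specific):
# Bind the callback once so every subsequent invocation is monitored.
monitored_chain = chain.with_config(callbacks=[monitor_handler])
monitored_chain.invoke({"foo": "bears"})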
Logging through our REST APIs
If you are looking to log directly from the client (e.g. using JavaScript), you can log through our APIs. The Monitor API endpoints can be found in the Swagger docs for your environment. More instructions can be found here.
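As a rough illustration of the shape such a call takes, here is a hedged Python sketch. The endpoint path, payload fields, and auth header below are placeholders we invented for illustration; the real Monitor API contract comes from the Swagger docs for your environment:
import os

import requests

# Placeholder endpoint and payload: the actual paths, fields, and auth
# scheme come from your environment's Swagger docs, not from this example.
response = requests.post(
    f"{os.environ['GALILEO_CONSOLE_URL']}/api/...",  # placeholder path
    headers={"Authorization": f"Bearer {os.environ['GALILEO_API_KEY']}"},  # assumed auth scheme
    json={"input": "...", "output": "..."},  # placeholder record
)
response.raise_for_status()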
What’s next
Once you’ve integrated Galileo into your production app code, you can choose your Guardrail metrics.