Getting Started
How to monitor your apps with Galileo Monitor
Getting started with LLM Monitor involves three steps:
1. Creating a project
2. Integrating into your code
3. Choosing and setting up your Guardrail metrics
The first step is creating a project. From your Galileo Console, click the + icon at the top left of the screen and follow the steps to create your Monitoring project.
LLM Monitor can integrate via LangChain callbacks, through our Python Logger, or through RESTful APIs.
Integrating into your LangChain application is the easiest and recommended route. First, add your Galileo Console URL (`GALILEO_CONSOLE_URL`), username (`GALILEO_USERNAME`), and password (`GALILEO_PASSWORD`) to your environment variables. These are used for authentication and to route traffic to the correct environment. Then add `MonitorHandler(project_name="YOUR_PROJECT_NAME")` to your callbacks:
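If you prefer to set these variables from code rather than your shell, a minimal sketch (the values shown are placeholders for your own deployment):

```python
import os

# Placeholder values -- substitute your own console URL and credentials.
os.environ["GALILEO_CONSOLE_URL"] = "https://console.your-galileo-deployment.com"
os.environ["GALILEO_USERNAME"] = "you@example.com"
os.environ["GALILEO_PASSWORD"] = "your-password"
```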
```python
from llm_monitor import MonitorHandler
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(callbacks=[MonitorHandler(project_name="YOUR_PROJECT_NAME")])
```
The `MonitorHandler` logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
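To verify the integration, invoke the model once and check that the request shows up in your Monitor project. This is just a smoke-test sketch; `predict` is the standard LangChain call for chat models, and the prompt text is only an example:

```python
# Any call through this LLM now flows through the MonitorHandler callback.
response = llm.predict("Tell me a joke about Large Language Models")
print(response)
```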
If you're not using LangChain, you can use our Python Logger to log your data to Galileo. Our Logger automatically batches and uploads your data in the background, ensuring the performance and reliability of your application are not compromised.
First, add your Galileo Console URL (`GALILEO_CONSOLE_URL`), username (`GALILEO_USERNAME`), and password (`GALILEO_PASSWORD`) to your environment variables. These are used for authentication and to route traffic to the correct environment. After importing llm_monitor and instantiating a monitor object, call `log_prompt` with your prompt, model, and model settings before your response is generated. Then call `log_completion` with the output, tokens (if available), and any metadata or tags you want to add:
```python
import openai

from llm_monitor import LLMMonitor

monitor = LLMMonitor(project_name="YOUR_PROJECT_NAME")

# Example prompt, model, and temperature
prompt = "Tell me a joke about Large Language Models"
model = "gpt-3.5-turbo"
temperature = 0.3

# Call log_prompt before response generation.
trace_id = monitor.log_prompt(
    prompt=prompt,
    model=model,
    temperature=temperature,
)

# Generic chat completion
chat_completion = openai.ChatCompletion.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)

# Call log_completion after response generation.
monitor.log_completion(
    trace_id=trace_id,
    output_text=chat_completion["choices"][0]["message"]["content"],
    num_input_tokens=chat_completion["usage"]["prompt_tokens"],
    num_output_tokens=chat_completion["usage"]["completion_tokens"],
    num_total_tokens=chat_completion["usage"]["total_tokens"],
    finish_reason=chat_completion["choices"][0]["finish_reason"],
    user_metadata={"env": "production"},
    tags=["prod", "chat"],
)
```
If you are looking to log from the client (e.g. from JavaScript), you can log directly through our APIs. The Monitor API endpoints can be found in the Swagger docs for your environment. More instructions can be found here.
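As an illustration only, a direct API call might look like the sketch below. The endpoint path, payload fields, and auth scheme here are hypothetical; consult the Swagger docs for your environment for the real routes and schema. The same request can be made from JavaScript or any other HTTP client:

```python
import os

import requests

# Hypothetical endpoint and payload -- the actual routes, field names, and
# auth scheme are defined in the Swagger docs for your environment.
resp = requests.post(
    f"{os.environ['GALILEO_CONSOLE_URL']}/api/monitor/logs",  # hypothetical path
    auth=(os.environ["GALILEO_USERNAME"], os.environ["GALILEO_PASSWORD"]),
    json={
        "project_name": "YOUR_PROJECT_NAME",
        "prompt": "Tell me a joke about Large Language Models",
        "output_text": "...",
    },
)
resp.raise_for_status()
```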
Once you've integrated Galileo into your production app code, you can choose your Guardrail metrics.