
Logging Data via RESTful APIs

While we recommend integrating via Galileo's Langchain integration or our Python Logger, you can always log data via Galileo's RESTful APIs.

Why should I use the RESTful APIs?

Use them if:
  • You don't use Python (e.g. JavaScript, TypeScript, etc.)
  • You're integrating Galileo into a custom in-house prompt engineering tool
Don't use them if:
  • You're working in Python — the Langchain integration or the Python Logger is the recommended path
Logging data via our RESTful APIs is a two-step process:
  1. Authentication
  2. Logging

Authentication

To fetch an authentication token, send a POST request to /login with your username and password:
import requests

base_url = YOUR_BASE_URL  # see below for instructions to get your base_url

headers = {
    'accept': 'application/json',
    'Content-Type': 'application/x-www-form-urlencoded',
}
data = {
    'username': '{YOUR_USERNAME}',
    'password': '{YOUR_PASSWORD}',
}

response = requests.post(f'{base_url}/login', headers=headers, data=data)
access_token = response.json()["access_token"]
Reach out to us if you don't know your base_url. For most users, it is the same as their console URL with the word 'console' replaced by 'api' (e.g. http://www.console.galileo.myenterprise.com -> http://www.api.galileo.myenterprise.com).
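If your deployment follows that naming convention, a tiny helper can derive the API URL from the console URL. This is a sketch: the function name is ours, and the substitution only holds for deployments that follow the convention above — confirm your base_url with Galileo if in doubt.

```python
def derive_api_url(console_url: str) -> str:
    """Swap the first 'console' segment for 'api', per the convention above."""
    return console_url.replace("console", "api", 1)

base_url = derive_api_url("http://www.console.galileo.myenterprise.com")
# base_url is now "http://www.api.galileo.myenterprise.com"
```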

Logging

Once you have your auth token, you can start making ingestion calls to Galileo Observe.

Project ID

To log data, you'll need your project ID. Get your project ID by making a GET request to the /projects endpoint, or simply copy it from the URL in your browser window. This project ID is static and will never change. You only have to do this once.
headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f"Bearer {access_token}",
}
response = requests.get(
    f"{base_url}/projects",
    headers=headers,
    params={"project_name": "{YOUR_PROJECT_NAME}"},
)
project_id = response.json()[0]["id"]
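Indexing [0] on the response raises an opaque IndexError when no project matches the name you passed. A small guard (a sketch; the helper name is ours) fails with a clearer message instead:

```python
def first_project_id(projects: list) -> str:
    """Return the ID of the first project in a /projects response."""
    if not projects:
        raise ValueError(
            "No project matched; check YOUR_PROJECT_NAME and your permissions."
        )
    return projects[0]["id"]

# Example with a response shaped like the one above:
project_id = first_project_id([{"id": "abc-123"}])
```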

Structuring your records

Create an array of all the LLM calls you want to track. You can fire off individual requests or create batches. For each LLM call, create a dictionary with the following information:
{
    "records": [
        {
            "latency_ms": 894,  # Integer
            "status_code": 200,  # Integer
            "input_text": "This is a prompt.",  # String
            "output_text": "This is a response.",  # String
            "node_type": "llm",  # String
            "model": "gpt-3.5-turbo",  # String
            "num_input_tokens": 7,  # Integer
            "num_output_tokens": 8,  # Integer
            "output_logprobs": {},  # Optional. When available, logprobs are used to compute Uncertainty.
            "created_at": "2023-08-07T15:14:30.519922"  # Timestamp constructed in "%Y-%m-%dT%H:%M:%S" format
        }
    ]
}
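A small factory function can keep individual records consistent before you batch them up. This is a sketch: the function name and defaults are ours; the field names come from the example above.

```python
from datetime import datetime, timezone

def make_llm_record(input_text, output_text, model, latency_ms,
                    status_code=200, num_input_tokens=0,
                    num_output_tokens=0, output_logprobs=None):
    """Build one record dict in the shape shown above."""
    record = {
        "latency_ms": latency_ms,
        "status_code": status_code,
        "input_text": input_text,
        "output_text": output_text,
        "node_type": "llm",
        "model": model,
        "num_input_tokens": num_input_tokens,
        "num_output_tokens": num_output_tokens,
        # Timestamp in the "%Y-%m-%dT%H:%M:%S" format stated above.
        "created_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
    }
    if output_logprobs is not None:  # logprobs are optional
        record["output_logprobs"] = output_logprobs
    return record

records = [make_llm_record("This is a prompt.", "This is a response.",
                           "gpt-3.5-turbo", latency_ms=894,
                           num_input_tokens=7, num_output_tokens=8)]
```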

Chains, Agents, and other Multi-Step Workflows

Galileo allows you to trace your Chains, Agents, RAG, and other multi-step workflow executions. To do that, create one record per step or node in your system and set the node_type field accordingly. Its supported values are "llm", "chat", "chain", "tool", "agent", and "retriever".
Set the node_id, chain_id and chain_root_id fields to set the hierarchy. This will enable you to trace your executions.
  • node_id should be a unique, randomly generated ID (e.g. using uuid()).
  • chain_id is the node_id of your node's parent (i.e. the node that called it).
  • chain_root_id is the node_id of the overall executor node (the root of the chain or workflow).
Imagine the following system architecture:
├── Agent
│   └── Chain
│       ├── Retriever
│       └── LLM
The node_id of the "Agent" node is every node's chain_root_id. It is also the "Chain" node's chain_id. For the Retriever and LLM nodes, the "Chain" node's node_id is their chain_id.
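Wiring up the IDs for that architecture might look like the sketch below (only the ID fields and node_type are shown; the other record fields are omitted for brevity). Whether the root node's own chain_id should be set is not specified here, so we leave it out — an assumption to verify against your deployment.

```python
import uuid

def new_id() -> str:
    return str(uuid.uuid4())

agent_id = new_id()      # root: becomes every node's chain_root_id
chain_node_id = new_id()
retriever_id = new_id()
llm_id = new_id()

records = [
    {"node_id": agent_id, "chain_root_id": agent_id, "node_type": "agent"},
    {"node_id": chain_node_id, "chain_id": agent_id,
     "chain_root_id": agent_id, "node_type": "chain"},
    {"node_id": retriever_id, "chain_id": chain_node_id,
     "chain_root_id": agent_id, "node_type": "retriever"},
    {"node_id": llm_id, "chain_id": chain_node_id,
     "chain_root_id": agent_id, "node_type": "llm"},
]
```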

Retriever Records

If you're logging a Retriever step in your RAG application, make sure to set the node_type to "retriever" and encode all your documents/chunks into output_text (e.g. json.dumps(docs)).
Additionally, don't forget to include your node_id, chain_id and chain_root_id so that your executions can be tied together (see example below).
import json
import uuid

{
    "records": [
        {
            "node_id": str(uuid.uuid4()),  # unique for each node in the chain
            "chain_id": PARENT_NODE_ID,  # placeholder: the node_id of the parent node
            "chain_root_id": ROOT_NODE_ID,  # placeholder: the node_id of the chain's root node; the same for all nodes in the chain
            "latency_ms": 894,  # Integer
            "status_code": 200,  # Integer
            "input_text": "This is a retrieval query.",  # String
            "output_text": json.dumps(docs),  # String: your retrieved documents/chunks, JSON-encoded
            "node_type": "retriever",  # String
            "created_at": "2023-08-07T15:14:30.519922"  # Timestamp constructed in "%Y-%m-%dT%H:%M:%S" format
        }
    ]
}

Logging your records

Finally, make a POST request to the /projects/{project_id}/llm_monitor/ingest endpoint with your records:
headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f"Bearer {access_token}",
}
response = requests.post(
    f"{base_url}/projects/{project_id}/llm_monitor/ingest",
    headers=headers,
    json={"records": records},
)
Once you start sending requests, your data will appear on your Galileo Observe dashboard.
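Since records can be sent individually or in batches (see "Structuring your records" above), a simple chunking helper keeps request payloads bounded. A sketch — the batch size of 50 is an arbitrary assumption on our part, not a documented limit:

```python
def batched(records, size=50):
    """Yield successive slices of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Usage: one ingestion request per batch, e.g.
# for batch in batched(records):
#     requests.post(f"{base_url}/projects/{project_id}/llm_monitor/ingest",
#                   headers=headers, json={"records": batch})

batches = list(batched(list(range(120)), size=50))
# 120 records -> batches of 50, 50, and 20
```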