Example code - Monitor App

An example app that asks an LLM trivia questions. We hook Galileo's monitoring tool into this app with one line of code, shown in isolation below.
You can use a Jupyter notebook, VS Code, or any other Python environment for this.
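That one line is the MonitorHandler callback passed to the LangChain chat model. As a minimal sketch of just the hookup (it assumes the Galileo and OpenAI environment variables from the setup steps below are already set):

from langchain.chat_models import ChatOpenAI
from llm_monitor import MonitorHandler

# The one-line integration: pass MonitorHandler as a LangChain callback.
llm = ChatOpenAI(
    temperature=0,
    callbacks=[MonitorHandler(project_name="my_demo_monitor_project")],
)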

In your Python environment, install the three required libraries with the command below:

pip install llm-monitor openai langchain
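Here, llm-monitor provides the MonitorHandler callback that ships data to Galileo, openai is the model client, and langchain wires the prompt and model together.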

Set these five variables in your Python IDE

YOUR_GALILEO_CONSOLE_URL = "YOUR_GALILEO_CONSOLE_URL_GOES_HERE"
GALILEO_USERNAME = "YOUR_GALILEO_USERNAME_GOES_HERE"
GALILEO_PASSWORD = "YOUR_GALILEO_PASSWORD_GOES_HERE"
YOUR_OPEN_AI_KEY = "YOUR_OPEN_AI_KEY_GOES_HERE"
YOUR_PROJECT_NAME = "my_demo_monitor_project"
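If you'd rather not hardcode credentials in the script, one alternative is to read them from your shell environment instead. A minimal sketch, assuming you've exported variables under these (placeholder) names:

import os

# Placeholder environment variable names; use whatever your setup exports.
YOUR_GALILEO_CONSOLE_URL = os.environ["GALILEO_CONSOLE_URL"]
GALILEO_USERNAME = os.environ["GALILEO_USERNAME"]
GALILEO_PASSWORD = os.environ["GALILEO_PASSWORD"]
YOUR_OPEN_AI_KEY = os.environ["OPENAI_API_KEY"]
YOUR_PROJECT_NAME = "my_demo_monitor_project"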

Go to the Galileo console and create a new project

  1. Hit "New Project" (top left)
  2. Give your project a name like "my_demo_monitor_project"
  3. Choose "LLM Monitoring" as the Task Type

Copy and paste the monitoring app code below

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from llm_monitor import MonitorHandler
import os
import random
import time

# Point the monitor at your Galileo console and authenticate.
os.environ["GALILEO_CONSOLE_URL"] = YOUR_GALILEO_CONSOLE_URL
os.environ["OPENAI_API_KEY"] = YOUR_OPEN_AI_KEY
os.environ["GALILEO_USERNAME"] = GALILEO_USERNAME
os.environ["GALILEO_PASSWORD"] = GALILEO_PASSWORD

class MonitoringApp:
    def run_llm(self, llm, user_prompt):
        prompt = PromptTemplate.from_template(
            "Answer the following question: {user_prompt}"
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        return chain.run(user_prompt=user_prompt)

app = MonitoringApp()

# The one line that hooks up Galileo: MonitorHandler as a LangChain callback.
llm = ChatOpenAI(
    temperature=0,
    callbacks=[MonitorHandler(project_name=YOUR_PROJECT_NAME)],
)

questions = [
    "Why is the sky blue?",
    "How do magnets work?",
    "Why do apples fall from trees?",
    "What is gravity?",
    "How does the moon affect tides?",
    "Why is the ocean salty?",
    "What causes thunder?",
    "How do plants make food?",
    "Why do we have seasons?",
    "How do rainbows form?",
]

# Ask the LLM a random question every 5 seconds, forever.
while True:
    question = random.choice(questions)
    print(f"Querying the LLM: {question}")
    app.run_llm(llm, question)
    time.sleep(5)
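Leave the script running: every 5 seconds it picks a random question, runs it through the chain, and the MonitorHandler callback logs the request and response to your Galileo project. Open the project in the Galileo console to watch the traffic come in; stop the loop with Ctrl+C.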