Example code - Monitor App

An example app that asks an LLM trivia questions. Hooking the Galileo Observe tool into this app takes a single line of code.

You can use a Jupyter Notebook, VSCode, or any other Python environment for this.

In your Python environment, install the three required libraries with the command below:

pip install galileo-observe openai langchain

Set these four variables in your Python IDE:

YOUR_GALILEO_CONSOLE_URL = "YOUR_GALILEO_CONSOLE_URL_GOES_HERE"
YOUR_GALILEO_API_KEY = "YOUR_GALILEO_API_KEY_GOES_HERE"
YOUR_OPEN_AI_KEY = "YOUR_OPEN_AI_KEY_GOES_HERE"
YOUR_PROJECT_NAME = "my_demo_monitor_project"
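Before running anything, it can help to confirm you actually replaced the placeholder strings. A minimal sketch (the `check_config` helper is illustrative, not part of the Galileo SDK):

```python
# Illustrative helper: report any settings still holding a *_GOES_HERE placeholder.
def check_config(settings: dict) -> list:
    """Return the names of settings whose value is still a placeholder."""
    return [name for name, value in settings.items() if value.endswith("_GOES_HERE")]

unset = check_config({
    "YOUR_GALILEO_CONSOLE_URL": "YOUR_GALILEO_CONSOLE_URL_GOES_HERE",  # not yet replaced
    "YOUR_GALILEO_API_KEY": "abc123",                                  # replaced
    "YOUR_PROJECT_NAME": "my_demo_monitor_project",
})
print(unset)  # → ['YOUR_GALILEO_CONSOLE_URL']
```

Any name printed here still needs a real value before the monitoring app will authenticate.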

Go to the Galileo console and create a new project:

  1. Hit "New Project" (top left)

  1. Give your project a name like "my_demo_monitor_project"

  1. Choose "Observe" as the Task Type

Copy and paste the monitoring app code below:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from galileo_observe import GalileoObserveCallback

import os
import random
import time

os.environ["GALILEO_CONSOLE_URL"] = YOUR_GALILEO_CONSOLE_URL
os.environ["OPENAI_API_KEY"] = YOUR_OPEN_AI_KEY
os.environ["GALILEO_API_KEY"] = YOUR_GALILEO_API_KEY

class MonitoringApp:

    def run_llm(self, llm, user_prompt):
        prompt = PromptTemplate.from_template("Answer the following question: {user_prompt}")
        chain = LLMChain(llm=llm, prompt=prompt)
        return chain.run(user_prompt=user_prompt)

app = MonitoringApp()

llm = ChatOpenAI(
    temperature=0,
    callbacks=[GalileoObserveCallback(project_name=YOUR_PROJECT_NAME)],
)

questions = [
    "Why is the sky blue?",
    "How do magnets work?",
    "Why do apples fall from trees?",
    "What is gravity?",
    "How does the moon affect tides?",
    "Why is the ocean salty?",
    "What causes thunder?",
    "How do plants make food?",
    "Why do we have seasons?",
    "How do rainbows form?"
]

while True:
    question = random.choice(questions)
    print(f"Querying the LLM: {question}")
    app.run_llm(llm, question)
    time.sleep(5)
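While testing, you may prefer a bounded loop to the infinite `while True` above. A minimal sketch of the same question-picking logic, with the LLM call stubbed out so it runs without any API keys:

```python
import random

# Hypothetical bounded variant of the monitoring loop, for a quick smoke test.
questions = [
    "Why is the sky blue?",
    "How do magnets work?",
    "What is gravity?",
]

random.seed(42)  # fixed seed so runs are reproducible
picks = [random.choice(questions) for _ in range(3)]  # 3 iterations, not infinite
for question in picks:
    print(f"Querying the LLM: {question}")
    # app.run_llm(llm, question)  # enable once your keys are configured
```

Swap the commented line back in (and restore `time.sleep`) once your Galileo and OpenAI keys are set.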
