Integrating with your Python-based LangChain application is the easiest and recommended route. Just add GalileoObserveCallback(project_name="YOUR_PROJECT_NAME") to the callbacks of your chain invocation:
```python
from galileo_observe import GalileoObserveCallback
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI()
chain = prompt | model

monitor_handler = GalileoObserveCallback(project_name="YOUR_PROJECT_NAME")

chain.invoke({"foo": "bears"}, config=dict(callbacks=[monitor_handler]))
```
The GalileoObserveCallback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
Integrating with your TypeScript-based LangChain application is just as simple: add a GalileoObserveCallback object to the callbacks of your chain invocation.
Add the callback {callbacks: [observe_callback]} in the invoke step of your application:
```typescript
const result = await chain.invoke(
  { question: "What is the powerhouse of the cell?" },
  { callbacks: [observe_callback] }
);
```
The GalileoObserveCallback callback logs your input, output, and relevant statistics back to Galileo, where additional evaluation metrics are computed.
Non-Langchain: Custom Logging via Python Logger
If you're not using LangChain, you can use our Python Logger to log your data to Galileo. The Logger automatically batches and uploads data asynchronously in the background, ensuring the performance and reliability of your application are not compromised.
Use log_node_start() and log_node_completion() to log your prompt, model, and hyperparameters, and the model's response, respectively, as shown below:
```python
import json

from openai import OpenAI

from galileo_observe import GalileoObserve
from galileo_observe.schema.transaction import TransactionRecordType

client = OpenAI()
observe = GalileoObserve(project_name="llm_monitor_test_1", version="manual_app_v1")

prompt = "Tell me a joke about Large Language Models"
model = "gpt-3.5-turbo"
temperature = 0.3

# Start a new chain
chain_id = observe.log_node_start(
    node_type=TransactionRecordType.chain,
    input_text=json.dumps({"input": prompt}),
    constructor="CustomChain",
)

# Start a chat trace
node_id = observe.log_node_start(
    node_type=TransactionRecordType.chat,
    chain_id=chain_id,
    input_text=prompt,
    model=model,
    temperature=temperature,
    constructor="ChatOpenAI",
    user_metadata={"env": "production"},
    tags=["prod", "chat"],
)

# Initiate the chat call
chat_completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    temperature=temperature,
)

# Log the chat completion
observe.log_node_completion(
    node_id=node_id,
    output_text=chat_completion.choices[0].message.content,
    num_input_tokens=chat_completion.usage.prompt_tokens,
    num_output_tokens=chat_completion.usage.completion_tokens,
    num_total_tokens=chat_completion.usage.total_tokens,
    finish_reason=chat_completion.choices[0].finish_reason,
)

# Log the chain completion
observe.log_node_completion(
    node_id=chain_id,
    finish_reason="chain_end",
    output_text=json.dumps({"output": chat_completion.choices[0].message.content}),
)
```
Logging through our REST APIs
If you are looking to log directly from the client (e.g. from JavaScript), you can log through our REST APIs. The Monitor API endpoints can be found in the Swagger docs for your environment. More instructions can be found here.
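As a rough illustration, the shape of such a request might look like the sketch below. Note that the endpoint path, authentication header, and payload fields shown here are assumptions for illustration only — consult the Swagger docs for your environment for the actual paths and record schema:

```python
import json
import urllib.request

# Placeholders — substitute your environment's console URL and API token.
GALILEO_URL = "https://console.YOUR_ENVIRONMENT.example.com"
API_TOKEN = "YOUR_API_TOKEN"

# Hypothetical transaction record; the real schema is defined in the
# Swagger docs for your environment.
record = {
    "input_text": "Tell me a joke about Large Language Models",
    "output_text": "Why did the LLM cross the road? ...",
    "model": "gpt-3.5-turbo",
    "num_input_tokens": 12,
    "num_output_tokens": 10,
}

# Build the POST request; "/observe/ingest" is a hypothetical path.
request = urllib.request.Request(
    f"{GALILEO_URL}/observe/ingest",
    data=json.dumps(record).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)

# urllib.request.urlopen(request)  # uncomment to actually send
```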