llm_monitor package

Submodules

llm_monitor.handlers module

class MonitorHandler(project_name, *args, **kwargs)

Bases: BaseCallbackHandler
LangChain callback handler for LLM Monitoring
  • Parameters: project_name (str) – Name of the project to log to

timers: Dict[str, Dict[str, float]] = {}

records: Dict[str, TransactionRecord] = {}

on_llm_start(serialized, prompts, **kwargs)

Run when LLM starts running.
  • Return type: Any

on_chat_model_start(serialized, messages, **kwargs)

Run when Chat Model starts running.
  • Return type: Any

on_llm_end(response, **kwargs)

Run when LLM ends running.
  • Return type: Any

on_llm_error(error, **kwargs)

Run when LLM errors.
  • Return type: Any
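
For reference, a minimal sketch of attaching the handler to a LangChain LLM. The OpenAI wrapper, project name, and prompt are illustrative, and an OpenAI API key is assumed to be configured in the environment:

    from langchain.llms import OpenAI
    from llm_monitor import MonitorHandler

    # Every call made through this LLM triggers the on_llm_* hooks
    # above, logging the request to the named project.
    handler = MonitorHandler(project_name="my-project")
    llm = OpenAI(callbacks=[handler])

    llm("Say hello.")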

llm_monitor.monitor module

class LLMMonitor(project_name, *args, **kwargs)

Bases: object
Initializes LLM Monitor
  • Parameters: project_name (str) – The name of the project to log to

timers: Dict[str, Dict[str, float]] = {}

records: Dict[str, TransactionRecord] = {}

log_prompt(prompt, model, temperature, parent_trace_id)

Logs the beginning of an LLM request
  • Parameters:
    • prompt (str) – Prompt text as a string
    • model (str) – Name of the model being prompted
    • temperature (Optional[float]) – Temperature setting passed to the LLM
    • parent_trace_id (Optional[str]) – ID of the parent trace, if any (e.g. a chain)
  • Returns: ID of the trace being initiated
  • Return type: str
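
A minimal sketch of starting a trace by hand; the project name, model, and settings are illustrative:

    from llm_monitor import LLMMonitor

    monitor = LLMMonitor(project_name="my-project")

    # The returned trace ID ties the eventual completion (or error)
    # back to this prompt.
    trace_id = monitor.log_prompt(
        prompt="Say hello.",
        model="gpt-3.5-turbo",
        temperature=0.7,
        parent_trace_id=None,
    )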

log_completion(trace_id, output_text, num_input_tokens, num_output_tokens, num_total_tokens, finish_reason=None, status_code=None, user_metadata=None, tags=None)

Logs the completion of the LLM request
  • Parameters:
    • trace_id (str) – ID of the trace started with log_prompt()
    • output_text (str) – Completion text from the LLM response
    • num_input_tokens (int) – Number of input tokens
    • num_output_tokens (int) – Number of output tokens
    • num_total_tokens (int) – Total number of tokens
    • finish_reason (Optional[str]) – Finish reason from the LLM
    • status_code (Optional[int]) – Status code of the API call to the LLM
    • user_metadata (Optional[Dict[str, Any]]) – User-defined metadata as key-value pairs
    • tags (Optional[List[str]]) – User-defined tags as a list of strings
  • Return type: None
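
Continuing the sketch from log_prompt(), a hedged example of closing out the trace; the output text, token counts, and metadata are illustrative:

    # Record the response against the trace opened by log_prompt().
    monitor.log_completion(
        trace_id=trace_id,
        output_text="Hello!",
        num_input_tokens=3,
        num_output_tokens=2,
        num_total_tokens=5,
        finish_reason="stop",
        status_code=200,
        user_metadata={"env": "dev"},
        tags=["example"],
    )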

log_error(trace_id, error_message, status_code)

Logs an error from an LLM call
  • Parameters:
    • trace_id (str) – ID of the trace started with log_prompt()
    • error_message (str) – Error message returned from the LLM
    • status_code (Optional[int]) – Status code of the API request to the LLM
  • Return type: None
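
If the call fails instead, the same trace can be closed with an error; the message and status code are illustrative:

    # Record a failure against the trace opened by log_prompt().
    monitor.log_error(
        trace_id=trace_id,
        error_message="Rate limit exceeded",
        status_code=429,
    )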

Module contents

LLM Monitor

The top-level package re-exports MonitorHandler and LLMMonitor, which are documented in the module sections above.
