LLM Monitor

Package
Bases: BaseCallbackHandler

LangChain callback handler for LLM Monitoring.
- Parameters: project_name (str) – Name of the project to log to

on_llm_start()
Run when LLM starts running.
- Return type: Any

on_chat_model_start()
Run when Chat Model starts running.
- Return type: Any

on_llm_end()
Run when LLM ends running.
- Return type: Any

on_llm_error()
Run when LLM errors.
- Return type: Any
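A minimal usage sketch follows. This page does not show the handler's class name or import path, so MonitorHandler and the llm_monitor module below are assumptions; only the project_name parameter is documented above. Once the handler is attached to a LangChain LLM, the hooks above fire automatically.

```python
# Hypothetical sketch: MonitorHandler and the llm_monitor import path are
# assumptions; only project_name is documented on this page.
from llm_monitor import MonitorHandler  # assumed import path

from langchain.llms import OpenAI

# Attach the handler so every call is logged to the named project.
handler = MonitorHandler(project_name="my-project")
llm = OpenAI(callbacks=[handler])

# on_llm_start fires before the request and on_llm_end after it completes;
# on_llm_error fires instead if the call raises.
llm("What is the capital of France?")
```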
Bases: object

Initializes the LLM Monitor.
- Parameters: project_name (str) – The name of the project to log to
log_prompt()
Logs the beginning of an LLM request.
- Parameters:
- prompt (str) – Prompt text as a string
- model (str) – Name of the model being prompted
- temperature (Optional[float]) – Temperature setting being passed to the LLM
- parent_trace_id (Optional[str]) – ID of the parent trace if there is one, e.g. a chain
- Returns: ID of the trace being initiated
- Return type: str
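For manual instrumentation, a sketch of starting a trace, assuming the monitor class is named LLMMonitor (this page describes it only as "LLM Monitor"); log_prompt() and its parameters are as documented above:

```python
from llm_monitor import LLMMonitor  # class name and import path assumed

monitor = LLMMonitor(project_name="my-project")

# Start a trace for one LLM request; log_prompt() returns the trace ID,
# which ties the eventual completion or error back to this prompt.
trace_id = monitor.log_prompt(
    prompt="Summarize the following article ...",
    model="gpt-3.5-turbo",
    temperature=0.0,
)
```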
Logs the completion of the LLM request.
- Parameters:
- trace_id (str) – ID of the trace started with log_prompt()
- output_text (str) – Completion text from the LLM response
- num_input_tokens (int) – Number of input tokens
- num_output_tokens (int) – Number of output tokens
- num_total_tokens (int) – Total number of tokens
- finish_reason (Optional[str]) – Finish reason from the LLM
- status_code (Optional[int]) – Status code of the API call to the LLM
- user_metadata (Optional[Dict[str, Any]]) – User-defined metadata as key-value pairs
- tags (Optional[List[str]]) – User-defined tags as a list of strings
- Return type: None
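Continuing the sketch above, a successful response would be logged against the same trace ID. The method name log_completion is an assumption based on the description; the keyword arguments follow the parameter list above:

```python
# log_completion is an assumed method name; parameters match the list above.
monitor.log_completion(
    trace_id=trace_id,
    output_text="Paris is the capital of France.",
    num_input_tokens=12,
    num_output_tokens=8,
    num_total_tokens=20,
    finish_reason="stop",
    status_code=200,
    user_metadata={"env": "dev"},
    tags=["geography"],
)
```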
Logs an error from an LLM call.
- Parameters:
- trace_id (str) – ID of the trace started with log_prompt()
- error_message (str) – Error message returned from the LLM
- status_code (Optional[int]) – Status code of the API request to the LLM
- Return type: None
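If the request fails instead, the error path would look like the following; log_error is likewise an assumed method name:

```python
# log_error is an assumed method name; parameters match the list above.
monitor.log_error(
    trace_id=trace_id,
    error_message="Rate limit exceeded",
    status_code=429,
)
```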