Galileo Guardrail Metrics
Understand Galileo's Guardrail Metrics in LLM Studio
Galileo has built a menu of Guardrail Metrics to help you evaluate and observe your generative AI applications. These metrics are tailored to your use case and designed to help you ensure your application's quality and behavior. The Scorer identifier for each metric is listed immediately below it.
Galileo's Guardrail Metrics combine industry-standard metrics with metrics developed by Galileo's in-house ML Research Team.
Output Quality Metrics
Correctness
Scorers.factuality
Uncertainty: Added by default
BLEU: Added by default
ROUGE-1: Added by default
RAG Quality Metrics
Context Adherence Basic:
Scorers.context_adherence_basic
Context Adherence Plus:
Scorers.context_adherence_plus
Completeness Basic:
Scorers.completeness_basic
Completeness Plus:
Scorers.completeness_plus
Chunk Attribution Basic:
Scorers.chunk_attribution_utilization_basic
Chunk Attribution Plus:
Scorers.chunk_attribution_utilization_plus
Chunk Utilization Basic:
Scorers.chunk_attribution_utilization_basic
Chunk Utilization Plus:
Scorers.chunk_attribution_utilization_plus
Context Relevance
Scorers.context_relevance
Input Quality Metrics
Prompt Perplexity
Scorers.prompt_perplexity
Safety Metrics
Input & Output PII
Scorers.pii
Input & Output Tone
Scorers.tone
Input & Output Toxicity
Scorers.toxicity
Input & Output Sexism
Scorers.sexist
Prompt Injection
Scorers.prompt_injection
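The identifiers above are members of the `Scorers` enum in Galileo's Python client, and an evaluation run takes a list of them. As a minimal, self-contained sketch of picking metrics by category (the category grouping and the `scorers_for` helper are illustrative, not part of the client; only the identifier names come from this page):

```python
# Illustrative grouping of the Guardrail Metric scorer identifiers listed
# above. The category keys and this helper are hypothetical conveniences;
# in a real run you would pass the corresponding Scorers.* enum members.
GUARDRAIL_SCORERS = {
    "output_quality": ["factuality"],
    "rag_quality": [
        "context_adherence_basic",
        "context_adherence_plus",
        "completeness_basic",
        "completeness_plus",
        "chunk_attribution_utilization_basic",
        "chunk_attribution_utilization_plus",
        "context_relevance",
    ],
    "input_quality": ["prompt_perplexity"],
    "safety": ["pii", "tone", "toxicity", "sexist", "prompt_injection"],
}

def scorers_for(*categories: str) -> list[str]:
    """Collect the scorer identifiers for the requested metric categories."""
    return [name for cat in categories for name in GUARDRAIL_SCORERS[cat]]

# Example: select RAG quality plus safety metrics for an evaluation run.
selected = scorers_for("rag_quality", "safety")
print(selected)
```

Grouping the identifiers this way keeps the choice of metrics in one place, so a run configuration only names the categories it cares about.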
Looking for something more specific? You can always add your own custom metric.
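At its core, a custom metric is a function that scores each response. A minimal sketch (the function name, threshold, and scoring rule are made up for illustration; registering the function with Galileo is not shown):

```python
# Hypothetical custom metric: checks whether a response stays within a
# word budget. The name and the default threshold are illustrative only.
def within_word_budget(response: str, max_words: int = 150) -> bool:
    """Return True when the response contains at most max_words words."""
    return len(response.split()) <= max_words

# Example: score a short response against the default budget.
print(within_word_budget("The model answered concisely."))
```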