Galileo & LLMs (Coming Soon!)
Galileo has built-in support for two key LLM workflows:
1. Prompt Engineering Tooling
2. Sequence-to-Sequence Model Fine-Tuning
Galileo's Prompt Evaluation tool has five key ingredients:
1. The Prompt Evaluation Method (the core algorithm)
   1. F1 score (between ground truth and model output)
   2. Prompt Error Potential (PEP)
   3. Hallucination Score (likelihood of the model hallucinating in the output)
   4. Edit distance or semantic distance between output and ground truth
   5. BLEU / BERTScore / ROUGE scores
2. Automating RAG (Retrieval-Augmented Generation with context)
   1. With generic out-of-the-box embeddings (V1)
   2. With customized embeddings (V2; better suited to enterprises)
3. Galileo Prompting Architecture
4. Template Management
   1. CRUD operations on templates: add, edit, and delete templates
   2. Template registry: a registry of the user's templates
   3. Template versioning: since users can edit prompt templates, tracking the version history of prompts becomes important
5. Evaluating two types of interactions
   1. Zero-shot prompts
   2. Multi-shot / chain-of-thought prompting
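To make the evaluation metrics in ingredient 1 concrete, here is a minimal sketch of two of them: token-level F1 between ground truth and model output, and edit (Levenshtein) distance. This is illustrative only; it is not Galileo's actual implementation, and production metrics would normalize text more carefully.

```python
from collections import Counter

def token_f1(ground_truth: str, output: str) -> float:
    """Token-level F1 between ground truth and model output (SQuAD-style)."""
    gt_tokens = ground_truth.lower().split()
    out_tokens = output.lower().split()
    common = Counter(gt_tokens) & Counter(out_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(out_tokens)
    recall = overlap / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]
```

Semantic distance would replace the token overlap with a comparison of embedding vectors, as in the retrieval sketch below.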
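Ingredient 2 (automating RAG) boils down to: embed the query, rank candidate documents by similarity, and splice the top matches into the prompt as context. The sketch below uses a toy bag-of-words `embed` as a stand-in for the generic off-the-shelf embeddings of V1 (a real system would call an embedding model); all function names here are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a generic embedding model (V1); V2 would swap in
    # embeddings fine-tuned on the enterprise's own corpus.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; return the top-k as context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Splice retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve_context(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```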
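Ingredient 4 (template management) can be pictured as a small registry that supports CRUD plus a version history per template, since edits to a prompt template must remain traceable. This is a hypothetical sketch, not Galileo's API; the class and method names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TemplateRegistry:
    """Minimal prompt-template registry: CRUD plus per-template versioning."""
    _templates: dict = field(default_factory=dict)  # name -> list of versions

    def add(self, name: str, text: str) -> int:
        """Create a template, or append a new version of an existing one.
        Returns the 1-based version number just stored."""
        versions = self._templates.setdefault(name, [])
        versions.append(text)
        return len(versions)

    def get(self, name: str, version: Optional[int] = None) -> str:
        """Fetch the latest version, or a specific historical version."""
        versions = self._templates[name]
        return versions[-1] if version is None else versions[version - 1]

    def delete(self, name: str) -> None:
        del self._templates[name]

    def history(self, name: str) -> list:
        """Full version history, oldest first."""
        return list(self._templates[name])
```

An edit is modeled as appending a new version rather than overwriting, which is what makes version tracking possible.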
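The two interaction types in ingredient 5 differ only in what is placed before the question: a zero-shot prompt sends the question alone, while a multi-shot prompt prepends worked examples. A minimal sketch, with an assumed `Q:`/`A:` layout:

```python
def zero_shot(question: str) -> str:
    """Zero-shot: the question alone, with no examples."""
    return f"Q: {question}\nA:"

def few_shot(question: str, examples: list) -> str:
    """Multi-shot: prepend worked (question, answer) pairs before the
    new question, so the model can imitate the demonstrated pattern."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"
```

Chain-of-thought prompting is the same shape, with the example answers spelling out intermediate reasoning steps instead of bare answers.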
