Galileo integrates with publicly accessible LLM APIs as well as privately hosted open-source LLMs. Before you start using Evaluate with your own LLMs, you need to set up your models on the system.
1. Go to the Galileo Home Page.
2. Click on your Profile (bottom left).
3. Click on Integrations.
You can set up and manage all your LLM API and Custom Model integrations from the Integrations page.
Note: These integrations are user-specific to ensure that different users in an organization can use their own API keys when interacting with the LLMs.
Public APIs supported
We support both the Chat and Completions APIs from OpenAI, with all active models. This can be set up from the Galileo console or from the Python client.
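If you set this up from the Python client rather than the console, the flow is roughly: read your key, log in, and register the integration. The sketch below keeps the testable part (reading the key) as a plain function; the client calls in the comment are assumptions to verify against your installed `promptquality` version:

```python
import os

def get_openai_key() -> str:
    """Read the OpenAI API key from the environment, failing loudly if unset."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key

# With the key in hand, register it as a user-level Galileo integration.
# These calls are a sketch -- check the function names against your
# installed promptquality version:
#
#   import promptquality as pq
#   pq.login("https://console.your-galileo-host.com")   # your console URL
#   pq.add_openai_integration(api_key=get_openai_key())
```

Keeping the key in an environment variable (rather than hard-coding it) matches the note above: each user supplies their own key.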
Note: OpenAI Models power a few of Galileo's Guardrail Metrics (e.g. Correctness, Context Adherence, Chunk Attribution, Chunk Utilization, Completeness). To improve your evaluation experience, we recommend setting up this integration even if the model you're prompting or testing is a different one.
If you use OpenAI models through Azure, you can set up your Azure integration. This can be set up from the Galileo console or from the Python client.
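Programmatic setup for Azure follows the same shape as the OpenAI case, but needs the resource endpoint and an API version in addition to the key. The helper below just validates and assembles those details; the field names, default API version, and the `add_azure_integration` call are assumptions to verify against your client version:

```python
def azure_integration_config(endpoint: str, api_key: str,
                             api_version: str = "2023-05-15") -> dict:
    """Assemble Azure OpenAI connection details, with a basic sanity check.

    The default api_version is an example; use the version your Azure
    resource supports.
    """
    if not endpoint.startswith("https://"):
        raise ValueError("Azure endpoint should be an https:// URL")
    return {"endpoint": endpoint, "api_key": api_key, "api_version": api_version}

# Sketch of registering the integration (verify against your client version):
#
#   import promptquality as pq
#   pq.add_azure_integration(**azure_integration_config(
#       endpoint="https://my-resource.openai.azure.com",
#       api_key="<azure-openai-key>",
#   ))
```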
To calculate the Uncertainty metric, we require that text-curie-001 or text-davinci-003 be available in your Azure environment; these models are needed to fetch log probabilities. For Galileo's Guardrail metrics that rely on GPT calls (Factuality and Groundedness), a 0613 or later version of gpt-35-turbo is required (see the Azure docs).
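Those log probabilities come back from the legacy Completions API when you pass the `logprobs` parameter; an uncertainty score is then derived from the per-token values. The aggregation below is a minimal illustration of the idea (1 minus the geometric-mean token probability), not necessarily Galileo's exact formula:

```python
import math

def uncertainty_from_logprobs(token_logprobs: list[float]) -> float:
    """One simple aggregation: 1 minus the geometric-mean token probability.

    A value near 0 means the model was confident in every token; values
    approaching 1 mean the tokens were, on average, very unlikely.
    """
    if not token_logprobs:
        raise ValueError("need at least one token log probability")
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return 1.0 - math.exp(mean_logprob)

# The Completions API returns these values when logprobs is requested, e.g.
# (sketch; adapt to your OpenAI/Azure client version):
#
#   resp = client.completions.create(model="text-davinci-003",
#                                    prompt="...", max_tokens=32, logprobs=1)
#   lps = resp.choices[0].logprobs.token_logprobs
#   print(uncertainty_from_logprobs(lps))
```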