Setting Up Your LLMs

Galileo integrates with both publicly accessible LLM APIs and your privately hosted models. Before you can start using Galileo Prompt, you need to set up your model integrations.
After logging into your Galileo Console, click on your Profile Avatar on the bottom left and open the Integrations page.
You can set up and manage all your LLM API and Custom Model integrations from the Integrations page.
Note: These integrations are user-specific to ensure that different users in an organization can use their own API keys when interacting with the LLMs.

Public APIs supported

OpenAI

We support both the Chat and Completions APIs from OpenAI, with all of the active models.
Note: OpenAI Models power a few of Galileo's Guardrail Metrics (e.g. Factuality, Groundedness). To improve your evaluation experience, we recommend setting up this integration even if the model you're prompting or testing is a different one.
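If you'd like to confirm that your OpenAI API key is valid before pasting it into the Integrations page, a minimal sketch using the openai Python SDK (v1+) might look like the following. The environment variable name and the model listing are illustrative conventions, not a Galileo requirement.

```python
# Quick sanity check (not Galileo code): verify the OpenAI API key works
# before adding it on the Integrations page.
# Assumes the `openai` Python package (v1 or later) is installed and the
# key is stored in the OPENAI_API_KEY environment variable.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Listing models confirms the key authenticates and shows which models
# your account can access.
models = client.models.list()
print([m.id for m in models.data][:10])
```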

Azure OpenAI service

If you use OpenAI models through Azure, you can set up your Azure integration.
To calculate the Uncertainty metric, we require the text-curie-001 or text-davinci-003 models to be available in your Azure environment; these are needed to fetch log probabilities. For Galileo's Guardrail metrics that rely on GPT calls (Factuality and Groundedness), we require version 0613 or later of gpt-35-turbo (see the Azure docs).
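If you want to verify that a deployment in your Azure environment can return log probabilities, a minimal sketch with the openai Python SDK (v1+) might look like this. The endpoint, API version, and deployment name are placeholders for your own values, not values Galileo prescribes.

```python
# Illustrative check (not Galileo code): confirm that a completions-capable
# deployment (e.g. text-davinci-003) in your Azure OpenAI resource returns
# log probabilities, which the Uncertainty metric depends on.
# Assumes the `openai` Python package (v1 or later).
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
)

response = client.completions.create(
    model="text-davinci-003",  # your deployment name for this model
    prompt="Hello",
    max_tokens=1,
    logprobs=5,  # request token log probabilities
)

# Non-None output here means the deployment can serve log probabilities.
print(response.choices[0].logprobs)
```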

Google PaLM 2

Coming soon.

Custom Models

Support for self-hosted open-source or fine-tuned models will be coming very soon.