Getting Started

Discover the Console

Upon completing a run, you'll be taken to the Galileo Console. The first thing you'll notice is your image dataset on the right. On each image, we show the Ground Truth and Prediction annotations (Ground Truth as solid lines, Predictions as dotted lines), the image's Data Error Potential (DEP), and a count of all Error Types we found in the image. By default, your images are sorted by Data Error Potential.
You can also view your dataset in the embeddings space of the model. This can help you get a semantic understanding of your dataset. Using features like Color-By DEP, you might discover pockets of problematic data (e.g. decision boundaries that might benefit from more samples or a cluster of garbage images).
Your left pane is called the Insights Menu. At the top, you can see your dataset size, a count of your Ground Truth and Predicted objects, and your metric (by default, we show you mAP@0.50). Size and mAP update as you add filters to your dataset.
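At the 0.50 threshold, mAP counts a predicted box as a match for a ground-truth box when their Intersection over Union (IoU) is at least 0.5. As a rough illustration of that matching criterion (a generic sketch, not Galileo's implementation), IoU for two axis-aligned boxes can be computed like this:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by 8 units: IoU = 80 / 120 ≈ 0.67, so this
# prediction would count as a true positive at the 0.5 threshold.
print(iou((0, 0, 10, 10), (2, 0, 12, 10)) >= 0.5)  # True
```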
Your main source of insights will be Alerts and Metrics. Alerts are a distilled list of different issues we've identified in your dataset. Insights such as Mislabeled Samples, Class Imbalance, and Overlapping Classes will be surfaced as Alerts.
Clicking on an Alert will filter the dataset to the subset of data that corresponds to the Alert.
Under Metrics, you'll find different charts, such as:
  • A breakdown of your errors
  • Average Performance by Class
  • Sample Count by Class
  • Overlapping Classes
  • Top Misclassified Pairs
  • DEP Distribution
These charts are dynamic and update as you add different filters. They're also interactive - clicking on a class or group of classes will filter the dataset accordingly, allowing you to inspect and fix the samples.
Inspecting an image
Open an image to see it in greater detail. You can use the controls at the bottom to toggle Ground Truth or Prediction boxes on or off, or use the summarized Errors and Objects list on the right to choose which objects you're viewing. You can also hover over a box to see its DEP score, its errors, and the model's confidence in it, and to fix its annotation.
Taking Action
Once you've identified a problematic subset of data, Galileo allows you to fix your samples with the goal of improving your mAP. In Object Detection runs, we allow you to:
  • Overwrite Ground Truth - Add all active predictions and remove any active Ground Truths. Available in the grid view as well as the expanded view.
  • Change Label - Re-assign the label of an annotation right in-tool. Only available from the expanded view.
  • Remove Ground Truth - Remove a problematic annotation you want to discard from your image.
  • Add to Ground Truth - Turn a prediction into a Ground Truth to correct a missing annotation. Only available in the expanded view.
  • Send to Labelers - Send your samples to your labelers through our Labeling Integrations.
  • Export - Download your samples so you can fix them elsewhere.
Your changes are tracked in your Edits Cart. There you can view a summary of the changes you've made, undo them, or download a clean, fixed dataset to retrain your model.
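If you export your samples to fix them elsewhere, the downstream step is typically to fold the corrected labels back into your annotation file. The sketch below assumes a COCO-style annotation list and a simple `{annotation_id: new_category_id}` mapping of fixes; the actual schema of an export is an assumption here, not a documented format.

```python
def apply_label_fixes(annotations, fixes):
    """Apply label corrections to COCO-style annotation dicts.

    annotations: list of dicts with "id" and "category_id" keys (COCO convention).
    fixes: mapping of annotation id -> corrected category id (hypothetical export format).
    """
    for ann in annotations:
        if ann["id"] in fixes:
            ann["category_id"] = fixes[ann["id"]]
    return annotations

# Usage: relabel annotation 2 from category 5 to category 7.
anns = [{"id": 1, "category_id": 3}, {"id": 2, "category_id": 5}]
apply_label_fixes(anns, {2: 7})
print(anns[1]["category_id"])  # 7
```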
Changing Splits
Your dataset splits are maintained on Galileo. Your data is logged as Training, Test, and/or Validation splits, and Galileo allows you to explore each split independently. Some alerts, such as Underfitting Classes or Overfitting Classes, look at cross-split performance. However, for the most part, each split is treated independently.
To switch splits, find the Splits dropdown next to your project and run name near the top of the screen. By default, the Training split is shown first.
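Splits are defined at logging time, before your data reaches the console. As a generic illustration (not Galileo-specific code), partitioning a dataset into the three splits might look like this:

```python
import random

def make_splits(samples, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle samples and partition them into training/validation/test lists."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    return {
        "training": shuffled[n_test + n_val:],
        "validation": shuffled[n_test:n_test + n_val],
        "test": shuffled[:n_test],
    }

splits = make_splits(list(range(100)))
print({name: len(rows) for name, rows in splits.items()})
# {'training': 80, 'validation': 10, 'test': 10}
```

Each of the resulting lists would then be logged under its corresponding split name so you can explore it independently in the console.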

Get started with an example notebook

Start integrating Galileo with our supported frameworks