Semantic Segmentation

Using Galileo for Semantic Segmentation, you can improve your models by improving the quality of your training data.
During training and pre-training, Galileo for Semantic Segmentation helps you identify and fix annotation errors quickly. Through Insights such as Advanced Error Detection, Likely Mislabeled, Class Overlap, Data Error Potential, and others, you can see what's wrong with your data in a matter of seconds instead of hours.
Once errors are identified, Galileo lets you take action in-tool, or helps you export erroneous samples to your labeling tool or Python environment. Fixing erroneous training data consistently leads to significant improvements in your model's quality in production.
Data Error Potential
Data Error Potential (DEP) in Semantic Segmentation surfaces the pixels, polygons, and images that are 'pulling' your model down. Galileo provides a DEP heatmap so you can focus on the areas most likely to contain errors.
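If you export per-pixel DEP scores, the same heatmaps can also be used to triage data programmatically. Below is a purely hypothetical sketch (rank_images_by_dep and the dep_heatmaps structure are illustrative, not part of Galileo's API) that ranks images by their mean per-pixel DEP so the most suspicious images surface first:

import numpy as np

def rank_images_by_dep(dep_heatmaps):
    """Rank image ids from highest to lowest mean per-pixel DEP.

    dep_heatmaps: dict mapping image id -> 2D numpy array of DEP scores.
    """
    mean_dep = {img_id: float(hm.mean()) for img_id, hm in dep_heatmaps.items()}
    return sorted(mean_dep, key=mean_dep.get, reverse=True)

# e.g. review the 50 highest-DEP images first
# worst_first = rank_images_by_dep(dep_heatmaps)[:50]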
Advanced Error Types
Galileo uses Advanced Error Detection to bucket meaningful differences between your input data and your model's predictions into one of four error type categories: Background Confusion Errors, Missed Prediction Errors, Boundary Approximation Errors, and Classification Errors.
Filtering by these categories and DEP is a powerful way to find and fix annotator mistakes, as well as to examine model failures as a whole.
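To make the four buckets concrete, here is a toy NumPy sketch of how pixel-level disagreements between a ground-truth mask and a predicted mask could be split along these lines. This is only an illustration of what the categories mean, not Galileo's implementation, and the one-pixel boundary heuristic is an assumption:

import numpy as np

def _near_boundary(gt):
    """True for pixels adjacent to a class change in the ground-truth mask.

    (np.roll wraps at image borders; acceptable for a toy example.)
    """
    edge = np.zeros(gt.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            edge |= np.roll(np.roll(gt, dy, axis=0), dx, axis=1) != gt
    return edge

def bucket_errors(gt, pred, background=0):
    """Split disagreeing pixels into four error buckets (toy version)."""
    wrong = pred != gt
    edge = _near_boundary(gt)
    return {
        # disagreement confined to object boundaries
        "boundary_approximation": wrong & edge,
        # model predicted a class where the annotation is background
        "background_confusion": wrong & ~edge & (gt == background),
        # annotation has a class but the model predicted background
        "missed_prediction": wrong & ~edge & (gt != background) & (pred == background),
        # object found, but the wrong class was assigned
        "classification": wrong & ~edge & (gt != background) & (pred != background),
    }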

Get started with an example notebook

Coming Soon.
Example:
import torch
from torchvision import transforms

import dataquality as dq
# watch ships with Galileo's dataquality package; the exact import path may
# differ across versions (see the watch documentation below).
from dataquality.integrations.torch_semantic_segmentation import watch

# Assumes you have already authenticated and initialized a run
# (dq.login() / dq.init()), and that ADE20k and UNet are your own
# dataset and model classes.
local_path_to_dataset_root = '/Users/user/segmentation_datasets/Segmentation_Data'
imgs_remote_location = 'https://storage.googleapis.com/galileo-public-data/CV_datasets/Segmentation_Data'

# Resize only -- no cropping (see the note on dataloaders below)
transform = transforms.Compose([transforms.Resize((512, 512))])

train_dataset = ADE20k(transforms=transform, train=True)
val_dataset = ADE20k(transforms=transform, train=False)
train_dataloader = torch.utils.data.DataLoader(train_dataset)
val_dataloader = torch.utils.data.DataLoader(val_dataset)

# background label is the 0th logit, plane is the 1st, etc.
labels = ["Background", "Plane", "Ship"]
dq.set_labels_for_run(labels)

model = UNet()
watch(
    model,
    imgs_remote_location=imgs_remote_location,
    local_path_to_dataset_root=local_path_to_dataset_root,
    dataloaders={"training": train_dataloader,
                 "validation": val_dataloader},
)

# train your model
epochs = 10
for epoch in range(epochs):
    ...  # your training loop

dq.finish()
Documentation of the watch function can be found here.
Important Note: The dataloaders you provide should have no cropping transforms applied to images; only resizing and color augmentations are allowed. These dataloaders do not have to be the same ones used in training. We recognize that cropping can be integral to training, so if you crop during training, please provide separate dataloaders here that do not crop.
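For example, if your training pipeline crops, you might keep two sets of transforms: one for training and a crop-free set for the dataloaders passed to watch. This is a minimal sketch reusing the ADE20k dataset class from the example above; the crop size and jitter parameters are arbitrary:

from torchvision import transforms

# Training transforms: cropping is fine here
train_tfms = transforms.Compose([
    transforms.RandomCrop(256),
    transforms.ColorJitter(brightness=0.2),
])

# Transforms for the dataloaders passed to watch(): resize and color only, no cropping
galileo_tfms = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ColorJitter(brightness=0.2),
])

galileo_train_loader = torch.utils.data.DataLoader(ADE20k(transforms=galileo_tfms, train=True))
galileo_val_loader = torch.utils.data.DataLoader(ADE20k(transforms=galileo_tfms, train=False))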