Integrating 3LC with SuperGradients

This document describes how to integrate 3LC into projects that use SuperGradients, an open-source training library for computer vision models and the home of YOLO-NAS. 3LC provides several classes and methods that make it easy to add 3LC to your existing SuperGradients projects.

Note

In order to use the SuperGradients integration, the super-gradients Python package must be installed in your environment. Use pip install super-gradients or equivalent.

At the time of writing, the latest release of super-gradients declares a dependency on termcolor==1.1.0, which conflicts with the termcolor>=2.2.0 requirement declared by 3lc. A dependency resolution that respects the declared requirements of both 3lc and super-gradients is therefore not possible.

In practice, super-gradients works with a higher version of termcolor. We therefore recommend installing super-gradients first and then 3lc, in separate invocations of pip install or equivalent.

pip install super-gradients
pip install 3lc

This installs the requirements of super-gradients, then upgrades any package, such as termcolor, for which 3lc requires a higher version.

Note

The integration is only tested on Python 3.9 and 3.10, due to unresolved dependency conflicts that prevent installing super-gradients on other Python versions. We recommend using Python 3.10 for running SuperGradients.

Registering Datasets

To make a dataset compatible with 3LC and SuperGradients training code, first create a 3LC Table for each split of your dataset. Then pass each Table to a SuperGradients integration Dataset, which lets you use the Table with SuperGradients training.

For Object Detection, use DetectionDataset, which offers SuperGradients’ detection dataset functionality and loads data directly from a 3LC Table. To create a Table compatible with SuperGradients object detection training, use a method such as Table.from_yolo, Table.from_coco or Table.from_yolo_ndjson.

import tlc
from tlc.integration.super_gradients import DetectionDataset

table = tlc.Table.from_yolo(dataset_yaml_file="path/to/dataset.yaml", split="train")

dataset = DetectionDataset(
    table=table,
    input_dim=(640, 640),
    transforms=[],
)

Training a Model and Collecting Metrics

When using the Trainer abstraction in SuperGradients, provide a 3LC metrics collection callback for the task you are working on. The callbacks log the aggregate metrics returned by the SuperGradients Trainer and, by default, invoke per-sample metrics collection at the end of training for both the train and validation Tables.

For Object Detection, use the DetectionMetricsCollectionCallback. It extracts predicted bounding boxes, confidence scores, and class labels from SuperGradients detection predictions and stores them in a 3LC Metrics Table associated with the Run, which can be opened in the 3LC Dashboard.

from super_gradients.training import Trainer

from tlc.integration.super_gradients import DetectionMetricsCollectionCallback, PipelineParams

trainer = Trainer(experiment_name="my_supergradients_experiment")

pipeline_params = PipelineParams(conf=0.1, fp16=True)

metrics_collection_callback = DetectionMetricsCollectionCallback(
    project_name="my_supergradients_project",
    batch_size=32,
    pipeline_params=pipeline_params,
)

training_params = {
    ...: ...,
    "phase_callbacks": [metrics_collection_callback, ...],
}

trainer.train(
    model=...,
    training_params=training_params,
    train_loader=...,
    valid_loader=...,
)

For metrics collection, a SuperGradients Pipeline is created to run inference over all the images of a Table. To pass arguments to this pipeline, provide an instance of PipelineParams to the MetricsCollectionCallback.
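As an illustration of what a confidence threshold such as PipelineParams(conf=0.1) controls, here is a dependency-free Python sketch of confidence filtering. The helper name and the exact boundary behavior (>= versus >) are assumptions for illustration, not SuperGradients code:

```python
# Dependency-free sketch (not SuperGradients code) of what a confidence
# threshold such as PipelineParams(conf=0.1) controls: detections whose
# confidence falls below the threshold are discarded before metrics are
# stored. The helper name and the >= boundary are assumptions.

def filter_by_confidence(detections, conf_threshold=0.1):
    """Keep only (box, confidence, label) triples at or above the threshold."""
    return [d for d in detections if d[1] >= conf_threshold]

detections = [
    ((10, 10, 50, 50), 0.92, "person"),
    ((30, 40, 80, 90), 0.05, "dog"),  # below the threshold, dropped
    ((0, 0, 5, 5), 0.10, "cat"),      # at the threshold, kept
]

kept = filter_by_confidence(detections, conf_threshold=0.1)
print([label for _, _, label in kept])  # -> ['person', 'cat']
```

A lower conf therefore yields more (but noisier) predicted boxes in the collected metrics, which can be useful when inspecting borderline detections in the Dashboard.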

See the parameters of the base MetricsCollectionCallback for more ways of customizing the integration.

Note

The callbacks reuse any existing active Run in the session. If no active Run exists, a new Run is created with the provided project_name and run_name. If project_name or run_name is provided and differs from that of the existing active Run, a ValueError is raised.

Custom metrics collection

To collect additional metrics beyond the model predictions, subclass the task-specific MetricsCollectionCallback and override the compute_metrics and metrics_column_schemas methods. Make sure to call the parent methods to retain their functionality.

The following example demonstrates how to extend metrics collection by adding a custom column to the metrics output. In this case, we add a column that records the number of predicted bounding boxes per image. Although this specific metric can already be computed in the 3LC Dashboard, the example is intended to illustrate the recommended approach for customizing metrics collection in your own callbacks.

from __future__ import annotations  # allows list[str] and X | Y annotations on Python 3.9

from typing import Any

import tlc
from super_gradients.training.utils.predict.prediction_results import (
    ImageDetectionPrediction,
    ImagesDetectionPrediction,
)
from tlc.integration.super_gradients import DetectionMetricsCollectionCallback

class CustomDetectionMetricsCollectionCallback(DetectionMetricsCollectionCallback):
    def compute_metrics(
        self,
        images: list[str],
        predictions: ImagesDetectionPrediction | ImageDetectionPrediction,
        table: tlc.Table,
    ) -> dict[str, Any]:
        # Keep the metrics computed by the base callback and extend them.
        metrics = super().compute_metrics(images, predictions, table)

        if isinstance(predictions, ImagesDetectionPrediction):
            # Batched predictions: one box count per image in the batch.
            metrics["num_predicted_boxes"] = [len(predicted_boxes) for predicted_boxes in predictions]
        else:
            # A single image prediction: a one-element column.
            metrics["num_predicted_boxes"] = [len(predictions)]

        return metrics
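Outside the callback machinery, the per-image logic above reduces to counting boxes per image and returning the counts as a metrics column. A dependency-free sketch, in which plain lists of boxes stand in for the SuperGradients prediction objects:

```python
# Dependency-free sketch of the num_predicted_boxes column from the callback
# above. Plain lists of boxes stand in for ImagesDetectionPrediction; the
# real callback receives SuperGradients prediction objects instead.

def count_predicted_boxes(batch_predictions):
    """Return a metrics column with one box count per image in the batch."""
    return {"num_predicted_boxes": [len(boxes) for boxes in batch_predictions]}

batch = [
    [(10, 10, 50, 50), (20, 20, 60, 60)],  # image 1: two predicted boxes
    [],                                    # image 2: no detections
    [(5, 5, 15, 15)],                      # image 3: one predicted box
]

print(count_predicted_boxes(batch))  # -> {'num_predicted_boxes': [2, 0, 1]}
```

Each list in the returned dictionary must have one entry per image in the batch, so that 3LC can align the custom column with the per-sample rows of the Metrics Table.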