tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector#

Collect metrics for bounding box predictions.

Module Contents#

Classes#

Class

Description

COCOAnnotation

A single annotation in the COCO format.

COCOGroundTruth

The ground truth annotations for a single image in the COCO format.

COCOPrediction

A single prediction in the COCO results format.

BoundingBoxMetricsCollector

Compute metrics for bounding box predictions.

Functions#

Function

Description

compute_iou

Calculates the intersection over union (IoU) for two bounding boxes in XYWH format.

API#

class tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOAnnotation#

Bases: typing.TypedDict

A single annotation in the COCO format, used for both ground truth annotations and model predictions.

Corresponds to the official COCO annotation format: https://cocodataset.org/#format-data

category_id: int = None#

The category ID of the annotation. This is the index of the class in the classes list.

score: float | None = None#

The confidence score of the annotation, if the annotation comes from a model prediction.

bbox: list[float] = None#

The bounding box of the annotation in XYWH format.

image_id: int | None = None#

The ID of the image that the annotation belongs to.
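
Example (the coordinate and ID values below are hypothetical; only the field names come from this TypedDict):

from tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector import COCOAnnotation

annotation: COCOAnnotation = {
    "category_id": 3,                   # index into the `classes` list
    "bbox": [10.0, 20.0, 50.0, 40.0],   # XYWH: x, y, width, height
    "image_id": 0,
    "score": None,                      # no score for a ground truth annotation
}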

class tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOGroundTruth#

Bases: typing.TypedDict

The ground truth annotations for a single image in the COCO format.

Corresponds to the official COCO annotation format: https://cocodataset.org/#format-data

image_id: int | None = None#

The ID of the image that the annotation belongs to.

annotations: list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOAnnotation] = None#

The list of ground truth annotations in the image.

height: int | None = None#

The height of the image.

width: int | None = None#

The width of the image.
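
Example (a minimal sketch with hypothetical values, combining the fields above with a single COCOAnnotation):

from tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector import COCOGroundTruth

ground_truth: COCOGroundTruth = {
    "image_id": 0,
    "height": 480,
    "width": 640,
    "annotations": [
        {"category_id": 1, "bbox": [100.0, 50.0, 80.0, 120.0], "image_id": 0, "score": None},
    ],
}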

class tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOPrediction#

Bases: typing.TypedDict

A single prediction in the COCO results format.

annotations: list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOAnnotation] = None#

The list of predicted annotations in the image.
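
Example (hypothetical values; predicted annotations carry a confidence score):

from tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector import COCOPrediction

prediction: COCOPrediction = {
    "annotations": [
        {"category_id": 1, "bbox": [98.0, 55.0, 82.0, 115.0], "image_id": 0, "score": 0.87},
    ],
}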

class tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.BoundingBoxMetricsCollector(classes: list[str], label_mapping: dict[int, int], iou_threshold: float = 0.5, compute_derived_metrics: bool = False, derived_metrics_mode: str = 'relaxed', extra_metrics_fn: Callable[[list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOGroundTruth], list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOPrediction], dict[str, list[Any]] | None], None] | None = None, preprocess_fn: Callable[[tlc.core.builtins.types.SampleData, tlc.client.torch.metrics.predictor.PredictorOutput], tuple[list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOGroundTruth], list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOPrediction]]] | None = None, compute_aggregates: bool = True)#

Bases: tlc.client.torch.metrics.metrics_collectors.metrics_collector_base.MetricsCollector

Compute metrics for bounding box predictions.

By default, this metrics collector only collects the predicted bounding boxes and per-bounding box metrics (label, iou, confidence).

If compute_derived_metrics is True, the additional metrics tp, fp, and fn are computed according to the derived_metrics_mode flag: in “strict” mode, only one predicted bounding box can match each ground truth bounding box, while in “relaxed” mode, multiple predicted bounding boxes can match the same ground truth bounding box, in which case only a single true positive is counted.

For working with different sample/prediction formats, the preprocess_fn argument can be used to provide a custom preprocessing function. This function should take a batch of samples and predictions and return a tuple of lists of COCOGroundTruth and COCOPrediction respectively.

For computing additional metrics, the extra_metrics_fn argument can be provided to add new metrics or modify already collected ones.

Parameters:
  • classes – A list of class names.

  • label_mapping – A dictionary mapping class indices to the range [0, num_classes). Class indices in the source dataset could be in any range, so this mapping is used to convert them to the range [0, num_classes), which is usually used in object detection models.

  • iou_threshold – The IoU threshold to use for matching predictions to ground truths.

  • compute_derived_metrics – Whether to compute derived metrics.

  • derived_metrics_mode – The mode to use when computing derived metrics. Must be one of “strict” or “relaxed”.

  • extra_metrics_fn – A function that takes a batch of samples, a batch of predictions, and a dictionary of computed metrics. This function can add additional metrics to the metrics collected by the metrics collector, modify existing metrics, or delete metrics. Any such changes should be accompanied by a schema override; see add_schema(), update_schema(), and delete_schema() for details.

  • preprocess_fn – A function that takes a batch of samples and a batch of predictions and returns modified lists in a standard format, compatible with this metrics collector.

Create a new metrics collector.

Parameters:
  • preprocess_fn – A function that pre-processes the batch and predictor output before computing the metrics.

  • compute_aggregates – Whether to compute aggregates for the metrics.
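
Example (a minimal construction sketch based on the signature above; the class names and label mapping are hypothetical placeholders):

from tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector import (
    BoundingBoxMetricsCollector,
)

classes = ["person", "car", "bicycle"]   # hypothetical class names
label_mapping = {1: 0, 3: 1, 2: 2}       # source category ids -> contiguous range [0, num_classes)

bbox_metrics_collector = BoundingBoxMetricsCollector(
    classes=classes,
    label_mapping=label_mapping,
    iou_threshold=0.5,
    compute_derived_metrics=True,     # also compute tp, fp, and fn
    derived_metrics_mode="relaxed",   # multiple predictions may match one ground truth box
)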

compute_metrics(batch: tlc.core.builtins.types.SampleData, predictor_output: tlc.client.torch.metrics.predictor.PredictorOutput) → dict[str, tlc.core.builtins.types.MetricData]#

Compute metrics for bounding box predictions.

Parameters:
  • batch – A batch of samples.

  • predictor_output – A batch of predictions.

Returns:

A dictionary mapping metric names to metric values for a batch of inputs.

static check_schema_compatibility(metrics: dict[str, list[Any]], column_schemas: dict[str, tlc.core.schema.Schema]) → None#

Check that the metrics are compatible with the column schemas.

preprocess(batch: tlc.core.builtins.types.SampleData, predictor_output: tlc.client.torch.metrics.predictor.PredictorOutput) → tuple[list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOGroundTruth], list[tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.COCOPrediction]]#

Default preprocessor for the raw batch and predictor output.

This preprocessor recognizes and transforms detectron2 samples/predictions; otherwise, it assumes that the samples/predictions are already in the COCO format.

Parameters:
  • batch – A batch of samples.

  • predictor_output – A batch of predictions.

Returns:

A tuple containing the preprocessed batch and predictions.
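
If the raw batch and predictor output are in neither the detectron2 nor the COCO format, a custom preprocess_fn can be supplied instead. The sketch below assumes a hypothetical batch layout (a list of per-sample dicts with "image_id", "height", "width", "labels", and "boxes_xywh" keys) and assumes the raw model output is available per sample as predictor_output.forward; only the return types come from the documented signature:

def my_preprocess_fn(batch, predictor_output):
    ground_truths, predictions = [], []
    for i, sample in enumerate(batch):                 # `batch` layout is an assumption
        image_id = sample["image_id"]
        ground_truths.append(
            {
                "image_id": image_id,
                "height": sample["height"],
                "width": sample["width"],
                "annotations": [
                    {"category_id": c, "bbox": b, "image_id": image_id, "score": None}
                    for c, b in zip(sample["labels"], sample["boxes_xywh"])
                ],
            }
        )
        pred = predictor_output.forward[i]             # assumption: per-sample predictions are indexable
        predictions.append(
            {
                "annotations": [
                    {"category_id": c, "bbox": b, "image_id": image_id, "score": s}
                    for c, b, s in zip(pred["labels"], pred["boxes_xywh"], pred["scores"])
                ],
            }
        )
    return ground_truths, predictions

The function is passed to the collector via the preprocess_fn constructor argument, replacing this default preprocessor.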

property column_schemas: dict[str, tlc.core.schema.Schema]#

add_schema(key: str, schema: tlc.core.schema.Schema) → None#

Add a schema.

When adding new values to the metrics computed by this metrics collector, the schemas for the new values should also be added.

Example:

# assuming `extra_metrics_fn` adds a new top-level metric called "my_metric", and adds a new per-
# bounding box metric called "bb_area".

bbox_metrics_collector.add_schema("my_metric", Schema(value=Int32Value()))
bbox_metrics_collector.add_schema("bbs_predicted.bb_list.bb_area", Schema(value=Float32Value()))
Parameters:
  • key – The key of the schema to add. Nested schemas can be added by using the dot notation.

  • schema – The schema to add, may be nested.
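
A hypothetical extra_metrics_fn matching the "my_metric" schema in the example above might look like the following; the specific value computed is illustrative, and only the call signature comes from the class documentation:

def my_extra_metrics_fn(ground_truths, predictions, metrics):
    # Add one integer value per sample for the new top-level metric "my_metric".
    if metrics is not None:
        metrics["my_metric"] = [len(p["annotations"]) for p in predictions]

The function is passed via the extra_metrics_fn constructor argument, and the add_schema call above registers the schema for the new column.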

update_schema(key: str, schema: tlc.core.schema.Schema) → None#

Update a schema.

When updating metric values computed by this metrics collector, the schemas for the updated values should also be updated.

Example:

# assuming `extra_metrics_fn` modifies the top-level metric `TRUE_POSITIVE`, and modifies the per-
# bounding box metric `iou`.

bbox_metrics_collector.update_schema(
    "tp",
    Schema(value=Float32Value(), description="This metric used to be an int, but now it is a float."),
)
bbox_metrics_collector.update_schema(
    "bbs_predicted.bb_list.iou",
    Schema(value=Float32Value(), description="I have changed the description of this metric."),
)
Parameters:
  • key – The key of the schema to modify. Nested schemas can be modified by using the dot notation.

  • schema – The new schema to set; may be nested.

delete_schema(key: str) → None#

Delete a schema.

When deleting metric values computed by this metrics collector, the schemas for the deleted values should also be deleted.

Example:

# assuming `extra_metrics_fn` deletes the top-level metric "fn" (false negatives), and deletes the per-
# bounding box metric "iou".

bbox_metrics_collector.delete_schema("fn")
bbox_metrics_collector.delete_schema("bbs_predicted.bb_list.iou")
Parameters:
  • key – The key of the schema to delete. Nested schemas can be deleted by using the dot notation.

tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector.compute_iou(bb1: list[float], bb2: list[float]) → float#

Calculates the intersection over union (IoU) for two bounding boxes in XYWH format.
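
Example (a small worked case: the boxes overlap in a 5 x 10 region, so the intersection is 50, the union is 100 + 100 - 50 = 150, and the result is approximately 0.333):

from tlc.client.torch.metrics.metrics_collectors.bounding_box_metrics_collector import compute_iou

bb1 = [0.0, 0.0, 10.0, 10.0]   # XYWH: x, y, width, height
bb2 = [5.0, 0.0, 10.0, 10.0]   # shifted 5 units to the right

iou = compute_iou(bb1, bb2)    # -> approximately 0.333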