Calculate mean Intersection-Over-Union (mIOU) metric

A ready-to-use script to compute the mean Intersection-Over-Union metric for pairs of classes

Input:

  • Existing project (e.g. "london_roads")
  • At least one pair of classes (e.g. ("cargt", "carlb"))

Output:

  • Intersection, union, and IoU for each class pair

Imports

In [1]:
import supervisely_lib as sly
import os
import collections
from prettytable import PrettyTable
from tqdm import tqdm

Configuration

Edit the following settings for your own case

In [2]:
# Change this field to the name of your team, where the target workspace exists.
team_name = "jupyter_tutorials"
# Change this field to the name of your workspace, where the target project exists.
workspace_name = "metrics_tutorials"
# Change this field to the name of your target project.
project_name = "tutorial_metric_iou_project"

# Configure the following dictionary so that it matches pairs of ground truth and predicted
# classes between which IoU will be calculated.
classes_mapping = {
    "dog": "annotator_dog",
    "person": "annotator_person",    
}

# If you are running this notebook on a Supervisely web instance, the connection
# details below will be filled in from environment variables automatically.
#
# If you are running this notebook locally on your own machine, edit to fill in the
# connection details manually. You can find your access token at
# "Your name on the top right" -> "Account settings" -> "API token".
address = os.environ['SERVER_ADDRESS']
token = os.environ['API_TOKEN']
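
If you run this notebook locally, the two environment variables above may not be set. A minimal sketch of filling in the connection details by hand; both values below are placeholders, not real credentials:

address = "https://app.supervisely.com"  # placeholder: the address of your Supervisely instance
token = "your_api_token_here"            # placeholder: copy the token from your account settings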

Script setup

Initialize the Supervisely API to remotely manage your projects

In [3]:
# Initialize API object
api = sly.Api(address, token)

Verify input values

Test that context (team / workspace / project) exists

In [4]:
team = api.team.get_info_by_name(team_name)
if team is None:
    raise RuntimeError("Team {!r} not found".format(team_name))

workspace = api.workspace.get_info_by_name(team.id, workspace_name)
if workspace is None:
    raise RuntimeError("Workspace {!r} not found".format(workspace_name))
    
project = api.project.get_info_by_name(workspace.id, project_name)
if project is None:
    raise RuntimeError("Project {!r} not found".format(project_name))
    
print("Team: id={}, name={}".format(team.id, team.name))
print("Workspace: id={}, name={}".format(workspace.id, workspace.name))
print("Project: id={}, name={}".format(project.id, project.name))
Out [4]:
Team: id=8, name=dima
Workspace: id=14, name=First Workspace
Project: id=34, name=tutorial_metric_iou_project

Get Project Meta of Source Project

Project Meta contains information about the classes and tags defined in the source project.

In [5]:
meta_json = api.project.get_meta(project.id)
meta = sly.ProjectMeta.from_json(meta_json)

# check that all classes exist
project_classes_names = list(classes_mapping.keys()) + list(classes_mapping.values())

for class_name in project_classes_names:
    if class_name not in meta.obj_classes.keys():
        raise RuntimeError("Class {!r} not found in source project {!r}".format(class_name, project.name))

Create metric evaluator

In [6]:
metric_iou = sly.IoUMetric(classes_mapping)
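
The evaluator accumulates, for every class pair, the pixel counts of the intersection and the union of the corresponding objects, and reports IoU = intersection / union. A minimal NumPy sketch of that formula on two hypothetical binary masks, purely illustrative and not the library's internal code:

import numpy as np

# Two hypothetical binary masks: True where the class is present.
mask_gt = np.zeros((100, 100), dtype=bool)
mask_pred = np.zeros((100, 100), dtype=bool)
mask_gt[20:80, 20:80] = True
mask_pred[40:90, 40:90] = True

intersection = np.logical_and(mask_gt, mask_pred).sum()
union = np.logical_or(mask_gt, mask_pred).sum()
print("IoU =", intersection / union)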

Iterate over all images and calculate the metric over annotation pairs

In [7]:
for dataset in api.dataset.get_list(project.id):
    
    # Log which project and dataset are currently being processed.
    print("Processing: project = {!r}, dataset = {!r}".format(project.name, dataset.name), flush=True)
    
    images = api.image.get_list(dataset.id)
    with tqdm(total=len(images), desc="Process annotations") as progress_bar:
        for batch in sly.batched(images):
            image_ids = [image_info.id for image_info in batch]
            ann_infos = api.annotation.download_batch(dataset.id, image_ids)
            
            for ann_info in ann_infos:
                ann = sly.Annotation.from_json(ann_info.annotation, meta)
                # We are using the same annotation on both sides of the metric computation
                # (classes_mapping provides the corresponding classes that we will look for
                # in the annotation), but it is also possible to use different annotations
                # on the left and right, e.g. to compare the source hand-labeled project to a
                # neural network inference result.
                metric_iou.add_pair(ann, ann)
            
            progress_bar.update(len(batch))
Out [7]:
Processing: project = 'tutorial_metric_iou_project', dataset = 'dataset_1'
Process annotations: 100%|██████████| 3/3 [00:00<00:00, 44.99it/s]
Processing: project = 'tutorial_metric_iou_project', dataset = 'dataset_2'
Process annotations: 100%|██████████| 2/2 [00:00<00:00, 65.36it/s]
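
As the comment in the loop above notes, the left and right annotations passed to add_pair do not have to be the same. A hedged sketch of how comparing a hand-labeled project with an inference project might look; the project names below are hypothetical, and it assumes both projects contain datasets and images with matching names:

# Hypothetical project names; both projects are assumed to share dataset and image names.
project_gt = api.project.get_info_by_name(workspace.id, "labels_ground_truth")
project_pred = api.project.get_info_by_name(workspace.id, "labels_inference")
meta_gt = sly.ProjectMeta.from_json(api.project.get_meta(project_gt.id))
meta_pred = sly.ProjectMeta.from_json(api.project.get_meta(project_pred.id))

for dataset_gt in api.dataset.get_list(project_gt.id):
    dataset_pred = api.dataset.get_info_by_name(project_pred.id, dataset_gt.name)
    for image_gt in api.image.get_list(dataset_gt.id):
        image_pred = api.image.get_info_by_name(dataset_pred.id, image_gt.name)
        ann_gt = sly.Annotation.from_json(api.annotation.download(image_gt.id).annotation, meta_gt)
        ann_pred = sly.Annotation.from_json(api.annotation.download(image_pred.id).annotation, meta_pred)
        metric_iou.add_pair(ann_gt, ann_pred)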

Print results with the default logger

In [8]:
metric_iou.log_total_metrics()
Out [8]:
{"message": "**************** Result IoU metric values ****************", "timestamp": "2019-04-16T16:12:52.648Z", "level": "info"}
{"message": "NOTE! Values for \"intersection\" and \"union\" are in pixels.", "timestamp": "2019-04-16T16:12:52.651Z", "level": "info"}
{"message": "1. Classes dog <-> annotator_dog:   IoU = 0.870172,  mean intersection = 61491.400000, mean union = 70665.800000", "timestamp": "2019-04-16T16:12:52.654Z", "level": "info"}
{"message": "2. Classes person <-> annotator_person:   IoU = 0.448813,  mean intersection = 53688.400000, mean union = 119623.200000", "timestamp": "2019-04-16T16:12:52.656Z", "level": "info"}
{"message": "Total:   IoU = 0.605289,  mean intersection = 575899.000000, mean union = 951445.000000", "timestamp": "2019-04-16T16:12:52.658Z", "level": "info"}

Print results manually

In [9]:
# Metrics for each pair of classes separately.
results = metric_iou.get_metrics()

# Metrics aggregated over all pairs of classes from classes_mapping
total_results = metric_iou.get_total_metrics()

table = PrettyTable(["classes pair", "metrics values"])

def build_values_text(values):
    values_text = ""
    for metrics_name, value in values.items():
        values_text += "{}: {}\n".format(metrics_name, value)
    return values_text
    
for first_pair_class, values in results.items():
    pair_text = "{} <-> {}".format(first_pair_class, classes_mapping[first_pair_class])
    table.add_row([pair_text, build_values_text(values)])

table.add_row(["TOTAL", build_values_text(total_results)])
print(table.get_string())
Out [9]:
+-----------------------------+--------------------------+
|         classes pair        |      metrics values      |
+-----------------------------+--------------------------+
|    dog <-> annotator_dog    |  intersection: 61491.4   |
|                             |      union: 70665.8      |
|                             | iou: 0.8701719926753819  |
|                             |                          |
| person <-> annotator_person |  intersection: 53688.4   |
|                             |     union: 119623.2      |
|                             | iou: 0.44881260491275937 |
|                             |                          |
|            TOTAL            |   intersection: 575899   |
|                             |      union: 951445       |
|                             | iou: 0.6052887975658078  |
|                             |                          |
+-----------------------------+--------------------------+
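
The same numbers can also be read programmatically: the per-pair dictionaries returned by get_metrics() are keyed by the ground truth class name, and each holds the "intersection", "union", and "iou" values shown in the table above. For example:

# Pull single values out of the results dictionaries (keys match the table above).
print(results["dog"]["iou"])
print(total_results["iou"])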

Done!
