Guide #04: neural network inference

How to deploy a neural network and apply it to images in different scenarios

Supervisely Tutorial #4

Neural networks: deploy and inference with Supervisely online API

In this tutorial we will show how to deploy a neural network model for online inference and perform inference requests using the Supervisely online API from our SDK.

Setup steps

Before we can start issuing inference requests, we need to connect to the Supervisely web instance, make sure the model we need is available, and set up a worker machine to load the model on.

Necessary imports

Simply import the Supervisely Python SDK module:

In [1]:
import supervisely_lib as sly

We also define a helper to render labeled objects on images, used for the illustrations in this tutorial:

In [2]:
# PyPlot for drawing images in Jupyter.
%matplotlib inline
import matplotlib.pyplot as plt

def draw_labeled_image(img, ann):
    canvas_draw_contour = img.copy()
    ann.draw_contour(canvas_draw_contour, thickness=7)
    fig = plt.figure(figsize=(30, 30))
    fig.add_subplot(1, 2, 1)
    plt.imshow(img)
    fig.add_subplot(1, 2, 2)
    plt.imshow(canvas_draw_contour)    
    plt.show() 

Initialize API access with your credentials

Before starting to interact with a Supervisely web instance using our API, you need to supply your user credentials: the server address and your unique API token, which you can find in your profile details:

In [3]:
import os

# Jupyter notebooks hosted on Supervisely can get their user's
# credentials from the environment variables.
# If you are running the notebook outside of Supervisely, plug
# the server address and your API token here.
# You can find your API token in the account settings:
# -> click your name in the top-right corner
# -> select "account settings"
# -> select "API token" tab on top.
address = os.environ['SERVER_ADDRESS']
token = os.environ['API_TOKEN']

print("Server address: ", address)
print("Your API token: ", token)

# Initialize the API access object.
api = sly.Api(address, token)
Out [3]:
Server address:  http://192.168.1.69:5555
Your API token:  OfaV5z24gEQ7ikv2DiVdYu1CXZhMavU7POtJw2iDtQtvGUux31DUyWTXW6mZ0wd3IRuXTNtMFS9pCggewQWRcqSTUi4EJXzly8kH7MJL1hm3uZeM2MCn5HaoEYwXejKT

Define the active workspace

In Supervisely, every neural network model (and also every data project) is stored in a context of a certain workspace. See our tutorial #2 for a detailed guide on how to work with workspaces using our online API.

Here we will create a new workspace to avoid interfering with any existing work.

In [4]:
# In Supervisely, a user can belong to multiple teams.
# Everyone has a default team with just their user in it.
# We will work in the context of that default team.
team = api.team.get_list()[0]

# Set up the name of a new workspace to be created.
workspace_name = "api_inference_tutorial"

# If a workspace with this name already exists, reuse it;
# otherwise, create a new one.
if api.workspace.exists(team.id, workspace_name):
    workspace = api.workspace.get_info_by_name(team.id, workspace_name)
else:
    workspace = api.workspace.create(team.id, workspace_name)

# Print out the results.
# Here we will see which workspace we ended up with.
print("Team: id={}, name={}".format(team.id, team.name))
print("Workspace: id={}, name={}".format(workspace.id, workspace.name))
Out [4]:
Team: id=9, name=max
Workspace: id=115, name=api_inference_tutorial

Add the neural network model to the workspace

Now that we have an empty workspace, we need to add a neural network model to it. Here we will clone one of the models publicly available in Supervisely.

In [5]:
# Set the destination model name within our workspace
model_name = "yolo_coco"

# Grab a unique name in case the one we chose initially is taken.
if api.model.exists(workspace.id, model_name):
    model_name = api.model.get_free_name(workspace.id, model_name)

# Request the model to be copied from our public repository.
# This kicks off an asynchronous task.
task_id = api.model.clone_from_explore('Supervisely/Model Zoo/YOLO v3 (COCO)', workspace.id, model_name)

# Wait for the copying to complete.
api.task.wait(task_id, api.task.Status.FINISHED)

# Query the metadata for the copied model.
model = api.model.get_info_by_name(workspace.id, model_name)
print("Model: id = {}, name = {!r}".format(model.id, model.name))
Out [5]:
Model: id = 361, name = 'yolo_coco'

Select the agent to use

Neural network inference is a computationally intensive process, so it is infeasible to have the inference run on the same machine that serves the Supervisely web instance. Instead, you need to connect a worker machine (with a GPU) to the web instance to run the computations. The worker is connected using the Supervisely Agent - an open-source daemon that runs on the worker, connects to the web instance and listens for tasks to execute. See https://github.com/supervisely/supervisely/tree/master/agent for details on how to run the agent.

From now on the tutorial assumes that you have launched the agent on your worker machine and it shows up on your "Cluster" page in the Supervisely web instance. We first query the instance for the agent ID by name.

In [7]:
# Replace this with your agent name. You can find the list of
# all your agents in the "Cluster" menu in the Supervisely instance.
agent_name = "agent_01"

agent = api.agent.get_info_by_name(team.id, agent_name)
if agent is None:
    raise RuntimeError("Agent {!r} not found".format(agent_name))
if agent.status is api.agent.Status.WAITING:
    raise RuntimeError("Agent {!r} is not running".format(agent_name))

Online on-demand inference

With all the prerequisites in place, it is time to get started with model inference.

Deploy the model to the agent for on-demand inference

The first step is to deploy the model to the agent. Deployment involves:

  • copying the model weights and configuration to the agent,
  • launching a Docker container with the model code that loads the weights onto the worker GPU and starts waiting for inference requests.
In [8]:
# In case the model has already been deployed
# (e.g. if you are re-running parts of this tutorial),
# we want to reuse the existing deployment.
#
# Query the web instance for already deployed instances of our model.
task_ids = api.model.get_deploy_tasks(model.id)

# Deploy if necessary.
if len(task_ids) == 0:
    print('Model {!r} is not deployed. Deploying...'.format(model.name))
    task_id = api.task.deploy_model(agent.id, model.id)
    # deploy_model() kicks off an asynchronous task that may take
    # quite a long time - after all, the agent on the worker needs to:
    # * Download the model weights from the web instance.
    # * Pull the Docker image with the model code.
    # * Launch a Docker container and wait for it to load the weights onto the GPU.
    #
    # Since we don't have other tasks to process, simply wait
    # for deployment to finish.
    api.task.wait(task_id, api.task.Status.DEPLOYED)
else:
    print('Model {!r} has already been deployed'.format(model.name))
    task_id = task_ids[0]

print('Deploy task_id = {}'.format(task_id))
Out [8]:
Model 'yolo_coco' is not deployed. Deploying...
Deploy task_id = 2203
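Under the hood, api.task.wait simply polls the task status until the target status is reached. The plain-Python sketch below illustrates that polling pattern; the wait_for_status helper, its parameters, and the status strings are hypothetical illustrations, not part of the Supervisely SDK:

```python
import time

def wait_for_status(get_status, target, interval_sec=1.0, timeout_sec=600.0):
    """Poll get_status() until it returns `target` or the timeout expires."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        time.sleep(interval_sec)
    raise TimeoutError(
        "status {!r} not reached in {} seconds".format(target, timeout_sec))

# Simulate a deployment that reaches 'deployed' on the third poll.
statuses = iter(["started", "downloading weights", "deployed"])
result = wait_for_status(lambda: next(statuses), "deployed", interval_sec=0)
print(result)  # deployed
```

The SDK's own wait() additionally handles error statuses and server-side task metadata; this sketch only shows the basic poll-until-done idea.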

Get the metadata for the deployed model

Every neural network model is trained to predict a specific set of classes. This set of classes is stored in the model config, and the code loading the model parses that config file.

Once the model has been deployed, we can ask it for the set of classes it can predict. The result is serialized metadata, which can be conveniently parsed into a ProjectMeta object from our Python SDK. See our tutorial #1 for a detailed guide on how to work with metadata using the SDK.

In [9]:
meta_json = api.model.get_output_meta(model.id)
model_meta = sly.ProjectMeta.from_json(meta_json)
print(model_meta)
Out [9]:
ProjectMeta:
Object Classes
+----------------------+-----------+-----------------+
|         Name         |   Shape   |      Color      |
+----------------------+-----------+-----------------+
|     person_model     | Rectangle | [146, 208, 134] |
|    bicycle_model     | Rectangle | [116, 127, 233] |
|      car_model       | Rectangle | [233, 189, 207] |
|   motorbike_model    | Rectangle | [111, 190, 245] |
|   aeroplane_model    | Rectangle |  [92, 126, 104] |
|      bus_model       | Rectangle | [212, 239, 134] |
|     train_model      | Rectangle | [140, 180, 183] |
|     truck_model      | Rectangle | [231, 222, 180] |
|      boat_model      | Rectangle |  [213, 86, 211] |
| traffic light_model  | Rectangle | [137, 206, 104] |
|  fire hydrant_model  | Rectangle | [194, 160, 183] |
|   stop sign_model    | Rectangle | [131, 156, 191] |
| parking meter_model  | Rectangle |  [96, 163, 96]  |
|     bench_model      | Rectangle | [232, 202, 225] |
|      bird_model      | Rectangle | [253, 192, 185] |
|      cat_model       | Rectangle | [109, 250, 167] |
|      dog_model       | Rectangle | [214, 227, 223] |
|     horse_model      | Rectangle | [215, 164, 135] |
|     sheep_model      | Rectangle | [208, 112, 181] |
|      cow_model       | Rectangle | [100, 211, 137] |
|    elephant_model    | Rectangle | [178, 189, 166] |
|      bear_model      | Rectangle | [117, 129, 129] |
|     zebra_model      | Rectangle | [160, 207, 150] |
|    giraffe_model     | Rectangle |  [91, 155, 186] |
|    backpack_model    | Rectangle | [228, 217, 157] |
|    umbrella_model    | Rectangle | [136, 169, 229] |
|    handbag_model     | Rectangle | [100, 181, 251] |
|      tie_model       | Rectangle |  [95, 201, 229] |
|    suitcase_model    | Rectangle | [182, 227, 200] |
|    frisbee_model     | Rectangle |  [102, 168, 94] |
|      skis_model      | Rectangle |  [116, 166, 87] |
|   snowboard_model    | Rectangle | [231, 152, 160] |
|  sports ball_model   | Rectangle | [253, 239, 246] |
|      kite_model      | Rectangle | [107, 158, 211] |
|  baseball bat_model  | Rectangle | [123, 100, 233] |
| baseball glove_model | Rectangle | [225, 126, 184] |
|   skateboard_model   | Rectangle | [216, 171, 174] |
|   surfboard_model    | Rectangle | [144, 216, 188] |
| tennis racket_model  | Rectangle | [182, 156, 250] |
|     bottle_model     | Rectangle | [230, 209, 159] |
|   wine glass_model   | Rectangle |  [183, 254, 98] |
|      cup_model       | Rectangle | [215, 243, 120] |
|      fork_model      | Rectangle | [148, 247, 126] |
|     knife_model      | Rectangle | [175, 100, 183] |
|     spoon_model      | Rectangle | [245, 171, 198] |
|      bowl_model      | Rectangle |  [96, 216, 100] |
|     banana_model     | Rectangle | [123, 135, 104] |
|     apple_model      | Rectangle | [209, 147, 152] |
|    sandwich_model    | Rectangle | [211, 209, 131] |
|     orange_model     | Rectangle | [115, 132, 226] |
|    broccoli_model    | Rectangle | [108, 234, 113] |
|     carrot_model     | Rectangle | [136, 121, 238] |
|    hot dog_model     | Rectangle |  [101, 87, 230] |
|     pizza_model      | Rectangle | [128, 233, 240] |
|     donut_model      | Rectangle | [217, 254, 187] |
|      cake_model      | Rectangle | [118, 198, 160] |
|     chair_model      | Rectangle |  [213, 96, 120] |
|      sofa_model      | Rectangle | [240, 145, 177] |
|  pottedplant_model   | Rectangle | [238, 211, 241] |
|      bed_model       | Rectangle | [186, 198, 157] |
|  diningtable_model   | Rectangle | [200, 219, 127] |
|     toilet_model     | Rectangle | [175, 247, 104] |
|   tvmonitor_model    | Rectangle | [121, 243, 189] |
|     laptop_model     | Rectangle | [126, 239, 127] |
|     mouse_model      | Rectangle | [171, 138, 156] |
|     remote_model     | Rectangle | [251, 104, 192] |
|    keyboard_model    | Rectangle | [128, 202, 223] |
|   cell phone_model   | Rectangle | [108, 201, 122] |
|   microwave_model    | Rectangle | [248, 218, 143] |
|      oven_model      | Rectangle | [178, 158, 127] |
|    toaster_model     | Rectangle |  [120, 119, 97] |
|      sink_model      | Rectangle | [216, 216, 127] |
|  refrigerator_model  | Rectangle |  [94, 129, 108] |
|      book_model      | Rectangle | [178, 127, 145] |
|     clock_model      | Rectangle |  [147, 86, 212] |
|      vase_model      | Rectangle | [136, 159, 104] |
|    scissors_model    | Rectangle | [183, 114, 216] |
|   teddy bear_model   | Rectangle |  [99, 174, 203] |
|   hair drier_model   | Rectangle | [148, 189, 224] |
|   toothbrush_model   | Rectangle | [164, 225, 168] |
+----------------------+-----------+-----------------+
Image Tags
+------+------------+-----------------+
| Name | Value type | Possible values |
+------+------------+-----------------+
+------+------------+-----------------+
Object Tags
+------------+------------+-----------------+
|    Name    | Value type | Possible values |
+------------+------------+-----------------+
| confidence | any_number |       None      |
+------------+------------+-----------------+
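The raw meta_json dictionary carries the same information in JSON form. As a sketch (the fragment below is hand-written for illustration; field names such as "classes", "title" and "shape" follow the standard Supervisely project meta format, but verify the exact schema against your instance), here is how one might extract the class names directly from the JSON:

```python
# A hand-written fragment in the style of Supervisely's project meta JSON.
meta_json_sample = {
    "classes": [
        {"title": "person_model", "shape": "rectangle", "color": "#92D086"},
        {"title": "car_model", "shape": "rectangle", "color": "#E9BDCF"},
    ],
}

# Collect the class titles the model can predict.
class_names = [obj_class["title"] for obj_class in meta_json_sample["classes"]]
print(class_names)  # ['person_model', 'car_model']
```

In practice you would iterate over model_meta.obj_classes from the parsed ProjectMeta instead of touching the JSON directly.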

Inference with a locally stored image

We can finally start with inference requests. The first example shows how to handle an image loaded into local memory as a NumPy array. The inference result is a serialized image Annotation, another fundamental class from our SDK that stores image labeling data. See our tutorial #1 for a detailed look at image annotations.

In [10]:
img = sly.image.read('./image_01.jpeg')

# Make an inference request, get a JSON serialized image annotation.
ann_json = api.model.inference(model.id, img)

# Deserialize the annotation using the model meta information that
# we received previously.
ann = sly.Annotation.from_json(ann_json, model_meta)

# Render the inference results.
draw_labeled_image(img, ann)
Out [10]:
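Since every predicted object carries a "confidence" tag (see the model meta above), a common post-processing step is to discard low-confidence predictions. The sketch below does this on a raw annotation dictionary; the "objects", "classTitle" and "tags" field names follow the standard Supervisely annotation JSON format, and the sample values are made up for illustration:

```python
def filter_by_confidence(ann_json, min_confidence=0.5):
    """Keep only objects whose 'confidence' tag is at least min_confidence."""
    def confidence(obj):
        for tag in obj.get("tags", []):
            if tag.get("name") == "confidence":
                return tag.get("value", 0.0)
        return 0.0

    filtered = dict(ann_json)
    filtered["objects"] = [
        obj for obj in ann_json.get("objects", [])
        if confidence(obj) >= min_confidence
    ]
    return filtered

# Hand-written sample mimicking an inference result.
sample_ann = {
    "size": {"height": 800, "width": 1200},
    "objects": [
        {"classTitle": "car_model",
         "tags": [{"name": "confidence", "value": 0.92}]},
        {"classTitle": "dog_model",
         "tags": [{"name": "confidence", "value": 0.31}]},
    ],
}
strong = filter_by_confidence(sample_ann, min_confidence=0.5)
print([obj["classTitle"] for obj in strong["objects"]])  # ['car_model']
```

With the SDK objects, the equivalent idea is to filter ann.labels by the value of their confidence tag before drawing or uploading the annotation.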