Quick Start

ABOUT:

This application consumes an Amazon SageMaker object-detection inference endpoint via a Lambda function. It provides both a visual and a text output of an object-detection inference model run against a provided image.
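
For reference, the sketch below shows one way such a Lambda function might invoke the endpoint with boto3. The event keys (image, endpoint_name), the content type, and the response format noted in the comments are assumptions based on the built-in SageMaker Object Detection algorithm, not a description of this application's actual interface.

```python
import base64
import json

import boto3

runtime = boto3.client("sagemaker-runtime")


def lambda_handler(event, context):
    # The image is assumed to arrive base64-encoded in the invocation event.
    image_bytes = base64.b64decode(event["image"])

    response = runtime.invoke_endpoint(
        EndpointName=event["endpoint_name"],
        ContentType="application/x-image",  # raw image bytes
        Body=image_bytes,
    )

    # The built-in SageMaker Object Detection algorithm returns JSON like:
    # {"prediction": [[class_index, confidence, xmin, ymin, xmax, ymax], ...]}
    # with box coordinates normalized to [0, 1].
    predictions = json.loads(response["Body"].read())

    return {"statusCode": 200, "body": json.dumps(predictions)}
```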

Quick Start:

To use the application, follow the steps below:

  1. Select the name of the SageMaker endpoint in the 'Enter Endpoint' field.
    • The SageMaker endpoints in the selected region will be displayed; if none are available, check that the region has 'InService' endpoints to consume.
  2. Enter the class labels used when training the model into the 'Inference Class List' by clicking 'Add Class' and typing in the ordered class list:
    • This step is optional and is only used for labeling.
    • If you don't know the class labels, you can run the inference once and then assign classes that make sense for subsequent runs.
  3. Select the local image to process against the inference endpoint from the 'Select Inference Image' field.
  4. Select the initial desired Confidence Threshold for inference:
    • This can be updated without reprocessing the inference, so don't worry too much about the initial value.
  5. Finally, click 'Submit':
    • At this point the image will be sent to the Amazon SageMaker endpoint for inference.
    • The response will be processed, and all identified objects will be displayed in the Inference Display readout below the image.
    • Only identified objects with a confidence above the Confidence Threshold will be drawn with bounding boxes on the image (see the post-processing sketches after this list).
  6. Once the inference has been processed, you can adjust the Confidence Threshold slider to see at what threshold a specific detection becomes accepted; the stored results are simply re-filtered, so the endpoint is not called again.
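
As a rough illustration of the post-processing described in steps 2, 4, and 5, the sketch below filters the returned predictions by the Confidence Threshold and attaches the optional class labels. The function and variable names are illustrative, not the application's actual names; because the raw predictions are kept, moving the threshold slider only re-runs this filter rather than invoking the endpoint again.

```python
def label_detections(predictions, class_labels, confidence_threshold):
    """Keep detections above the threshold and attach human-readable labels.

    `predictions` is assumed to be the list returned by the endpoint:
    [[class_index, confidence, xmin, ymin, xmax, ymax], ...].
    """
    detections = []
    for class_index, confidence, xmin, ymin, xmax, ymax in predictions:
        if confidence < confidence_threshold:
            continue  # below the slider value: report nothing, draw nothing
        index = int(class_index)
        # Fall back to a generic name when no class list was supplied.
        label = class_labels[index] if index < len(class_labels) else f"class_{index}"
        detections.append(
            {"label": label, "confidence": confidence, "box": (xmin, ymin, xmax, ymax)}
        )
    return detections
```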
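
A similarly hedged sketch of drawing the bounding boxes for the accepted detections, assuming Pillow is available and that box coordinates are normalized to [0, 1] as in the format above:

```python
from PIL import Image, ImageDraw


def draw_boxes(image_path, detections, output_path):
    """Draw a labeled box for every detection kept by the threshold filter."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    width, height = image.size

    for det in detections:
        xmin, ymin, xmax, ymax = det["box"]
        # Scale normalized coordinates up to pixel values.
        box = (xmin * width, ymin * height, xmax * width, ymax * height)
        draw.rectangle(box, outline="red", width=3)
        draw.text(
            (box[0], max(box[1] - 12, 0)),
            f"{det['label']} {det['confidence']:.2f}",
            fill="red",
        )

    image.save(output_path)
```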