Deploying an AI Model to Great Bear

This guide gives an example of how to configure the Wedge app to run an AI model on a node using the App Control UI. The guide assumes that you have already deployed Wedge to a site, and have access to the App Control UI. For more information about how to deploy Wedge, see Deployment.

You are going to:

  • run the MobileNet image classification model on a public RTSP stream,
  • use preprocessing to normalize the images extracted from the input video,
  • apply postprocessing to the model output to select the most likely class,
  • write the resulting inferences to a public MQTT broker, and
  • finally, connect to the public MQTT broker and subscribe to the real-time inferences.

Steps

  1. Open the App Control UI in your browser:

    https://app-control.greatbear.io

    and then select a target node from the available Wedge Sites.

  2. First you need to provide the URL of an RTSP stream that the Wedge application should connect to. The stream must be reachable by the node at the network level, such as a local Meraki camera feed. In this example you can use a publicly available RTSP stream to test your model deployment. Enter the following URL into the Input field:

    rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4

    Since this is a public RTSP video stream provided for testing purposes, it sometimes becomes unavailable. You can verify that it is online using a tool like VLC.
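
    You can also check the stream from a script. The following minimal sketch performs the same availability check with OpenCV (the opencv-python package is an assumption here, not part of the Wedge deployment):

      import cv2  # pip install opencv-python

      URL = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4"

      cap = cv2.VideoCapture(URL)
      ok, _frame = cap.read()  # try to grab a single frame from the stream
      print("stream is up" if ok else "stream is unavailable")
      cap.release()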

  3. Next, provide the URL of an MQTT broker and topic where the Wedge application should publish inferences. In this example you use the publicly available MQTT broker provided by HiveMQ. Enter the following URL into the Output field (replacing [unique_topic_name] with a suitable topic):

    mqtt://[unique_topic_name]@broker.hivemq.com:1883

    Please choose and remember a unique topic name for this demo deployment, since other users can also connect to the public MQTT broker and publish messages.
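
    To confirm that the broker is reachable and your topic name works before deploying, you can publish a test message with the paho-mqtt client. This is a hedged sketch; the paho-mqtt package and the example topic name are assumptions:

      import paho.mqtt.client as mqtt  # pip install paho-mqtt

      TOPIC = "my-unique-topic-1234"  # hypothetical; substitute your [unique_topic_name]

      client = mqtt.Client()  # on paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION1 first
      client.connect("broker.hivemq.com", 1883)
      client.loop_start()
      # Publish a test message and wait until it has actually been sent
      client.publish(TOPIC, "hello from my workstation").wait_for_publish()
      client.loop_stop()
      client.disconnect()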

  4. After defining the input and output configuration for the Wedge app, specify the URL of a model to be downloaded and run. Wedge can download ONNX and TFLite models from direct links, such as a model hosted on GitHub, or from an AWS or MinIO S3 bucket. In this example you can use the publicly available MobileNet ONNX model, so enter the following in the Model field:

    https://github.com/onnx/models/blob/main/vision/classification/mobilenet/model/mobilenetv2-7.onnx
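
    If you want to confirm the model’s expected input shape before deploying (it informs the preprocessing values in the next step), you can inspect a local copy with onnxruntime. This sketch assumes you have downloaded the model file as mobilenetv2-7.onnx and installed onnxruntime; neither is required for the Wedge deployment itself:

      import onnxruntime as ort  # pip install onnxruntime

      session = ort.InferenceSession("mobilenetv2-7.onnx")
      inp = session.get_inputs()[0]
      print(inp.name, inp.shape)  # typically a (batch, 3, 224, 224) input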

  5. As the MobileNet model expects input images to be normalized in a standard way, select Preprocess and define the following values (a code sketch of these operations follows the list):

    • Resizing Height & Width is an important preprocessing step in computer vision: models are often trained on images of a specific size, so to make the most of your model’s capabilities, choose a height and width that match the dimensions of the training dataset. For the MobileNet model, this is 224 for both height and width.

    • Mean Normalization is an image preprocessing technique that standardizes RGB data around per-channel mean values. It brings data from different sources into the same range and, as a result, improves inference performance. For the MobileNet model, the expected mean normalization values are 0.485, 0.456, 0.406.

    • Standard Deviation scaling is another standardization technique: values are centered around the mean and scaled to unit standard deviation, a measure of how spread out they are. For the MobileNet model, the expected standard deviation values are 0.229, 0.224, 0.225.
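
    Conceptually, these three settings correspond to the following transformation. The sketch below illustrates in NumPy/OpenCV what the preprocessing amounts to; it is not Wedge’s actual implementation, and the numpy and opencv-python packages are assumptions:

      import cv2  # pip install opencv-python numpy
      import numpy as np

      MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
      STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

      def preprocess(frame_bgr):
          # Resize to the 224x224 input size MobileNet was trained on
          img = cv2.resize(frame_bgr, (224, 224))
          # OpenCV frames are BGR; MobileNet expects RGB scaled to [0, 1]
          img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
          # Mean normalization and standard-deviation scaling, per channel
          img = (img - MEAN) / STD
          # HWC -> NCHW with a leading batch dimension, as the ONNX model expects
          return img.transpose(2, 0, 1)[np.newaxis, ...]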

  6. As the MobileNet model outputs scores for each of the 1000 ImageNet classes, you can apply standard postprocessing to obtain an image classification. Select Postprocess to calculate the softmax probability scores for each class and then match the most probable class to a list of class names. Specify the class names file in the Classes field. In this example, use:

    https://github.com/onnx/models/blob/main/vision/classification/synset.txt
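
    The postprocessing step amounts to a softmax over the 1000 class scores followed by a lookup into the class names file. As a minimal illustration (again a sketch rather than Wedge’s implementation; numpy is an assumption):

      import numpy as np

      def postprocess(logits, class_names):
          # Softmax over the 1000 ImageNet class scores
          exp = np.exp(logits - logits.max())
          probs = exp / exp.sum()
          # Select the most probable class and map it to a human-readable name
          idx = int(probs.argmax())
          return class_names[idx], float(probs[idx])

      # class_names would be read from synset.txt, one label per line:
      # class_names = open("synset.txt").read().splitlines()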

  7. Once all the values above have been set, your configuration should look like the following:

    (Screenshot: Final view of the browser page)

    In this example, the model and class files are public and can be downloaded from GitHub; however, Wedge can also download files from AWS and MinIO S3 buckets. To use this approach, use the following format in the Model and Classes fields: https://[access_key]:[secret_key]@[host]:[port]/[bucket]/[file]

  8. Finally, select the node on which you wish to deploy your model, and click Update Content. You will see a success message indicating that the node has received the parameters, and in the list view the node’s configuration will have been updated.

  9. To view the results of your deployed model’s predictions, you can:

    • Use the publicly available MQTT Client from HiveMQ. To do so, complete the following steps:

      1. Navigate to the MQTT Client in your browser.

      2. Select Connect, then Add New Topic Subscription.

      3. Enter the same [unique_topic_name] that you used when deploying the MobileNet model to the node, then click Subscribe.

        If everything is set up correctly, you should see prediction messages appearing in the browser.

        (Screenshot: Node new configurations)

    • Alternatively, install the Mosquitto package on your local machine, then open a terminal. Run the following command (replacing [unique_topic_name] with the topic used when deploying the MobileNet model to the node) to subscribe to the public broker and see prediction messages appearing in the terminal window:

      mosquitto_sub -h broker.hivemq.com -p 1883 -t [unique_topic_name]
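
    • Or subscribe from Python with the paho-mqtt client. This is a minimal sketch; the paho-mqtt package and the example topic name are assumptions, so substitute your own [unique_topic_name]:

      import paho.mqtt.client as mqtt  # pip install paho-mqtt

      TOPIC = "my-unique-topic-1234"  # hypothetical; substitute your [unique_topic_name]

      def on_message(client, userdata, msg):
          # Print each inference message as it arrives
          print(msg.topic, msg.payload.decode())

      client = mqtt.Client()  # on paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION1 first
      client.on_message = on_message
      client.connect("broker.hivemq.com", 1883)
      client.subscribe(TOPIC)
      client.loop_forever()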

  10. To remove the model and all other configuration from a node, select the desired device and click the Remove Content button. The target device will disconnect from the input feed and stop running the AI model.