Creating a simple AI App

While the Wedge allows you to run an AI model on a node using the App Control UI, you may wish to create your own AI app that can include custom pre- and post-processing code.

In this tutorial you will create a minimal AI application that:

  • connects to an RTSP stream,
  • runs an ONNX model for classifying the frames, and
  • writes the output to a topic of an MQTT Broker.

The model that we’ve chosen to use for this tutorial is the GoogleNet ONNX model, one of many models trained to classify images based on the 1000 classes of ImageNet.

Prerequisites

This tutorial assumes that:

  • You know the basics of Python and have pip installed.
  • You have Docker installed.
  • You have access to an RTSP video stream that provides the input images.

Create the application

To get started, create a new directory locally to store the tutorial files, and name it minimal_ai.

The main file of the minimal AI application will do the heavy lifting:

  • connecting to an RTSP stream,
  • downloading and running an ONNX model for classifying the frames,
  • matching the inferences to a downloaded set of classes, and
  • writing the class names to an MQTT Broker topic.

It will take in three command line arguments:

  • the URL of the RTSP stream,
  • the URL of the MQTT Broker, and
  • the name of the MQTT topic.

The following is an example of this application. Download or copy it into the minimal_ai directory as minimal_ai_app.py.

# file: minimal_ai/minimal_ai_app.py

import sys
import rtsp
import onnxruntime as ort
import numpy as np
import paho.mqtt.client as mqtt
import requests
from preprocess import preprocess

if __name__ == '__main__':

    # python3 minimal_ai_app.py <url of RTSP stream> <url of MQTT Broker> <MQTT topic>

    if len(sys.argv) != 4:
        raise ValueError("This demo app expects 3 arguments but received %d" % (len(sys.argv) - 1))

    # Load in the command line arguments
    rtsp_stream, mqtt_broker, mqtt_topic = sys.argv[1], sys.argv[2], sys.argv[3]

    # Download the model
    model = requests.get('https://github.com/onnx/models/raw/main/vision/classification/inception_and_googlenet/googlenet/model/googlenet-12.onnx')
    open("model.onnx" , 'wb').write(model.content)
    
    sess_options = ort.SessionOptions()
    sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    sess_options.optimized_model_filepath = "optimized_model.onnx"

    session = ort.InferenceSession("model.onnx", sess_options)
    inname = [inp.name for inp in session.get_inputs()]

    # Download the class names
    labels = requests.get('https://raw.githubusercontent.com/onnx/models/main/vision/classification/synset.txt')
    open("synset.txt" , 'wb').write(labels.content)
    with open("synset.txt", 'r') as f:
        labels = [l.rstrip() for l in f]

    # Connect to the MQTT Broker
    mqtt_client = mqtt.Client()
    mqtt_client.connect(mqtt_broker)
    mqtt_client.loop_start()

    # Connect to the RTSP Stream
    rtsp_client = rtsp.Client(rtsp_server_uri = rtsp_stream)
    while rtsp_client.isOpened():
        img = rtsp_client.read()
        if img is not None:

            img = preprocess(img)
            preds = session.run(None, {inname[0]: img})
            pred = np.squeeze(preds)
            # Sort the class indices by descending score; a[0] is the best match
            a = np.argsort(pred)[::-1]
            print(labels[a[0]])
            mqtt_client.publish(mqtt_topic, labels[a[0]])

    rtsp_client.close()
    mqtt_client.disconnect()

This simple application can almost run by itself; the one missing piece is making sure that the input frames are preprocessed in the way the model expects. For the GoogleNet ONNX model, you can use the preprocess function available online. Next, create the file preprocess.py in the minimal_ai directory, copy the preprocess function into it, and import numpy at the top of the file:

# file: minimal_ai/preprocess.py
# from https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet#obtain-and-pre-process-image

import numpy as np

# Pre-processing function for ImageNet models using numpy
def preprocess(img):
    '''
    Preprocessing required on the images for inference with mxnet gluon
    The function takes loaded image and returns processed tensor
    '''
    # Resize to the 224x224 input size and convert to float32
    img = np.array(img.resize((224, 224))).astype(np.float32)
    # Subtract the per-channel ImageNet means
    img[:, :, 0] -= 123.68
    img[:, :, 1] -= 116.779
    img[:, :, 2] -= 103.939
    # Reorder the colour channels from RGB to BGR
    img[:, :, [0, 1, 2]] = img[:, :, [2, 1, 0]]
    # Rearrange from HWC to CHW and add a batch dimension: (1, 3, 224, 224)
    img = img.transpose((2, 0, 1))
    img = np.expand_dims(img, axis=0)

    return img
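
As a quick sanity check, you can confirm that preprocess produces the tensor layout the model expects: a float32 NCHW array of shape (1, 3, 224, 224). The following sketch assumes Pillow is installed (the rtsp package hands frames to the application as PIL images):

# Sanity check (sketch): preprocess() should return a (1, 3, 224, 224) float32 tensor
from PIL import Image
import numpy as np
from preprocess import preprocess

img = Image.new("RGB", (640, 480))  # dummy frame standing in for an RTSP frame
tensor = preprocess(img)
assert tensor.shape == (1, 3, 224, 224)
assert tensor.dtype == np.float32
print("preprocess OK:", tensor.shape, tensor.dtype)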

Test the application

With the preprocessing file and function added, you can test this Python app locally with the following commands, replacing [your-rtsp-stream-url] with the URL of your RTSP stream and [your-mqtt-topic] with a suitable topic:

For this tutorial we have chosen to use the public MQTT broker provided by broker.hivemq.com. Please choose a unique topic name for your demo setup, since other users can also connect to the public broker and publish messages.

# install the required packages
pip install onnxruntime rtsp numpy paho-mqtt requests

# run the demo application
python minimal_ai_app.py [your-rtsp-stream-url] broker.hivemq.com [your-mqtt-topic]

Running this command should download the model and class names, connect to the RTSP stream, and run the AI model on the frames after applying preprocessing. Based on the model predictions, the most likely image class from ImageNet should be printed in the terminal and published to the MQTT broker topic.
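
If you want more detail than the single best label, a small variation on the loop body prints the five highest-scoring classes. This sketch reuses the pred and labels variables defined in minimal_ai_app.py and prints the raw model scores without any normalisation:

# Sketch: print the five highest-scoring ImageNet classes for a frame,
# reusing `pred` and `labels` as defined in minimal_ai_app.py
import numpy as np

def print_top5(pred, labels):
    for i in np.argsort(pred)[::-1][:5]:
        print("%s: %.4f" % (labels[i], pred[i]))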

Access the broker in the browser

Since we are using the public MQTT broker from HiveMQ, you can view the results of your model’s predictions with the online MQTT Client from HiveMQ. To do so, complete the following steps.

  1. Navigate to the MQTT Client in your browser.
  2. Select Connect, then Add New Topic Subscription.
  3. Enter the same value for [your-mqtt-topic] that you used when running minimal_ai_app.py, then click Subscribe.

Access the broker in the terminal

Alternatively, you can install the Mosquitto package on your local machine and open a terminal. Run the command below, replacing [your-mqtt-topic] with the topic used when running minimal_ai_app.py, to subscribe to the public broker; prediction messages should appear in the terminal window.

mosquitto_sub -h broker.hivemq.com -p 1883 -t [your-mqtt-topic]
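
If you prefer to stay in Python, you can run the same check with the paho-mqtt package that the application already uses. The following sketch subscribes to the broker and prints every message it receives; replace [your-mqtt-topic] as before:

# Sketch: subscribe to the public HiveMQ broker with paho-mqtt and
# print the incoming prediction messages
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print("%s: %s" % (msg.topic, msg.payload.decode()))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.hivemq.com", 1883)
client.subscribe("[your-mqtt-topic]")
client.loop_forever()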

Create the Docker Image

Now that your simple Python application is running locally, you can containerise it using a Dockerfile. The example Dockerfile below does the following:

  • defines an image based on python:3.9,
  • installs the system libraries needed by the video and imaging packages,
  • runs pip install to install the required Python packages, and
  • defines the command to run when the container starts. In this case, the command is similar to the one used above to test locally, but the command line arguments are provided by environment variables that are set when running the container.
  1. Save the following in the minimal_ai directory as a file called Dockerfile:

    # file: minimal_ai/Dockerfile
    
    FROM python:3.9
    
    WORKDIR /usr/src/app
    
    RUN apt-get update && \
        apt-get install ffmpeg libsm6 libxext6 -y
    
    RUN pip install --no-cache-dir --upgrade pip && \
        pip install --no-cache-dir onnxruntime rtsp numpy paho-mqtt requests
    
    COPY *.py .
    
    CMD python -u minimal_ai_app.py $RTSP_STREAM $MQTT_BROKER $MQTT_TOPIC
  2. With the Dockerfile created, you can build a local version of the image and name it minimal-ai-app by running the following command in the minimal_ai directory:

    docker build -t minimal-ai-app .
    
  3. Once the image has been created, you can run it locally using Docker to check that everything was defined correctly. The following command runs the minimal-ai-app image (replace [your-rtsp-stream-url] with the URL of your RTSP stream and [your-mqtt-topic] with a suitable topic).

    Please choose a unique topic name for your demo setup, since other users can also connect to the public broker and publish messages.

    docker run -d \
        -e RTSP_STREAM=[your-rtsp-stream-url] \
        -e MQTT_BROKER=broker.hivemq.com \
        -e MQTT_TOPIC=[your-mqtt-topic] \
        --name minimal-ai-container minimal-ai-app
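
    Since the container runs detached (-d), you can check that it started correctly and watch the predictions by tailing its logs:

    docker logs -f minimal-ai-container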
    

Create the Great Bear Application Package (GBAP)

  1. With the Dockerfile created, you can now move on to creating the GBAP, which is used to publish your application to the Great Bear Application Store for subsequent deployment to Great Bear managed sites through the Great Bear Dashboard.

  2. Follow the steps to install and configure the Great Bear Packaging SDK tool.

  3. Use the SDK tool to create a new GBAP instance within your minimal_ai application directory.

    gbear app create minimal_ai/GBAP
    

    Enter the chart name, name of the application, and the description of the application when prompted:

    Creating a GBAP with required fields....
    Enter the chart name:
    minimal-ai-app
    Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
    
    Enter the application name:
    Minimal AI App
    Enter the application description:
    A minimal AI application running on Great Bear
    Successfully created a GBAP in path minimal_ai/GBAP
    
  4. The SDK tool has now created a boilerplate GBAP filesystem tree:

    minimal_ai/GBAP
    ├── Chart.yaml
    ├── gbear
    │   └── appmetadata.yaml
    

Author the GBAP Assets

The GBAP contains the following two fundamental elements:

  1. A Helm Chart to describe a set of Kubernetes resources used to deploy the application at Great Bear managed sites.
  2. Great Bear specific application metadata (gbear/appmetadata.yaml), containing:
    • Properties used to render the application in the Great Bear Application Store
    • Properties used to define application parameters which can be overridden when deploying the Helm Chart.

For further details on the GBAP structure, see the Great Bear Application Package Guide.

Extend the GBAP Helm Chart

The final things that you need to build are the elements of the Helm Chart for deploying the container image to Great Bear sites as an application. Helm Charts describe Kubernetes applications and their components: rather than creating YAML files for every application, you can provide a Helm chart and use Helm to deploy the application for you. Therefore, the next steps are to create a very basic Helm Chart that will contain a template for the Kubernetes resource that will form your application, and a values file to populate the template placeholder values.

A Chart.yaml file is required for any Helm Chart, and contains high-level information about the application (you can find out more in the Helm Documentation). You can set a number of Great Bear specific parameters in the Helm chart of your application.

  1. The Packaging SDK has created the root chart file Chart.yaml within the GBAP filesystem tree:

    apiVersion: v2
    name: minimal-ai-app
    version: 0.0.1
    
  2. The next step is to create a directory called templates inside the GBAP directory; this is where you will create the template files for your application. When using Helm to install a chart to Kubernetes, the template rendering engine populates the files in the templates directory with the desired values for the deployment.

  3. In the new templates directory, create a file called _helpers.tpl, which will hold some template helpers that you can re-use throughout the chart. Your _helpers.tpl file should contain the following helpers:

    # file: minimal_ai/GBAP/templates/_helpers.tpl
    
    {{/*
    Expand the name of the chart.
    */}}
    {{- define "app.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "app.fullname" -}}
    {{- if .Values.fullnameOverride }}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- $name := default .Chart.Name .Values.nameOverride }}
    {{- if contains $name .Release.Name }}
    {{- .Release.Name | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
    {{- end }}
    {{- end }}
    {{- end }}
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "app.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Common labels
    */}}
    {{- define "app.labels" -}}
    helm.sh/chart: {{ include "app.chart" . }}
    {{ include "app.selectorLabels" . }}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    {{- end }}
    
    {{/*
    Selector labels
    */}}
    {{- define "app.selectorLabels" -}}
    app.kubernetes.io/name: {{ include "app.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end }}
    
    {{/*
    Create the name of the service account to use
    */}}
    {{- define "app.serviceAccountName" -}}
    {{- if .Values.serviceAccount.create }}
    {{- default (include "app.fullname" .) .Values.serviceAccount.name }}
    {{- else }}
    {{- default "default" .Values.serviceAccount.name }}
    {{- end }}
    {{- end }}
    
  4. The next file you need in the templates directory is configmap.yaml. ConfigMaps can be used to store key-value data that application pods can then consume as environment variables, command-line arguments, or as configuration files in a volume.

    Inside the templates directory create a file called configmap.yaml, with the following content:

    # file: minimal_ai/GBAP/templates/configmap.yaml
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Values.name }}
    data:
      RTSP_STREAM: |
            {{ .Values.config.RTSP_STREAM }}
      MQTT_BROKER: |
            {{ .Values.config.MQTT_BROKER }}
      MQTT_TOPIC: |
            {{ .Values.config.MQTT_TOPIC }}
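
    For example, if config.MQTT_TOPIC were set to demo/inferences, the rendered ConfigMap data would contain the following (a sketch, other keys omitted). Note that the | block scalar keeps the value on its own indented line and appends a trailing newline, which the unquoted shell expansion in the Dockerfile's CMD strips again:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: minimal-ai
    data:
      MQTT_TOPIC: |
            demo/inferences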
  5. The final file that you need to create inside the templates directory is minimal-ai-app-deployment.yaml; this template describes the application's desired state.

    Create the file minimal-ai-app-deployment.yaml inside the templates directory, with the following content:

    # file: minimal_ai/GBAP/templates/minimal-ai-app-deployment.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "app.fullname" . }}
      labels:
        {{- include "app.labels" . | nindent 4 }}
    spec:
      replicas: {{ .Values.replicaCount }}
      selector:
        matchLabels:
          {{- include "app.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "app.selectorLabels" . | nindent 8 }}
        spec:
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
          containers:
            - name: {{ .Values.name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: RTSP_STREAM
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "RTSP_STREAM"
                - name: MQTT_BROKER
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "MQTT_BROKER"
                - name: MQTT_TOPIC
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "MQTT_TOPIC"
          restartPolicy: Always
    

    The parts of the deployment file that are enclosed in {{ and }} blocks, such as {{ .Values.name }}, are called template directives. The template directives are populated by the template rendering engine, and in this case look up their values in the values.yaml file, which contains the default values for a chart.
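
    To preview how the rendering engine resolves these directives, you can run Helm's template command against the chart (a standard Helm command; run it from the minimal_ai directory once the values.yaml file from the next step is in place):

    helm template minimal-ai GBAP/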

  6. The final Helm component that you have to create is the values.yaml file, which you should create in the GBAP directory. Inside the values.yaml file you need to define the default values for the template directives in the deployment file. It also defines the image repository location; make sure to replace <tenant-id> in the repository value with the tenant ID communicated to you by Cisco. Copy the following content into the file:

    # file: minimal_ai/GBAP/values.yaml
    
    replicaCount: 1
    
    name: "minimal-ai"
    
    image:
      repository: repo.greatbear.io/<tenant-id>/minimal-ai
      tag: 0.0.1
      pullPolicy: Always
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    nameOverride: ""
    fullnameOverride: "minimal-ai"
    
    # Config parameters described in application metadata
    # and their values will be put here by the GB-deployer
    config:
      # The url of the RTSP stream
      RTSP_STREAM: ""
      # The host of the MQTT Broker
      MQTT_BROKER: ""
      # The topic for inferences
      MQTT_TOPIC: ""
    

Customise the Great Bear Application Metadata

  1. The SDK tool has generated a boilerplate gbear/appmetadata.yaml file. Extend the application metadata file to include the following properties, allowing the user to set their own values for RTSP_STREAM, MQTT_BROKER, and MQTT_TOPIC when they deploy the app via the Application Store:

    name: Minimal AI App
    description: A minimal AI application running on Great Bear
    configuration:
      - name: RTSP_STREAM
        title: The url of the RTSP stream
        type: String    
      - name: MQTT_BROKER
        title: The host of the MQTT Broker
        type: String
      - name: MQTT_TOPIC
        title: The topic for inferences
        type: String

Validate the Application Package

By following these steps you should now have a complete application: the Python code that connects to an RTSP stream, runs an ONNX model to classify the frames, and writes the output to an MQTT broker topic, along with the associated Dockerfile and Helm Chart.

  1. Before moving on to building and uploading the Docker image and publishing the Helm Chart to Great Bear, you can first test that everything is set up correctly by running the following command from the minimal_ai directory:

    helm install minimal-ai GBAP/ --dry-run --debug
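
    When Great Bear deploys the application, the deployer fills in the config values from the parameters entered in the Application Store. You can emulate this locally by overriding the empty defaults with Helm's --set flag, for example:

    helm install minimal-ai GBAP/ --dry-run --debug \
        --set config.RTSP_STREAM=[your-rtsp-stream-url] \
        --set config.MQTT_BROKER=broker.hivemq.com \
        --set config.MQTT_TOPIC=[your-mqtt-topic]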
    
  2. Now that you have finished authoring your GBAP assets and tested that the embedded Helm Chart works, use the SDK tool to perform a final validation of the complete GBAP contents:

    gbear app validate minimal_ai/GBAP
    

    Expected output:

    Validating Package: charts/
    -- Validating Application Metadata...
    -- Validating Chart Dependencies...
    -- Validating Kubernetes Resource Labels...
    Validation Complete:
    Application Metadata: Passed (Errors: 0, Warnings: 0)
    Chart Dependencies: Passed (Errors: 0, Warnings: 0)
    Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
    

For further details on GBAP validation, see Validate an Application Package.

Assuming that the command above didn’t give errors, you can move on to uploading the Minimal AI app to Great Bear, as described in Publish an Application Package.