Deployment
Prerequisites
To deploy Wedge, you also need access to the App Control UI to interact with your Wedge deployments: the URL of your organization's App Control UI, and your username and password.
Deploying Wedge
- To deploy the Wedge application to a site, you need an API key so that you can control the Wedge App from the App Control UI. To retrieve this key, complete the following steps:
  - Open the App Control UI in your browser and log in.
  - Click the Generate API Key button on the side menu, and copy the token that is displayed on the screen.
- Deploy the Wedge application to a site, and configure Wedge:
  - Paste the API key into the deployment parameters. That way the Wedge application deployed at the edge can communicate with the App Control UI. This allows you to remotely control the app's video input, the model that should run, and where inferences are written, and to define various pre- and postprocessing parameters. (For details on using the App Control UI, see this section.)
  - (Optional) Configure other parameters as needed:
    - Enable Monitoring: Enable monitoring of the Wedge application.
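As a sketch, the deployment parameters might look like the following fragment. The key names (`apiKey`, `enableMonitoring`) are hypothetical placeholders, not the actual field names of the deployment form; use the parameter names shown in your environment.

```yaml
# Hypothetical deployment parameters for a Wedge instance.
# Key names are illustrative only.
apiKey: "<token copied from the App Control UI>"  # lets the edge app talk to the App Control UI
enableMonitoring: true                            # corresponds to the Enable Monitoring option
```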
Running AI Models using the App Control UI
Once you have successfully deployed Wedge to the nodes in your site, you can use the App Control UI to configure each node’s desired AI workload parameters. The steps to do so are:
- Open the App Control UI in your browser and log in.
- The ‘Wedge Nodes’ section of the App Control UI lists the devices that are available to run AI workloads.
  Only the sites and devices that are currently online are shown. Devices that are shut down or unreachable because of network connection issues are not displayed in the App Control UI.
The following information is shown about each device:
- Node Name: The name of the device
- Site Name: The site where the device is located
- Input: The URL that the device is currently using as video input
- Output: The URL to where the device is currently writing inference output
- Model: The URL of the model that the device is currently running
- Actions: Buttons to edit or remove content displayed on the device
To visualize which devices are located at the same site, select Overview to see a tree-like view of the available nodes and sites.
- Use the App Control UI to change the AI workload parameters that a device is running:
  - Select the edit icon (in the Actions column) of the device you want to configure. If you have many devices, you can use the Search field to find them.
  - In the Input field, enter the source URL of the video feed for your model. You can use local (on-premises) and public video sources that are accessible over RTSP. Note that the target device must be able to access the video source at the network level.
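Before sending the parameters, it can help to verify from a machine on the same network that the RTSP source is reachable. The following minimal sketch (Python standard library only; the URL in the comment is a placeholder) checks TCP-level reachability, which is the same network-level access the target device needs:

```python
import socket
from urllib.parse import urlparse

def rtsp_endpoint(url: str) -> tuple[str, int]:
    """Extract (host, port) from an RTSP URL, defaulting to port 554."""
    parsed = urlparse(url)
    if parsed.scheme != "rtsp" or not parsed.hostname:
        raise ValueError(f"not an RTSP URL: {url!r}")
    return parsed.hostname, parsed.port or 554  # 554 is the RTSP default port

def rtsp_reachable(url: str, timeout: float = 3.0) -> bool:
    """Check network-level reachability of the RTSP host (TCP connect only).

    This does not validate the stream itself, only that the host
    accepts connections on the RTSP port.
    """
    host, port = rtsp_endpoint(url)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address):
# rtsp_reachable("rtsp://camera.local/stream")
```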
  - In the Output field, enter the URL of the MQTT broker that the device should write AI model inferences to. Note that the target device must be able to access the broker at the network level.
    If you are running split models, the Input and Output fields can also point to the TCP input/output of another node.
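The same kind of network-level check applies to the broker. The sketch below resolves the host and port from a broker URL; the `mqtt://`/`mqtts://` scheme names and the default ports 1883/8883 follow common MQTT convention, but the exact URL format Wedge accepts may differ:

```python
from urllib.parse import urlparse

def mqtt_endpoint(url: str) -> tuple[str, int]:
    """Extract (host, port) from an MQTT broker URL.

    Defaults follow MQTT convention: 1883 for mqtt://,
    8883 for TLS (mqtts://).
    """
    defaults = {"mqtt": 1883, "mqtts": 8883}
    parsed = urlparse(url)
    if parsed.scheme not in defaults or not parsed.hostname:
        raise ValueError(f"not an MQTT broker URL: {url!r}")
    return parsed.hostname, parsed.port or defaults[parsed.scheme]

# Example (placeholder address):
# mqtt_endpoint("mqtt://broker.local/inferences")
```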
  - In the Model field, set the source URL of the model you wish the device to run. Wedge downloads the model from this URL; AWS and MinIO S3 buckets are also supported.
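The download itself is performed by the device, but the mechanics can be pictured with a short standard-library sketch. For S3-hosted models (AWS or MinIO), a public or presigned HTTPS URL can be fetched the same way; the URL and file names below are placeholders:

```python
import urllib.request
from pathlib import Path

def download_model(url: str, dest: Path) -> Path:
    """Fetch a model artifact from a URL into a local file.

    Illustrative only: the Wedge device performs this download
    itself when it receives new workload parameters.
    """
    dest.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest

# Example (placeholder URL):
# download_model("https://models.example/model.onnx", Path("models/model.onnx"))
```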
  - Many AI models require standard preprocessing of the images from the video input. To apply it, enable the Preprocess toggle switch and adjust the parameters as needed for your environment.
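As an illustration of the kind of preprocessing such parameters control, the sketch below scales 8-bit pixel values to [0, 1] and normalizes them. The default mean and std are the widely used ImageNet statistics for a single channel, shown as an example only; the values your model needs may differ:

```python
def preprocess_pixel(value: int,
                     mean: float = 0.485,
                     std: float = 0.229) -> float:
    """Scale an 8-bit pixel value to [0, 1], then normalize.

    The defaults are common ImageNet statistics for one channel;
    actual Wedge preprocessing parameters may differ.
    """
    return (value / 255.0 - mean) / std

def preprocess_image(pixels: list[list[int]]) -> list[list[float]]:
    """Apply per-pixel normalization to a single-channel image."""
    return [[preprocess_pixel(p) for p in row] for row in pixels]
```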
  - If the model is an image classifier and you wish to label the inference as a specific class, enable the Postprocess toggle switch and specify the URL of a .txt file listing the classes. Wedge downloads the file from this URL; AWS and MinIO S3 buckets are also supported.
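This postprocessing step can be sketched as follows: parse the labels file (assumed here to contain one class name per line) and map the classifier's highest-scoring output index to its label. The exact file format Wedge expects may differ:

```python
def load_labels(text: str) -> list[str]:
    """Parse a class-labels .txt file, assuming one class name per line."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def top_class(scores: list[float], labels: list[str]) -> str:
    """Map the highest-scoring classifier output index to its class label."""
    if len(scores) != len(labels):
        raise ValueError("scores and labels must have the same length")
    return labels[max(range(len(scores)), key=scores.__getitem__)]

# Example:
# labels = load_labels("cat\ndog\nbird\n")
# top_class([0.1, 0.7, 0.2], labels)  # -> "dog"
```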
  - Finally, click the SEND button to push your changes to the target device. The device automatically receives the new AI workload parameters, downloads the required files, connects to the input feed, and starts running the desired AI model.
- To remove the model and all other Wedge configuration from a node, select the desired device and click the Remove Content button. The target device disconnects from the input feed and stops running the AI model.
Both the Bulk Update Content and Bulk Remove Content buttons can be used with more than one target device selected.
Examples
For an example of how to deploy a whole model with Wedge, see Deploying an AI Model to Great Bear.