Wedge

The Wedge App runs AI workloads at the edge. Users can configure these workloads remotely through the App Control UI: for example, they can specify the video input for the app, the model that should be run, and where inference results should be written, as well as set various pre- and postprocessing parameters.
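As a rough sketch, a workload configuration could look something like the following. All field names, values, and the idea of a JSON-style payload are illustrative assumptions, not the actual App Control schema:

```python
# Hypothetical workload configuration; field names and values are
# illustrative assumptions, not the real App Control API.
workload_config = {
    "video_input": "rtsp://192.168.1.20/stream1",  # camera stream to analyze
    "model": "yolov8n.onnx",                       # model to run
    "inference_sink": "mqtt://broker.local:1883/wedge/detections",  # output target
    "preprocessing": {"resize": [640, 640], "normalize": True},
    "postprocessing": {"confidence_threshold": 0.5, "nms_iou": 0.45},
}
```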

Edge devices typically have limited memory and CPU, which can create significant performance problems for AI applications: models may not fit on a single device, or their inference speed may fall below target levels. The Wedge App therefore supports distributed AI workloads at the edge, splitting AI models and deploying them across multiple devices. Based on node parameters and network topology, the Wedge AI SDK can profile a model and determine the optimal way to split it between devices to improve performance.
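To make the split decision concrete, below is a minimal sketch of a split-point search over a per-layer profile. The layer numbers, device capacities, and the two-node camera-to-edge topology are assumptions for illustration; the actual Wedge AI SDK presumably works from measured profiles and richer topologies:

```python
# Per-layer profile for a toy 6-layer model: (flops, output_bytes).
# All numbers are illustrative assumptions.
layers = [
    (2.0e9, 4_915_200),  # conv1
    (4.0e9, 2_457_600),  # conv2
    (4.0e9, 1_228_800),  # conv3
    (8.0e9,   614_400),  # conv4
    (1.0e9,    40_960),  # pool + flatten
    (0.1e9,       400),  # classifier head
]

camera_flops_per_s = 5e10   # weak on-camera accelerator
edge_flops_per_s = 2e11     # stronger edge node
link_bytes_per_s = 12.5e6   # ~100 Mbit/s camera-to-edge link
input_frame_bytes = 6_220_800  # one 1920x1080x3 frame

def latency_ms(split: int) -> float:
    """End-to-end latency if layers[:split] run on the camera and the
    rest on the edge node: camera compute + transfer + edge compute."""
    cam = sum(f for f, _ in layers[:split]) / camera_flops_per_s
    edge = sum(f for f, _ in layers[split:]) / edge_flops_per_s
    # Bytes crossing the network: the raw frame if nothing runs on the
    # camera, otherwise the activation after the last camera-side layer.
    xfer_bytes = input_frame_bytes if split == 0 else layers[split - 1][1]
    return (cam + xfer_bytes / link_bytes_per_s + edge) * 1e3

best = min(range(len(layers) + 1), key=latency_ms)
print(f"best split after layer {best}: {latency_ms(best):.1f} ms")
```

With these numbers the search picks a split after layer 3: running the early layers on the camera shrinks the tensor that has to cross the network, while the heavier remaining layers run on the faster edge node.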

For example, if you run an AI model directly on a camera, the model's size is limited by the camera's available memory, and inference speed (in FPS) drops as models become more complex. With Wedge, you can run part of the model on the camera and the remaining parts on other edge nodes. That way you can run bigger, more accurate models at higher frame rates, without having to change the camera hardware.
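As a sketch of the mechanics, here is one way to partition a sequential PyTorch model into a camera-side head and an edge-side tail. The toy model and the split index are illustrative, and in a real deployment the intermediate tensor would be serialized and sent over the network between devices:

```python
import torch
import torch.nn as nn

# Toy CNN standing in for a vision backbone; layer sizes are illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# Split after the first two conv blocks: the "head" runs on the camera,
# the "tail" on a more capable edge node.
split_at = 4
head = nn.Sequential(*list(model.children())[:split_at])
tail = nn.Sequential(*list(model.children())[split_at:])

x = torch.randn(1, 3, 224, 224)  # one camera frame
with torch.no_grad():
    full_out = model(x)
    # In a real deployment this intermediate tensor would be shipped
    # over the network from the camera to the edge node.
    intermediate = head(x)
    split_out = tail(intermediate)

# The pipelined result matches the monolithic model exactly.
assert torch.allclose(full_out, split_out)
```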