
Great Bear

Welcome to Great Bear - Apps at the Edge as a Service​

Bringing the Cloud-Native Development and Operations Experience to the Edge

We are developing a SaaS-based solution that simplifies the deployment and operation of next-generation applications at the edge across large numbers of sites, for developers and IT users alike, supporting them with intuitive SDKs and ready-to-use seed applications. The solution optimizes edge applications, strengthens data security, and accelerates the real-time experience.

Learn more about Great Bear

1 - Great Bear Documentation

Welcome to Great Bear - the simple and efficient SaaS-based application for managing operations and modern application capabilities at the edge.

Great Bear simplifies the deployment and operation of next-generation data-heavy and AI applications at the edge for developers and IT users, supporting them with intuitive SDKs and ready-to-use apps.

We bring the cloud experience to on-prem apps, for both users and developers - and offer this experience as a service. Great Bear extends the flexibility, ease of use (serverless, automation, observability), and rich ecosystem (app stores) of the cloud to applications that need to be partly or fully deployed on-premises for performance, survivability, legal, or policy reasons. Overall, our design goals include:

Large-scale deployment - with Great Bear you can roll out applications, real-time updates, and security policies across thousands of sites in a simple, automated way.

Optimized workload placement - Great Bear can optimally deploy your computation-heavy tasks in constrained environments that have limited compute power, RAM, and storage, in a way that can comply with location-related policies (for example, GDPR).

SLO and cost optimization of data processing and storage - Great Bear can balance your requirements with cost through SLO-driven deployment.

Heterogeneous device support - Great Bear supports a great variety of x86 and ARM-based devices suitable for small to large, enterprise-grade deployments (Intersight).

Loosely coupled, autonomous edges - the declarative, federated model of Great Bear combines resiliency with scalability. Your sites continue to operate when WAN connectivity is lost.


The following sections show you the most important aspects of using Great Bear.

1.1 - Overview

The adoption of cloud technologies and cloud-based services is rising constantly, but there are many situations where you need to run an application locally, for example, because of regulatory, security or performance reasons. Great Bear gives you the ability to develop and run modern applications at the edge by bringing the resilience and flexibility of the cloud to the edge.

Great Bear is a management system for edge devices that focuses on application development, deployment, and lifecycle management. It puts the user experience first, hiding the complexity of cloud-native technologies behind intuitive and accessible user interfaces and processes.

Even though edge devices often have limited resources, Great Bear is designed from the ground up with scalability in mind, allowing you to easily manage application deployments across several nodes per site, even across thousands of sites.

Great Bear provides different layers to manage the different aspects of the solution:

  • A Software-as-a-Service control plane that allows you to manage and control your sites, devices, and deployed applications.
  • An application store for easy access to the applications you can deploy on your sites.
  • A cloud-type SDK and API that allows you, as well as Independent Software Vendors (ISVs), to rapidly develop applications for the edge and make them accessible in the application store.

In Great Bear, a Node is a physical or virtual edge device that can run Kubernetes. You can group the nodes into logical units called Sites. For example, you can create sites based on the physical location of your facilities and assign every node to the site where it is located.

The Great Bear control plane is a Software-as-a-Service (SaaS) solution that runs in the cloud and communicates with your sites, allowing you to manage your Great Bear nodes, sites, and applications. The control plane provides access to the application marketplace, where you can select the applications you want to deploy to one or more of your sites. Great Bear takes care of deploying and running the applications on the selected sites and keeps them running: for example, it redeploys applications if a node fails, and optimally manages the resource requirements of the applications on the available nodes.

If you want to develop your own applications, Great Bear provides an SDK that gives you access to services your applications can use, and a way to upload your applications to the application store.

For more details, see Technical Overview.

1.1.1 - Technical Overview

Applications and Sites are central concepts in Great Bear: the whole system is built around the notion of an edge application. Great Bear supports the full life cycle of an application, from its creation by an external ISV through its deployment on edge sites to the runtime monitoring of the deployed applications.

Great Bear is built on Kubernetes, and it brings cloud-native best practices to your edge devices, while hiding the complexities behind intuitive web UIs. However, Great Bear is not a generalization of Kubernetes, nor a new Kubernetes management system.

In a classical Kubernetes cluster, there is one leader (or a set of leaders) and one or several worker nodes (followers). Kubernetes pods can be scheduled on worker nodes or on the leader(s) under the control of the Kubernetes scheduler. Kubernetes defines desired states and continuously works to keep the global system in those states; it can therefore be seen as an intent-driven system.

While Kubernetes takes care of Pods (the atomic unit of Kubernetes deployments), Great Bear takes care of applications deployed on sites, totally abstracting and hiding all Kubernetes complexity.

The Great Bear control plane exposes all of the Great Bear functions and services, which can be accessed through a dedicated UI dashboard or through a well-documented REST API.

Great Bear architecture overview

Great Bear can be globally divided into two main parts:

  • The Great Bear control plane sitting in a public cloud and exposing a public northbound API as well as a public southbound API.
  • The Edge sites (potentially millions of them) which can be widely distributed.

Since edge sites might lose their connection to the Great Bear Control Plane, we cannot assume that edges can always be controlled directly from the Control Plane. To address this, Great Bear is based on a fully intent-based, declarative model (promise theory). It works as follows:

  • The Control Plane maintains a digital image of each edge. Any action triggered through the northbound API and affecting edges first updates the corresponding digital image.
  • The edges regularly poll the Control Plane (southbound API) to check their digital image. If any change happened since the last poll, the Control Plane delivers this change to the edge. Then the edge locally applies this change.

Great Bear can therefore drive edges without a permanent connection to them: real edges eventually converge toward their equivalent digital image. In other words, the Great Bear northbound API is not used to manipulate the real edge directly; instead, it describes the desired state of the edge, which is eventually replicated on the real edge sites.
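To make the model concrete, the desired state held in a digital image could conceptually resemble the following sketch. The format and all field names here are purely illustrative and do not reflect the actual Great Bear API:

# Hypothetical "digital image" of one edge site, held by the Control Plane.
# All field names below are invented for illustration only.
site: store-42
desiredState:
  applications:
    - name: edge-mqtt     # application to run on this site
      version: 1.4.2
  securityPolicies:
    - baseline-v3
# The edge polls the southbound API, compares a record like this with its
# local state, and applies any difference until the real site converges.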

The Great Bear control plane implements all the functions required to deploy applications on Great Bear edge sites, as well as the functions required to control and operate those edge sites.

1.2 - Quick Start Guide

Getting started with Great Bear is easy! You can get going in two steps:

  1. Firstly, prepare your edge compute node. Learn how to Prepare a node.
  2. Then, manage it from the cloud. Learn how to Start using your node.

1.2.1 - Prepare a node

First of all, you’ll need to prepare a node. Nodes are the building blocks of the Great Bear edge. You can read more about nodes in Technical Overview.

The following kinds of nodes are supported:

1.2.1.1 - Getting started with Auvidea NVIDIA Jetson TX2 NX

JNX30M carrier board

Prerequisites

To get started with the Auvidea NVIDIA Jetson TX2 NX you will need:

  • A 12V 3A DC power supply or PoE switch.
  • Access to the Internet via wired Ethernet connection.
  • Access to the Internet needs to be via IPv4.

Steps

  1. Having received the Auvidea NVIDIA Jetson TX2 NX, make a note of the serial number - this is the HARDWARE IDENTIFIER of the node and it will be used during the registration process.

  2. Next, connect the TX2 NX to power and Ethernet. Make sure there is access to the Internet via IPv4.

  3. That’s it! Next you can Start using your node.

Further information

The Auvidea NVIDIA Jetson TX2 NX is shipped with the latest available Great Bear OS and software stack installed. The Great Bear support team will announce any required updates to the node image as part of our ongoing maintenance life cycle. Node image updates can be performed remotely over the air by our support team, coordinated through a schedule aligned with node operational considerations and connectivity to the Internet. On occasion, it may be necessary to update the device locally through the NVIDIA USB protocol, flashing the device with an image supplied to you by the Great Bear support team.
For details, see Flashing Auvidea NVIDIA Jetson TX2 NX.

1.2.1.1.1 - Flashing Auvidea NVIDIA Jetson TX2 NX

Follow these instructions when the Great Bear support team has requested that you update your Auvidea NVIDIA Jetson TX2 NX image locally through the NVIDIA USB flashing protocol.

Prerequisites

  1. Local physical access to the Auvidea NVIDIA Jetson TX2 NX
  2. A USB A-Male to Micro-B cable capable of carrying data (not just power, as with many consumer electronics cables).
  3. An x86 laptop or PC with a USB port, running a compatible version of the Ubuntu Linux distribution. At the time of writing, these steps have been confirmed using Ubuntu 22.04.1 LTS (Jammy Jellyfish), although other Ubuntu versions may be compatible.

Hosting Ubuntu

  • The best option for stability and performance is to run Ubuntu directly as the bare-metal OS on the laptop/PC.

  • Using a guest VM for Ubuntu is not recommended. There are documented issues with the various VM configurations required to successfully support USB pass-through from the host to the guest VM, stemming from the repeated disconnects and reconnects that occur during the flashing procedure.

    If a bare-metal Ubuntu OS is not an option, for reference, the Great Bear support team has successfully flashed the Auvidea device via a VM using the following approach:

    • Host: Intel x86 Mac running Monterey 12.5.1 (Note: this same approach does not work on host Windows OS installations)
    • Guest VM: VirtualBox, Ubuntu (64-bit) VM built from the Ubuntu 22.04.1 ISO server image.
    • Guest VM Configuration:
      • VM Ports - With the Auvidea device connected to the host OS USB port, configure the VirtualBox VM settings to enable the USB Controller and select NVIDIA Corp. APX from the available list.
      • VM OpenSSH server - Ensure the VM has OpenSSH Server installed and running to support an ssh connection from the host to the VM.
    • From the host, ssh into the VM.

Flashing Steps

  1. Open a command terminal prompt on the Ubuntu OS.
  2. Using the credentials and URL provided by the Great Bear support team, download the new device image:
curl -LOJk --user <username> https://files.greatbear.io/dist/gbeos/tegra/latest/<image-file-name>.tar.gz
Enter host password for user <username>:
  3. Extract the image archive:
tar -xvf <image-file-name>.tar.gz
  4. Change directory to the path of the extracted archive; this directory contains a doflash.sh file.
  5. Power down the Auvidea device.
  6. Use the USB cable to connect the Auvidea device to the x86 computer running Ubuntu.
  7. Power up the Auvidea device with the cable connected; the device automatically enters recovery mode.
  8. From the Ubuntu terminal, run the lsusb command and confirm that the resulting output lists a USB device called NVidia Corp:
lsusb
Bus 002 Device 001: ID ... Linux Foundation 3.0 root hub
Bus 001 Device 003: ID ... NVidia Corp.
  9. Trigger the flashing of the Auvidea device by executing the command:
sudo ./doflash.sh
  10. After a few minutes of flashing the eMMC, the Auvidea device will reboot, and you should see the following success message:
[ ] ...
[ ] Flashing completed

[ ] Coldbooting the device
[ ] tegradevflash_v2 --reboot coldboot
[ ] Bootloader version 01.00.0000
  11. The next steps depend on the type of image the Great Bear support team has provided; this is communicated by the support team when supplying the image URL.
  • Device On-boarding image: Contact the support team to confirm the image has been flashed successfully and to trigger the next on-boarding steps.
  • Standard Great Bear Software update image: The device will reboot and resume from the previous operational state.

If you experience any problems with these steps, then please contact the Great Bear Support Team.

1.2.1.1.2 - Getting familiar with Auvidea NVIDIA Jetson TX2 NX

Hardware Overview

Jetson TX2 NX module

Jetson TX2 NX Module

Specifications:

  • AI performance: 1.33 TFLOPS
  • GPU: NVIDIA Pascal™ architecture GPU with 256 CUDA cores
  • CPU: Dual-core NVIDIA Denver 2 64-bit CPU and quad-core ARM Cortex-A57 complex
  • RAM: 4GB 128-bit LPDDR4, 1600 MHz - 51.2 GB/s
  • Storage: 16GB eMMC 5.1 flash storage

For more details, see the Jetson TX2 NX Module product page.

Auvidea board

The JNX30M carrier board looks like this:

JNX30M carrier board

For more details, see the Auvidea product page.

GBEOS

Content

  • Linux For Tegra: l4t-32.7.1
  • Linux kernel: 4.9.253-l4t-r32.7
  • NVIDIA JetPack: 4.6.1
  • NVIDIA Cuda: 10.2.300
  • k3s: v1.22.6

Using or building AI/ML image containers for the Jetson TX2 NX

Since the GBEOS distribution aligns with Linux for Tegra r32.7.1 and JetPack 4.6.1, the container images provided by NVIDIA can be used at runtime, or as a starting point for building new images. You can find the image catalog at the NVIDIA NGC website.

Compatible images contain the r32.7.1 string. For example, a compatible TensorFlow 2.7.0 image is l4t-tensorflow:r32.7.1-tf2.7-py3.
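For example, to pull that image onto a machine for local experimentation (a sketch, assuming a Docker-compatible runtime is available in your environment):

# Pull the L4T-compatible TensorFlow image from the NVIDIA NGC registry
docker pull nvcr.io/nvidia/l4t-tensorflow:r32.7.1-tf2.7-py3

The same image name can also serve as the base image (FROM line) when building your own containers.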

Post-installation steps

root password

The root user has no password. Set a password to protect administrator access to the box. Complete the following steps.

  1. Open a terminal on your workstation and connect to the box using SSH:

    ssh root@<box_ip_address>
    
  2. After the connection is established, run the passwd command, and enter a new password.

    Expected output:

    New password:******
    Retype new password:*****
    passwd: password updated successfully
    

Power mode

The Jetson TX2 NX module supports several power modes. The default power mode provides the maximum performance profile.

The nvpmodel tool allows you to set the preferred power mode.

The default power mode is 0. Run the nvpmodel -q command to display the current power mode:

NVPM WARN: fan mode is not set!
NV Power Mode: MAXN
0

This mode provides the best performance level. All CPU cores are enabled: the four Cortex-A57 cores as well as the two Denver cores, as shown in the output of the lscpu command:

Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 6
  On-line CPU(s) list:  0-5
Vendor ID:              ARM
  Model name:           Cortex-A57
    Model:              3
    Thread(s) per core: 1
    Core(s) per socket: 1
    Socket(s):          1
    Stepping:           r1p3
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
  Model name:           Denver 2
    Model:              0
    Thread(s) per core: 1
    Core(s) per socket: 2
    Socket(s):          1
    Stepping:           0x0
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
  Model name:           Cortex-A57
    Model:              3
    Thread(s) per core: 1
    Core(s) per socket: 3
    Socket(s):          1
    Stepping:           r1p3
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
Caches (sum of all):
  L1d:                  128 KiB (6 instances)
  L1i:                  192 KiB (6 instances)
  L2:                   2 MiB (2 instances)

Note: Power mode 0 sets the cores' maximum frequencies to their nominal maximum, so all cores can operate at their highest frequencies. (You can check the maximum frequency of the cores by running cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq.) However, in the absence of a cooling fan, this power level may lead to overheating under heavy loads.

To set a different power mode, run the nvpmodel -m <number> command. For example, to set power mode 2, run nvpmodel -m 2.
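For example (a minimal sketch; nvpmodel may require root privileges):

# Switch to power mode 2
sudo nvpmodel -m 2

# Verify the active power mode
sudo nvpmodel -q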

Power mode 2 enables all the cores, balancing power consumption by capping the clock frequencies at lower values, as you can see in the lscpu output:

Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 6
  On-line CPU(s) list:  0-5
Vendor ID:              ARM
  Model name:           Cortex-A57
    Model:              3
    Thread(s) per core: 1
    Core(s) per socket: 1
    Socket(s):          1
    Stepping:           r1p3
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
  Model name:           Denver 2
    Model:              0
    Thread(s) per core: 1
    Core(s) per socket: 2
    Socket(s):          1
    Stepping:           0x0
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
  Model name:           Cortex-A57
    Model:              3
    Thread(s) per core: 1
    Core(s) per socket: 3
    Socket(s):          1
    Stepping:           r1p3
    CPU max MHz:        2035.2000
    CPU min MHz:        345.6000
    BogoMIPS:           62.50
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32
Caches (sum of all):
  L1d:                  128 KiB (6 instances)
  L1i:                  192 KiB (6 instances)
  L2:                   2 MiB (2 instances)

To help investigate the different configurations, you can download and use the following tool:

curl -LOJ https://raw.githubusercontent.com/piaoling199/TX2-notes/master/sources/jetson_clocks.sh && chmod +x jetson_clocks.sh

To visualize the current configuration for the clock frequencies, run ./jetson_clocks.sh --show

Expected output:

SOC family:tegra186  Machine:lanai-3636
Online CPUs: 0-5
CPU Cluster Switching: Disabled
cpu0: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=806400
cpu1: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=345600
cpu2: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=345600
cpu3: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=1881600
cpu4: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=1420800
cpu5: Gonvernor=schedutil MinFreq=345600 MaxFreq=2035200 CurrentFreq=1728000
GPU MinFreq=114750000 MaxFreq=1300500000 CurrentFreq=114750000
EMC MinFreq=204000000 MaxFreq=1600000000 CurrentFreq=1600000000 FreqOverride=0
Can't access Fan!

This reveals that the min, max, and current frequencies differ, because the frequencies are adjusted on demand. While this is reasonable for power consumption and thermal dissipation optimization (the board operates in a fanless box, as indicated by the last output line), it may in some cases reduce the performance determinism of certain workloads.

Executing the tool without any parameters configures the clocks for best performance. Run the following commands:

./jetson_clocks.sh
./jetson_clocks.sh --show

The expected output shows that the clock frequencies have all been set to their nominal maximum values:

SOC family:tegra186  Machine:lanai-3636
Online CPUs: 0-5
CPU Cluster Switching: Disabled
cpu0: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu1: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu2: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu3: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu4: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
cpu5: Gonvernor=schedutil MinFreq=2035200 MaxFreq=2035200 CurrentFreq=2035200
GPU MinFreq=1300500000 MaxFreq=1300500000 CurrentFreq=1300500000
EMC MinFreq=204000000 MaxFreq=1600000000 CurrentFreq=1600000000 FreqOverride=1
Can't access Fan!

Kubernetes scheduling optimizations

TX2 NX Specifics

The TX2 NX comes with two core clusters:

  • 4 x ARM CORTEX-A57 cores
  • 2 x NVIDIA Denver cores

How these cores are used may be decided per workload type. One good practice is to use the Cortex cores for general-purpose workloads and reserve the Denver cores for workloads requiring exclusive access to the CPU. To this end, the kernel command line excludes cores 1 and 2 (the Denver cores) from the scheduler with the kernel parameter isolcpus=1-2. These cores are used only by processes that explicitly request them.
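You can verify this isolation on a running node, for example (a quick check; the grep pattern is just one way to do it):

# Show the isolcpus setting on the kernel command line
grep -o 'isolcpus=[^ ]*' /proc/cmdline

Expected output:

isolcpus=1-2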

This can be correlated to the Kubernetes QoS classes:

  • BestEffort
  • Guaranteed

Kubernetes (K3S) cpu manager configuration

The CPU manager policy is preconfigured to static and assigns cores 0, 3, 4, and 5 to the BestEffort QoS class. Cores 1 and 2 (the Denver cores) are reserved for running pods with the Guaranteed QoS class, which therefore get exclusive access to these cores.

Using the Guaranteed QoS class

Initial state: cat /var/lib/kubelet/cpu_manager_state

The output shows that all cores are available to the CPU manager.

{"policyName":"static","defaultCpuSet":"0-5","checksum":1946793818}

Create a pod with the BestEffort QoS class:

cat > l4t-be.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: l4t-be
spec:
  containers:
    - name: l4t
      image: nvcr.io/nvidia/l4t-base:r32.7.1
      command: ["/bin/bash"]
EOF

Apply the pod:

kubectl apply -f l4t-be.yaml

The l4t-be pod is now running in the BestEffort QoS class:

kubectl get po l4t-be -oyaml | grep qosClass

Expected output:

  qosClass: BestEffort

All CPU cores remain available to the CPU manager:

cat /var/lib/kubelet/cpu_manager_state

Expected output:

{"policyName":"static","defaultCpuSet":"0-5","checksum":1946793818}

Create a pod with a Guaranteed QoS class:

cat > l4t-st.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: l4t-st
spec:
  containers:
    - name: l4t
      image: nvcr.io/nvidia/l4t-base:r32.7.1
      command: ["/bin/bash"]
      resources:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 1
          memory: 1Gi
EOF

Apply the pod:

kubectl apply -f l4t-st.yaml

The l4t-st pod is now running in the Guaranteed QoS class:

kubectl get po l4t-st -oyaml | grep qosClass
  qosClass: Guaranteed

The CPU manager state has been updated:

cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0,2-5","entries":{"2e317c46-9ee8-46c0-a4d6-ef550e54acb2":{"l4t":"1"}},"checksum":710351396}

The available cpuset is now "defaultCpuSet":"0,2-5": core 1 has been removed from the "defaultCpuSet":"0-5" list you saw before deploying the l4t-st pod. Note that "entries":{"2e317c46-9ee8-46c0-a4d6-ef550e54acb2":{"l4t":"1"}} shows that core number 1 has been allocated to the l4t container of the l4t-st pod.
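When the l4t-st pod is deleted, the exclusively allocated core is expected to return to the shared pool. A quick way to verify this (a sketch; the checksum value will differ):

# Delete the Guaranteed pod
kubectl delete pod l4t-st

# Inspect the CPU manager state again
cat /var/lib/kubelet/cpu_manager_state

After the deletion, defaultCpuSet should revert to "0-5" and the entries field should no longer list the pod.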

References

1.2.1.2 - Getting started with Raspberry Pi

Prerequisites

To create a Great Bear node from a Raspberry Pi, you will need:

  • A Raspberry Pi 4B with 8GB of memory.
  • The Raspberry Pi Imager application.
  • Access to the Internet either using Wi-Fi or a wired Ethernet connection.
  • Access to the Internet needs to be via IPv4.
  • A microSD card. We recommend a capacity of at least 16GB.
  • It is highly recommended to equip your Raspberry Pi with a fan and run the fan at full speed.

Steps

  1. First, you need to Prepare an SD Card with a suitable image.

  2. Next, assemble and switch on your Raspberry Pi. If you are not familiar with the Raspberry Pi, see our guide on Assembling a Raspberry Pi.

  3. Last of all, you need to Configure and bootstrap the Great Bear node.

1.2.1.2.1 - Prepare an SD Card

Prerequisites

To create a Great Bear node from a Raspberry Pi, you will need:

  • The Raspberry Pi Imager application.
  • Access to the Internet either using Wi-Fi or a wired Ethernet connection.
  • Access to the Internet needs to be via IPv4.
  • A microSD card. We recommend a capacity of at least 16GB.

Follow these steps to prepare an SD card for use with Great Bear.

Steps

  1. Obtain an Ubuntu 22.04 Server image built specifically for Raspberry Pi from Ubuntu
    • Ensure you select the server version, currently Ubuntu Server 22.04.1 LTS.
  2. Write the image to an SD card with the Raspberry Pi Imager.
  3. Insert the SD card into the Raspberry Pi and switch it on. If you are not familiar with the Raspberry Pi, see our guide on Assembling a Raspberry Pi.
  4. Next you need to Configure and bootstrap the Great Bear node.

1.2.1.2.2 - Assembling a Raspberry Pi

This section helps you with the physical assembly of a Raspberry Pi.

Prerequisites

  • SD card written with an image as described in Prepare an SD Card.
  • A HDMI monitor or TV as a display for the Pi, and a micro HDMI - HDMI cable to connect them.
  • A standard Ethernet cable for internet access. Alternatively, you can use the built-in wireless LAN of the Raspberry Pi.
  • A USB keyboard.
  • A 5V USB-C power supply to power your Raspberry Pi.
  • It is highly recommended to equip your Raspberry Pi with a fan and run the fan at full speed.

Steps

Connect the cables to your Raspberry Pi.

  1. Insert the SD card into the SD card slot on the Raspberry Pi. It only fits one way.
  2. Plug your USB keyboard into a USB slot on the Raspberry Pi.
  3. Connect the HDMI cable to the Raspberry Pi and your monitor or TV.
  4. Turn on the monitor or TV, and select the correct HDMI input.
  5. Connect the Raspberry to the Internet.
    • Recommended: Connect an Ethernet cable to the Ethernet port next to the USB ports. Make sure that the other end of the cable is connected to a router or a switch.
    • Otherwise, you will have to set up a wireless connection later.
  6. When you have plugged in all the required cables and the SD card, plug in the USB-C power supply. The Raspberry Pi automatically turns on and boots.

Raspberry connector overview

(Image source: https://en.wikipedia.org/wiki/File:Raspberry_Pi_4_Model_B_-_Side.jpg)

1.2.1.2.3 - Configure and bootstrap

Prerequisites

  • The node must be installed and able to access the Internet.
  • You need administrator-level command-line access to the node (for example, via SSH or a local console).
  • You must have an access token and device key (received from Cisco) that the node can use to access the Great Bear control plane.
  • The node must be able to access the Internet on TCP port 443.

  • If creating sites with multiple nodes, each hostname must be unique.
  • If creating sites with multiple nodes, the nodes must be able to reach each other via the default route.
    • You can check this with the following steps:
      • Run the command ip route get 1.1.1.1 | grep -oP 'src \K\S+' on each node and make a note of the IP address.
      • Use the ping command to test connectivity between each of the nodes using the addresses found, as in the sketch below.
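      For example, on a two-node site (the addresses involved are illustrative):

        # On each node, print the source address used for the default route
        ip route get 1.1.1.1 | grep -oP 'src \K\S+'

        # From one node, test connectivity to the address reported by the other
        ping -c 3 <other-node-ip-address>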

If you need help with setting up the Raspberry Pi, see Assembling a Raspberry Pi.

Steps

  1. If you haven’t already done it, write the Ubuntu Server image to an SD card with the Raspberry Pi Imager, then boot the SD card in the Raspberry Pi.

  2. Log in to the Raspberry Pi using ubuntu as both the username and the password. The system will ask you to create a new password.

    Note: The keyboard layout is ‘QWERTY’ by default.

  3. If the Raspberry Pi connects to the Internet using a wired Ethernet connection, retrieve its IP address (for example, run ip a show dev eth0) so that you can SSH from another device to your RPi (ssh ubuntu@<raspberry-ip-address>).

    • Otherwise, see this tutorial for instructions to connect to Wi-Fi.
  4. Change the hostname from ‘ubuntu’ to a unique name.

    Run sudo nano /etc/hostname, change the name and save your changes.

    Note: Changing the hostname takes effect only after rebooting the Raspberry Pi in the next step.

  5. Upgrade the operating system of the Raspberry Pi:

    1. Run the following commands, and make sure that you accept the upgrade of the kernel: sudo apt update; sudo apt upgrade
    2. Install the extra-raspi package: sudo apt-get install linux-modules-extra-raspi
    3. Reboot the system: sudo reboot
  6. You will need your Hardware Identifier to onboard the node on the Great Bear dashboard. By default, the Hardware Identifier of the node is the MAC address of the first network interface. To find it, run the command ip a.

    To use a different identifier, set the HARDWARE_ID environment variable, for example: export HARDWARE_ID="xx:xx:xx:xx:xx:xx"

    Note: The HARDWARE_ID must be:

    • Unique within Great Bear, as this ID also identifies the node on the dashboard.
    • Between 8 and 128 characters long.
    • Include only ASCII letters (A-Za-z), numbers (0-9), and the :-_ characters.

    Using the MAC address, the serial number of the CPU, or a similar Universally Unique Identifier (UUID) is a convenient way to make sure the ID has not already been used within Great Bear.
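    For example, to derive the identifier from the MAC address of a specific interface (a sketch; the interface name eth0 is an assumption, adjust it to your device):

    # Read the MAC address of eth0 and use it as the hardware identifier
    export HARDWARE_ID="$(cat /sys/class/net/eth0/address)"
    echo "$HARDWARE_ID"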

  7. Bootstrap the node for Great Bear. Run the following command:

    curl https://get.greatbear.io/pub/get.sh | \
    TOKEN=<access-token> DEVICE_KEY=<device-key> sh
    

    Note: You can bootstrap the node with an agent showing more verbose logs by adding the LOG_LEVEL=DEBUG option.
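    For example, the bootstrap command with debug logging enabled (token and key placeholders as above):

    curl https://get.greatbear.io/pub/get.sh | \
    TOKEN=<access-token> DEVICE_KEY=<device-key> LOG_LEVEL=DEBUG sh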

  8. That’s it! Next you can Start using your node.

Troubleshooting

In case you encounter problems, see our Troubleshooting guide.

Powered down nodes or disconnected nodes

If you power off the node, or it loses its Internet connection, the Great Bear dashboard shows the node as offline. When the node is powered on again, or the network connection is restored, it comes back online.

1.2.1.3 - Getting started with a virtual machine

To create a Great Bear node in a virtual machine, see the following guides.

1.2.1.3.1 - Install a node in VirtualBox

You can create virtual Great Bear nodes in VirtualBox. For testing purposes, you can even run nodes on your laptop. To prepare a virtual machine in VirtualBox to use as a Great Bear node, complete the following steps.

Prerequisites

  • Make sure that your system supports VirtualBox. At the time of writing, the validated systems are Apple Mac x86 and Windows x86.
  • You have downloaded and installed VirtualBox.
  • You have downloaded the ISO installation image of Ubuntu Server 22.04
  • If creating sites with multiple nodes, each hostname must be unique.
  • The node must be installed and able to access the Great Bear control plane on the network level.
  • The node must be able to access the Internet on TCP port 443.

  • You need administrator-level command-line access to the node (for example, via SSH or a local console).
  • You must have an access token and device key (received from Cisco) that the node can use to access the Great Bear control plane.
  • If creating sites with multiple nodes, the nodes must be able to reach each other via the default route.
    • You can check this with the following steps -
      • Run the command ip route get 1.1.1.1 | grep -oP 'src \K\S+' on each node and make note of the IP address
      • Use the ping command to test connectivity between each of the nodes using the addresses found

Create a new VM

  1. Open VirtualBox and create a new virtual machine. The virtual machine must have at least:
    • 2 GB of memory
    • 8 GB of virtual hard disk
  2. Configure network access for the virtual machine. The virtual machine must have Internet access to be able to access the Great Bear control plane. (NAT or bridged access is recommended.)
  3. Launch the virtual machine and install the operating system.
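If you prefer the command line, an equivalent VM can be sketched with VBoxManage (the VM name, disk size, and NAT networking below are illustrative choices, not requirements):

# Create and register a new VM
VBoxManage createvm --name gb-node --ostype Ubuntu_64 --register

# 2 GB of memory and NAT networking for Internet access
VBoxManage modifyvm gb-node --memory 2048 --nic1 nat

# An 8 GB virtual hard disk (size is given in MB)
VBoxManage createmedium disk --filename gb-node.vdi --size 8192

You still need to attach the disk and the Ubuntu ISO to a storage controller (VBoxManage storagectl and storageattach) before the first boot.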

Bootstrap the node

  1. Open a terminal on the node.

  2. (Optional) By default, the Hardware Identifier of the node is the MAC address of the first network interface. You can run ifconfig or ip a to find it. To use a different identifier, set the HARDWARE_ID environment variable, for example: export HARDWARE_ID="xx:xx:xx:xx:xx:xx"

    Note: The HARDWARE_ID must be:

    • Unique within Great Bear, as this ID also identifies the node on the dashboard.
    • Between 8 and 128 characters long.
    • Include only ASCII letters (A-Za-z), numbers (0-9), and the :-_ characters.

    Using the MAC address, the serial number of the CPU, or a similar Universally Unique Identifier (UUID) is a convenient way to make sure the ID has not already been used within Great Bear.

  3. Run the following command to bootstrap the node:

    curl https://get.greatbear.io/pub/get.sh | \
    TOKEN=<access-token> DEVICE_KEY=<device-key> sh
    

    Note: You can bootstrap the node with an agent showing more verbose logs by adding the LOG_LEVEL=DEBUG option.

  4. That’s it! Next you can Start using your node.

Troubleshooting

In case you encounter problems, see our Troubleshooting guide.

Stop or delete the VM

If you stop the VM, the Great Bear dashboard will show that the node is offline. You can start the node again to bring it back online later if needed.

If you delete the VM and create a new one, you will have to register the new node on the Great Bear dashboard.

1.2.1.3.2 - Install a node in Vagrant

You can create virtual Great Bear nodes in VirtualBox using Vagrant. To prepare a virtual machine to use as a Great Bear node using Vagrant, complete the following steps.

Prerequisites

  • Make sure that your system supports Vagrant and the associated virtualization provider (in this case, VirtualBox). At the time of writing, the validated systems are Apple Mac x86 and Windows x86.
  • You have downloaded and installed VirtualBox.
  • You have downloaded and installed Vagrant.
  • The node must be able to access the Internet on TCP port 443.

Create and bootstrap a new VM

  1. Create a new directory and create a file called Vagrantfile in the directory with the following contents:

    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    
    Vagrant.configure("2") do |config|
    
        config.vm.define "node-0" do |leader|
          leader.vm.box = "ubuntu/jammy64"
          leader.vm.hostname = "node-0"
          leader.vm.network "private_network", ip: "192.168.56.10"
          # Uncomment the next line for enabling VNC server with the EDD application on VM node-0
          #leader.vm.network "forwarded_port", guest: 5900, host: 5901
        end
      
        config.vm.define "node-1" do |worker1|
          worker1.vm.box = "ubuntu/jammy64"
          worker1.vm.hostname = "node-1"
          worker1.vm.network "private_network", ip: "192.168.56.11"
          # Uncomment the next line for enabling VNC server with the EDD application on VM node-1
          #worker1.vm.network "forwarded_port", guest: 5900, host: 5902
        end
      
        config.vm.define "node-2" do |worker2|
          worker2.vm.box = "ubuntu/jammy64"
          worker2.vm.hostname = "node-2"
          worker2.vm.network "private_network", ip: "192.168.56.12"
          # Uncomment the next line for enabling VNC server with the EDD application on VM node-2
          #worker2.vm.network "forwarded_port", guest: 5900, host: 5903
        end
      
        config.vm.provider "virtualbox" do |vb|
           vb.memory = "2048"
        end
      
        config.vm.provision "shell", inline: <<-SHELL
           apt-get update 1>/dev/null
        SHELL
      
      end
    

The Vagrantfile defines three VMs, named node-0, node-1 and node-2.

  2. Start a virtual machine by running vagrant up [machine name] - if you omit the machine name, Vagrant brings up all three VMs.

    Note: If you receive an NS_ERROR_FAILURE error when starting the VM, check the security permissions of VirtualBox.

Bootstrap the node

Steps

  1. Select the node you want to bootstrap. For example, for node-0 type vagrant ssh node-0

  2. Set the HARDWARE_ID environment variable. This value will be used to register the node in the dashboard. For example: export HARDWARE_ID="xx:xx:xx:xx:xx:xx"

    Note: The HARDWARE_ID must be:

    • Unique within Great Bear, as this ID also identifies the node on the dashboard.
    • Between 8 and 128 characters long.
    • Include only ASCII letters (A-Za-z), numbers (0-9), and the :-_ characters.

    Using the MAC address, the serial number of the CPU, or a similar Universally Unique Identifier (UUID) is a convenient way to make sure the ID has not already been used within Great Bear.

  3. Run the following command to bootstrap the node:

    curl https://get.greatbear.io/pub/get.sh | \
    TOKEN=<access-token> DEVICE_KEY=<device-key> sh
    

    Note: You can bootstrap the node with an agent showing more verbose logs by adding the LOG_LEVEL=DEBUG option.

  4. That’s it! Next you can Start using your node.

Troubleshooting

In case you encounter problems, see our Troubleshooting guide.

Stop or delete the VM

If you stop the VM, the Great Bear dashboard will show that the node is offline. You can start the node again to bring it back online later if needed.

If you delete the VM, the node will no longer come back online when restarted, and you will have to register it again on the Great Bear dashboard.

  • To stop the VM, run vagrant halt
  • To restart the VM, run vagrant up
  • To remove the VM entirely, run vagrant destroy

1.2.2 - Start using your node

To create a Great Bear site and start using your node complete the following steps.

Steps

  1. First, log in to the Great Bear dashboard, as described in Log in to dashboard.
  2. Next, you need to tell Great Bear about your newly created node - to do this, follow the steps described in Register new node.
  3. Next, you need to create a Site - to do this, follow the steps described in Create new site.
  4. Next, you need to add the Great Bear node to the new Site - to do this, follow the steps described in Add node to site.
  5. Now the Great Bear node is assigned to a Site, you’re ready to deploy an Application! To do this, follow the steps described in Deploy new application.

Next

If you run into problems with any of these steps, or simply want to know more about managing nodes, sites and applications in Great Bear, check out the full documentation in Administering Great Bear.

1.3 - Administering Great Bear

The Great Bear dashboard is a Software-as-a-Service (SaaS) solution that allows you to manage your Great Bear nodes, sites, and applications. The dashboard is managed and maintained by Cisco.

You can access the dashboard at https://dashboard.greatbear.io

Log in to the dashboard.

  • Sites are the collections of nodes grouped together for administrative purposes. For example, the nodes located at a particular facility can be grouped into a site.
  • Applications are the actual applications deployed on the nodes.
  • Nodes are the physical or virtual devices deployed at your facilities to perform edge computing tasks.

1.3.1 - Log in to dashboard

First time using the dashboard

To sign up to Great Bear for the first time you will need to have received an invitation link.

Invite link
  1. Click the link; you will be redirected to a sign-up form.

  2. Enter your full name and a strong password to register to Great Bear.

    Sign up
  3. Once done, you will be redirected to the Great Bear dashboard for your tenant.

    Dashboard view

Log in to the dashboard

You can access the dashboard at https://dashboard.greatbear.io.

  1. Click on Sign in. You will be redirected to a Log In form.

    Log in
  2. Log in with the credentials used during registration. Alternatively, you can log in with any of the supported social platforms.

  3. Once done, you will be redirected to the Great Bear dashboard for your tenant.

    Dashboard view

1.3.2 - Managing Sites

A site is a logical grouping of nodes. For example, you can create sites based on the physical location of your nodes, and assign every node that is located at one of your facilities to a single site. Currently, every node of a site must belong to the same cluster. Using sites is important in Great Bear, because you cannot deploy applications directly to nodes, only to sites.

Sites overview

To display the list of your sites, open the Great Bear dashboard, then select Sites.

Sites list

For every site, the dashboard shows the following information:

  • Status: Status of the site.
  • Site name: A descriptive name of the site. Select the site to display detailed information and to configure the site.
  • Location: Location of the site, for example, London, UK. To change the location of the site, see Configure site.
  • Description: Description of the site (if any). To change the description of the site, see Configure site. This column is hidden by default.
  • Tags: The tags (labels) assigned to the site.
  • Assigned Nodes: List of devices (nodes) assigned to the site, including the name and status of each device.
  • Deployed Apps: List of applications deployed to the site, including the name and status of the application. Select an application to display more information about the deployment, and to manage the deployment.

Select a site to display this information in a sidebar.

You can customize the data displayed in the table by showing or hiding columns. Click the settings icon in the header and select Show columns.

Configure site

To find specific sites, you can:

  • Use the column headers to sort the table
  • Use the Search field to display matching sites. You can search all data about the sites, including metadata, tags, and the names of deployed applications.

To display your sites on a map, select the map button.

Sites on the map

Hovering over a site shows the description and status of the site. Select the site to display detailed information and to configure the site.

Status of a site on the map

Site status

The status of the site is one of the following:

  • No bootstrapped nodes in site (NO_STATUS): The site has been created but does not have any bootstrapped nodes assigned to it. Register new nodes if needed, and assign them to the site.
  • Running (RUNNING): All nodes of the site are running. You can deploy applications on this site.
  • Nodes are initializing (IN_PROGRESS): An operation is currently being performed on one or more nodes. Wait until it is finished.
  • Serious problem with nodes (ERROR): All nodes of the site are in ERROR or FAILED state. None of the deployed applications are working.
  • Lost contact with nodes (DORMANT): Check the network accessibility of the site, and the network configuration of its nodes.

1.3.2.1 - Create new site

To create a new site, complete the following steps.

  1. Open the Great Bear dashboard, then select Sites.

  2. Select Create Site in the top-right section of the dashboard.

  3. Set the NAME and the DESCRIPTION of the site to suit your environment.

  4. Set the LOCATION of the site. Some suggestions are shown and based on the location you choose, the LATITUDE and LONGITUDE coordinates are added automatically. You can also provide a custom location name and/or set specific latitude and longitude coordinates.

  5. Enable Collect site metrics to collect performance and troubleshooting data about the site. For details on accessing the collected metrics, see Observing Great Bear.

  6. Select Apply to save your changes.

  7. After the site is created, you can add nodes to the site. You can deploy applications to the site only after you have added nodes to it.

1.3.2.2 - Configure site

To change the metadata of an existing site, complete the following steps.

  1. Open the Great Bear dashboard, then select Sites.

  2. Select the site you want to modify. Alternatively, you can select the site in the map view.

  3. Select Edit Config.

    Configure site
  4. Set the NAME and the DESCRIPTION of the site to suit your environment.

    Configure site
  5. If appropriate, add relevant TAGS to the site.

  6. Set the LOCATION of the site. Some suggestions are shown and based on the location you choose, the LATITUDE and LONGITUDE coordinates are added automatically. You can also provide a custom location name and/or set specific latitude and longitude coordinates.

  7. Enable Collect site metrics to collect performance and troubleshooting data about the site. For details on accessing the collected metrics, see Observing Great Bear.

  8. Select Apply to save your changes.

1.3.2.3 - Delete site

To delete a site from Great Bear, complete the following steps.

CAUTION:

Deleting a site cannot be undone. Every application deployed to the site will be deleted.
  1. Open the Great Bear dashboard, then select Sites.
  2. Select the site you want to delete. Alternatively, you can select the site in the map view.
  3. Select Delete Site.
  4. To delete the site, select Confirm.
  5. After the site is deleted, the nodes that belonged to the site become unassigned, and you can add them to another site.

1.3.3 - Managing Nodes

A node (sometimes called AppNode) is a physical or virtual machine that can run applications. Technically, the node is a member of a Kubernetes cluster.

Nodes overview

To display the nodes registered into your Great Bear deployment, open the Great Bear dashboard, then select Nodes.

Nodes list

For every node, the dashboard shows the following information:

  • Status: Status of the node, for example, Ready or Node not online.
  • Node name: A descriptive name of the node. To change the name of the node, see Configure node.
  • Hardware ID: Hardware identifier. This is a unique identifier of the node.
  • Description: Description of the node (if any). To change the description of the node, see Configure node. This column is hidden by default.
  • Assigned to site: The site the node belongs to. Click on the site name to show information about the selected site in a sidebar. For free nodes that are not assigned to any site, Not assigned is shown. To assign the node to a site, select Assign to site and see Add node to site.

Select a node to display this information in a sidebar.

To register a new node in Great Bear, see Register new node.

You can customize the data displayed in the table by showing or hiding columns. Click the settings icon in the header and select Show columns.

Configure site

To find a specific node, you can:

  • Use the column headers to sort the table
  • Use the Search field to display matching nodes. You can search all data about the nodes, including metadata and tags.

Node lifecycle

In general, the lifecycle of a node consists of the following steps.

  1. Physically install the node.
  2. Register the node in Great Bear.
  3. Assign the node to a site. Now the node is ready to run workloads.
  4. Deploy applications to the site of the node. Great Bear will assign workload to the node as needed.
  5. (Optional) If you do not use the node anymore (for example, because of hardware failure), you can delete the node from Great Bear.

Node status

The status of the node is one of the following:

  • Node not bootstrapped (NO_STATUS): The node has been registered in the dashboard but not yet bootstrapped. Bootstrap the node.
  • Node is initializing (IN_PROGRESS): The node is in the process of being registered and bootstrapped into Great Bear. Wait until it is finished.
  • Node is not assigned to a site (CLAIMED): The node is registered in Great Bear but not assigned to any site yet. Assign the node to a site.
  • Successfully started (RUNNING): The node is operational and ready to receive application workloads.
  • Node is in an error state (ERROR): An error occurred and the node is not functioning properly. See Node Errors. Reboot the node. If the error persists, check the logs of the node for details, or reinstall the node.
  • Node offline (OFFLINE): The node is powered off or is not accessible for some other reason, for example, because of a network error. Check the network accessibility and network configuration of the node.

1.3.3.1 - Register new node

To register a new node to Great Bear, complete the following steps.

Prerequisites

  • You will need the HARDWARE IDENTIFIER of the new node to register it.
  • The physical installation of the node is not covered in this procedure.

Steps

  1. Open the Great Bear dashboard, then select Nodes.

  2. Select Register Node in the top-right section of the dashboard.

  3. Enter the HARDWARE IDENTIFIER of the new node. Check that the identifier is correct: you can't change it later unless you delete and re-register the node (see Delete node).

  4. Enter a name and a description for the node. You can change these values later if needed (for details, see Configure node).

  5. Select Apply.

    The new node is added to the list of nodes. Its site is automatically set to not assigned. Now you can Add node to site.

1.3.3.2 - Add node to site

If you have nodes registered in Great Bear that are not assigned to any site, you must add them to a site before Great Bear starts using them. To add a node to a site, complete the following steps.

Steps

  1. Open the Great Bear dashboard, then select Sites.

  2. Select the site you want to add the node to in the list of sites, then select Manage Nodes. The nodes assigned to the site and the free nodes (nodes that do not belong to any site) are displayed.

  3. Select the free nodes you want to add to the site. (If you have nodes that are not registered in Great Bear yet, select New Node to register a node).

    Note: The nodes of a site must be able to reach each other via the default route.

  4. Select Apply.

  5. After a short time, the new nodes appear in the Assigned Nodes list of the site.

1.3.3.3 - Configure node

To change the metadata of a node, complete the following steps.

  1. Open the Great Bear dashboard, then select Nodes.

  2. Select the node you want to modify.

    Configure site
  3. Select Edit Config.

  4. Set the NAME and the DESCRIPTION of the node to suit your environment. For example, for a camera node, you can include the physical location of the camera.

  5. Select Apply to save your changes.

1.3.3.4 - Reassign node

To remove a node from a site (so you can assign it to another site), complete the following steps. If you want to delete a node from Great Bear, see Delete node.

  1. Open the Great Bear dashboard, then select Nodes.

  2. Select the node you want to modify.

    Configure site
  3. Select Manage Assignments.

  4. Find the site the node is currently assigned to, and clear the checkbox.

    • If you want to assign the node to another site, select the new site, then select Apply.
    • If you do not want to assign the node to another site, select Apply.

1.3.3.5 - Delete node

To delete a node from Great Bear, complete the following steps. If you want to remove a node from a site (so you can assign it to another site), see Reassign node.

CAUTION:

Deleting a node has consequences and can impact the performance of your applications that are running on the site.
  1. Open the Great Bear dashboard, then select Nodes.

  2. Select the node you want to modify.

    Configure site
  3. Select Delete Node.

  4. To remove the node from Great Bear, select Confirm.

1.3.4 - Managing Application Deployments

The Great Bear dashboard allows you to manage your applications:

The applications range from simple applications that use only the resources of a single node to complex, resource-heavy applications that run on multiple nodes of the site. Great Bear takes care of the resource management of the site, as long as there are enough resources to run the deployed applications.

App status

The status of a deployed app is one of the following:

  • No status received (NO_STATUS): The application deployment intent has been registered, but no status has been received yet. Wait for the application to be fully deployed and operational.
  • Running (RUNNING): The application deployment is running properly.
  • Application deployment is initializing (IN_PROGRESS): An operation is currently being performed on the application deployment. Wait until it is finished.
  • Application deployment reports problems (ERROR): Check its configuration and redeploy the app.
  • Application deployment failed (FAILED): The application wasn't deployed. Check its configuration and redeploy the app.
  • Lost contact with application deployment (DORMANT): Check the network configuration of the app.

1.3.4.1 - Deploy new application

To deploy a new application to one or more of your sites, complete the following steps.

  1. Open the Great Bear dashboard, then select Application Store.

  2. Find the application you want to deploy in the Application Store (for example, Edge MQTT), then select Details.

  3. Check the description and the Site requirements of the application: you can deploy the application only on sites that meet its requirements.

  4. If there are multiple versions of the application available, select the version you want to deploy. It is usually recommended to deploy the latest version.

  5. Select Deploy.

  6. Select the sites where you want to deploy the application, then select Params. If you have multiple sites, you can choose to deploy the application to All sites, or to specific countries.

  7. Set the application-specific parameters of the deployment. For details, see the documentation of the application. For the documentation of the reference applications, see Usecases.

  8. Select Deploy.

    Great Bear deploys the application on the site, and adds the application to the list of Deployed Apps on the site.

Note: If you want to deploy the latest version of an application on a single site, you can select the site on the Great Bear UI, then select Deploy App from the sidebar.

1.3.4.2 - Redeploy application

Redeploying an application stops every instance of the application that is running on a site, deletes the application from the site, then deploys the application using its current configuration. To redeploy an application on a site, complete the following steps.

  1. Open the Great Bear dashboard, then select Sites.

  2. Find the site where you want to redeploy the application, then select the name of the application.

    App details page

  3. Select Redeploy. If the version of the application that was originally deployed to the site is not available anymore (for example, because it has been revoked), a warning message is displayed.

    CAUTION:

    Hazard of data loss! Any data that the application stores only locally in the memory of the node is irrecoverably lost.

  4. If the application has any parameters that can be configured for the deployment, set the parameters as needed. For details, see the documentation of the application you are deploying. For the documentation of the reference applications, see Usecases.

  5. Select Apply.

1.3.4.3 - Delete application

To delete an application from a site, complete the following steps.

CAUTION:

Hazard of data loss! Any data that the application stores only locally in the memory of the node is irrecoverably lost.
  1. Open the Great Bear dashboard, then select Sites.

  2. Find the site where you want to delete the application from, then select the name of the application.

    App details page

  3. Select Delete.

1.4 - Observing Great Bear

To help you with the maintenance and troubleshooting of your sites, edge devices, and applications, Great Bear provides a centralized observability solution and collects metrics from your edge nodes. You can access and query the collected data from your Grafana instance, or your Grafana Cloud account.

Enable metric collection

Collecting metrics must be enabled for your site. On the Great Bear dashboard, select your site, then select Edit Config and verify that the Collect site metrics option is enabled. For details, see Configure site.

Collect site metrics option enabled

Enable application metrics

If an application supports collecting metrics, you can enable collecting application metrics during the deployment or redeployment of the application by selecting the Collect application metrics or Enable monitoring option. For details, see the documentation of the specific application.

Collected metrics

The collected metrics include CPU and memory utilization, and network usage.

Applications can have their own custom dashboards that show application-specific metrics. For example, the Edge MQTT broker has a dashboard that shows the number of published messages, the amount of transferred data, and the number of clients connected.

A sample dashboard for application metrics

Connect Great Bear to Grafana

To access the site and application metrics of Great Bear, add Great Bear as a data source to your Grafana instance, or your Grafana Cloud account.

Prerequisites

  • A running Grafana instance or a Grafana Cloud account. When using your own Grafana instance, make sure that it can access the https://api.greatbear.io/api/v1/metrics URL on port 443.

  • You must have the organization admin role to add data sources.

  • Your Great Bear API key

    To access the API key, log in to the Great Bear Dashboard, select the Profile icon in the top right, then click Copy API Key.

    The Profile menu
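To quickly verify that your Grafana host can reach the endpoint, you can issue a request from that host (a sketch; any HTTP response indicates that the endpoint is reachable over port 443):

curl -sI https://api.greatbear.io/api/v1/metrics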

Steps

To add Great Bear as a data source to Grafana, complete the following steps.

  1. Open Grafana.
  2. Select the cog icon, then select Data Sources > Add data source.
  3. Select Prometheus.
  4. Set URL to https://api.greatbear.io/api/v1/metrics.
  5. In the Custom HTTP headers section, add the x-id-api-key header.
  6. Set the value of the x-id-api-key header to your Great Bear API key.
  7. Select Save and Test.

For details on adding data sources to Grafana, see the official Grafana documentation.
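Alternatively, if you manage your Grafana configuration as code, the same data source can be defined in a provisioning file (a minimal sketch; the data source name, the file path, and the API key placeholder are assumptions):

# for example, provisioning/datasources/greatbear.yaml
apiVersion: 1
datasources:
  - name: Great Bear Metrics
    type: prometheus
    access: proxy
    url: https://api.greatbear.io/api/v1/metrics
    jsonData:
      httpHeaderName1: x-id-api-key
    secureJsonData:
      httpHeaderValue1: <your-great-bear-api-key>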

Import the Great Bear dashboards

To import the Great Bear dashboards into Grafana, complete the following steps.

  1. Download the Great Bear dashboards from https://files.greatbear.io/dist/dashboards/latest/dashboards.tgz and extract the file.

    When prompted for credentials:

    username: prod
    password: <access-token as stated in your tenant on-boarding pack>
    

    If you’re unsure about the access token, contact the Great Bear administrator of your organization, or the Great Bear Support Team.
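    For example, you can download and extract the archive from the command line (a sketch; substitute the access token from your tenant on-boarding pack):

    curl -u prod:<access-token> -O https://files.greatbear.io/dist/dashboards/latest/dashboards.tgz
    tar -xzf dashboards.tgz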

  2. Open Grafana.

  3. Select Dashboards > Import Dashboard.

  4. Select Upload JSON and upload a dashboard file. Repeat this step to upload the other dashboard files.

Great Bear dashboards

If you have imported the Great Bear dashboards into Grafana, you can find them in the Great Bear dashboards folder, grouped into the following subfolders:

  • Applications: Dashboards for the deployed applications that have metrics collection enabled. For details, see Enable application metrics.
  • Kubernetes: Dashboards of the Kubernetes resources of your sites, for example, the clusters, nodes, and pods.
  • General: Dashboards specific to Great Bear.
  1. Select a dashboard to display the related metrics.

    Metrics on a dashboard

  2. Select the site you are interested in from the siteId field at the top of the dashboard.

    Note: Only sites that have submitted metrics during the period set at the top right of the dashboard are listed. Adjust the period if needed.

    Select a site

    To learn more about how Grafana dashboards work, see the official Grafana documentation.

1.5 - Usecases

This chapter contains the documentation of the reference applications created by Cisco that you can deploy on your Great Bear sites.

1.5.1 - Edge MQTT

Overview

The Edge MQTT application is able to run in two modes:

  • as a standalone MQTT message broker at the edge, providing a lightweight approach to publish/subscribe messaging between devices and applications, and
  • in bridging mode, forwarding edge MQTT messages to an external MQTT broker.

Bridging mode brings resiliency to MQTT at the edge, storing messages when the external broker becomes unavailable (for example, because of configuration errors, or site network outage), and automatically transmitting the buffered messages when the broker is accessible again.

1.5.1.1 - Architecture

When the Edge MQTT app is deployed without the MQTTS Bridge parameter defined, the app runs as a standard MQTT message broker at the edge. Since it uses Mosquitto, it is lightweight and suitable for use on all devices, and provides publish/subscribe messaging between devices and applications.

When running with bridging enabled, the Edge MQTT application can receive MQTT messages from local nodes and applications, and forward them to an external MQTT broker. The advantage of using Edge MQTT in this way is that it can prevent message loss in the case of a network outage. When the external MQTT broker becomes unavailable (for example, because of a configuration error, or because the WAN connection of the site is temporarily down), Edge MQTT can store the incoming messages, and automatically transmit them when the broker becomes available.

Edge MQTT overview
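For illustration, Mosquitto expresses this kind of bridge with configuration directives like the following (a sketch only; Edge MQTT generates its actual configuration from the deployment parameters, and the connection name, address, and credentials below are placeholders):

# illustrative Mosquitto bridge section, not the app's generated config
connection edge-bridge
address broker.example.com:8883
topic # out 1
remote_username <bridge-username>
remote_password <bridge-token>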

1.5.1.2 - Deployment

When you are deploying the Edge MQTT application to a site, you must specify how to access the MQTT broker by setting the following parameters.

  • MQTT port: Port to use for accessing the MQTT broker.
  • MQTT service name: Service name for accessing the MQTT broker.
  • MQTT over Websocket port: Port to use for accessing the MQTT broker over WebSocket protocol.
  • Ports mode: Specifies how the broker service is accessible.
    • node-port: Exposes the Service on each Node’s IP at a static port from the range 30000-32767.
    • host-port: Opens a port on the node on which the pod is running.
  • Persistence: If enabled, Edge MQTT buffers the messages to disk (otherwise, the messages are buffered in memory).
  • Enable monitoring: If you have enabled monitoring for the site (see Configure site), you can enable application monitoring to collect metrics about the performance of your deployment. For details on accessing the collected metrics, see Observing Great Bear.
  • MQTTS Bridge hostname:port: The address of the MQTT broker to send the messages to in hostname:port format.
  • Bridge username: The username used to authenticate on the MQTT broker.
  • Bridge token: The authentication token that Edge MQTT uses to access the broker. Make sure that it provides write access.
  • Skip server certificate validation: Do not validate the server certificate of the MQTTS bridge, in order to allow connections to your own MQTT bridge over TLS.

If the Edge MQTT app is deployed without the MQTTS Bridge parameter defined, then the app runs as a standard MQTT message broker at the edge.

Edge MQTT parameters
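After deployment, you can verify that the broker is reachable with the standard Mosquitto command-line clients (a sketch; replace the node address and port with the values from your deployment):

# subscribe in one terminal
mosquitto_sub -h <node-ip> -p <mqtt-port> -t test/topic

# publish from another terminal
mosquitto_pub -h <node-ip> -p <mqtt-port> -t test/topic -m "hello from the edge"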

1.5.2 - Event-Driven Display

The Event-Driven Display (EDD) application allows you to display visual content at a site, for example, video surveillance footage or digital signage. EDD provides everything you need to present content from a remote source on one or more connected displays within a site. EDD can react to external events, and display content from on-premises sources (for example, the RTSP stream from a local camera), or from a remote site (like a webpage). This enables you to display a restaurant’s menu on the screens above the counter, show arrival and departure times at an airport, or present any other content that’s relevant for your environment.

It’s important to note that EDD does not contain any business logic; it is only responsible for showing the content of the source on the connected displays.

1.5.2.1 - Architecture

Event-Driven Display has three main components:

  • The EDD Agent runs on the nodes connected to the displays on your site, rendering content on the display. The agent displays the content in full screen.
  • The App Sync Server runs locally on an edge device of the site and manages the EDD agents, sending them information about the content to display. Every site where you deploy the Event-Driven Display will have a sync server deployed.
  • The App Control UI is a web service running in the cloud. It gives you an overview of your sites and connected displays, allowing you to remotely set the content that should be shown on each display.

The sync servers periodically fetch the configuration of the connected displays of their site from the App Control UI and distribute this configuration to the agents.

The agent does not require any local user intervention: it starts automatically when the device is powered up, shows a series of splash screens, then connects to the local sync server to get the URL of the content to display.

Event-Driven Display overview

1.5.2.2 - Deployment

Prerequisites

To deploy Event-Driven Display, the site must have at least one display connected to a node. You should also have access to the App Control UI to interact with your Event-Driven Display deployments: you will need the URL of the Control UI of your organization, and your username and password.

Deploying EDD

EDD Parameters

  1. To deploy the Event-Driven Display application to a site, you need an API key so that you can control the Event-Driven Display App from the App Control UI. To retrieve this key, complete the following steps:

    1. Open and log in to the App Control UI in your browser:

      https://app-control.greatbear.io

    2. Click the Generate API Key button on the side menu, and copy the token that is displayed on the screen.

      Generate API Key in the App Control UI

  2. Deploy the Event-Driven Display application to a site, and configure EDD:

    1. Paste the API key into the deployment parameters. That way the EDD application deployed at the edge can communicate with the App Control UI. This allows you to control the content displayed on the screen remotely. (For details on using the App Control UI, see this section.)

    2. (Optional) Configure other parameters as needed:

      • Enable Monitoring: Enable monitoring of the Event-Driven Display application.
      • Enable VNC server: Deploy a VNC server alongside EDD so that you can remotely access the screen.
      • Password for VNC server: This is only required when enabling VNC server, and allows a custom password for accessing the screen remotely.

Note: Until you configure the device to display some content, it displays status and diagnostic information on the splash screen.

Splash Screen

The splash screen is the default page for the EDD agent while it waits for content to display, and can also indicate status messages of the agent. The splash screen has two states: ready mode and error mode.

Ready mode: In this mode the EDD agent has successfully connected to the local Sync Server, and is ready to receive content from the App Control UI. When you remove content from the EDD Agent, the agent will display the splash screen again.

Ready Mode Splash Screen

Error mode: The screen below is shown when an error has occurred during the setup of the EDD agent, or when there is a connection issue with the local Sync Server. If you encounter this page, first try redeploying the EDD application to the site. If the error persists, contact the Great Bear administrator of your organization, or the Great Bear Support Team.

Error Mode Splash Screen

Displaying Content Using the App Control UI

Once you have successfully deployed EDD to the nodes in your site, you can use the App Control UI to determine what content each node displays. The steps to do so are:

  1. Open and log in to the App Control UI in your browser:

    https://app-control.greatbear.io

  2. The ‘EDD Nodes’ section of the App Control UI lists the devices that are available to display content to.

    List of display devices

    Only the sites and devices that are currently online are shown. Devices that are shut down, or unreachable due to network connection issues, are not displayed in the App Control UI.

    The following information is shown about each device:

    • Node Name: The name of the device
    • Site Name: The site where the device is located
    • Tags: Any tags associated with the node
    • Current URL: The source URL that the device is currently displaying
    • Actions: Buttons to edit or remove content displayed on the device

    To visualize which devices are located at the same site, select Overview to see a tree-like view of the available nodes and sites.

  3. Use the App Control UI to change the content that a device is displaying:

    • Select the Edit icon of the device on which you want to show the new content. If you have many devices, you can use the Search field to find them.
    • In the URL field, enter the source URL. You can use local (on-premises) and public websites that are accessible using HTTP, HTTPS, or RTSP. Note that the target device must be able to access the display source at the network level.
    • Click the SEND button to push your changes to the target device. The device will automatically receive the new source URL and will start displaying the desired content.

    Changing the EDD content
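    For example, a local camera feed might be addressed with a URL like the following (hypothetical address and path):

    rtsp://192.168.1.20:554/stream1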

  4. To remove content from a screen, select the desired device and click the Remove Content button. The target device stops displaying the content and shows the EDD splash screen.

Both the Bulk Update Content and Bulk Remove Content buttons can be used with more than one target device selected.

1.5.3 - ThousandEyes Enterprise Agent

The ThousandEyes Enterprise Agent provides visibility from within your data centers, cloud VPCs/VNETs and branches. The application runs a variety of layered monitoring tests, in order to gain insight into your network and application performance, and sends this information to the ThousandEyes platform. In addition to active monitoring, Enterprise Agents also offer SNMP-based monitoring, as well as discovery and topological mapping of internal network devices.

ThousandEyes Enterprise Agent - Enterprise Agent Path Visualization

This enables users to correlate visibility across multiple layers (application, network, device, and Internet routing) for greater context around application performance, including:

  1. Network Path and Performance - ThousandEyes Path Visualization maps end-to-end network paths from Enterprise Agents to service endpoints across any network, including SD-WAN overlay/underlay tunnels, and visually presents BGP routes.

  2. Application Performance - Monitor the usability and performance of SaaS and internally hosted apps by measuring page load times of individual web components or by using scripted multistep transactions that replicate user interactions and validate API logic.

  3. Device Health - Poll switches, routers and firewalls using SNMP to collect device health data, such as memory, CPU and performance impacting interface metrics.

1.5.3.1 - Architecture

The ThousandEyes platform uses agents to run tests against targets configured for measurement. An agent is a Linux server running custom ThousandEyes software, which checks in with an agent collector to obtain instructions from the ThousandEyes platform. Generally speaking, these are lightweight machines that are tasked solely with acting as ThousandEyes agents.

The Enterprise Agent is an endpoint that is used to test targets from inside your network, or from infrastructure within your control. Enterprise Agents collect data for the exclusive use of your account, and are not shared with other organizations.

ThousandEyes Architecture

The data collected by the Enterprise Agents enables users of the ThousandEyes platform to:

  • Gain end-to-end visibility across devices, LAN, service providers and service components, such as DNS, DHCP and cloud/SaaS networks.
  • Monitor availability and performance of business-critical SaaS applications, such as Microsoft 365, Salesforce, Webex and more.
  • Monitor performance of on-ramp cloud connectivity between your data center and VPC, as well as inter-region and inter-availability zone (AZ) cloud network.
  • Perform a pre-migration readiness audit, ISP evaluation and application performance benchmarking, and gain visibility into the overlay and underlay network paths and performance.

1.5.3.2 - Deployment

Prerequisites

Note: The ThousandEyes Enterprise Agent currently cannot be deployed to Raspberry Pi devices through Great Bear. Once the ThousandEyes team releases an ARM-compatible container image, the Great Bear team should be able to support this as well. For details on how to deploy the ThousandEyes Enterprise Agent to Raspberry Pi devices outside of Great Bear, see the ThousandEyes documentation.

You must have an active ThousandEyes account and access to the ThousandEyes platform to deploy this application.

The ThousandEyes Enterprise Agent has several key hardware requirements that must be satisfied for the application to install successfully on the node and operate properly. In particular, the node requires:

  • CPU Count: 2
  • RAM: 2 GB
  • Hard Disk Space: 20 GB
  • Active network interface and internet connection

ThousandEyes Token

Before you can deploy the ThousandEyes Enterprise Agent application to a site, you must obtain a token from the ThousandEyes platform to link the agent to your account. For full instructions on how to find the token, see the ThousandEyes documentation.

In summary, first open the platform page ‘Cloud & Enterprise Agents - Agent Settings’, and click Add New Enterprise Agent.

ThousandEyes Enterprise Agent - Add New Enterprise Agent

Then find and copy the token by clicking Copy next to the Account Group Token field. You can now use this token in the deployment steps below.

ThousandEyes Enterprise Agent - Find Account Group Token

Deploying

When you are deploying the ThousandEyes Enterprise Agent application to a site, you must specify the following parameters.

  • TOKEN: The authentication token to use for accessing the ThousandEyes platform. For details on getting a token, see the ThousandEyes Token section or the ThousandEyes documentation.
  • TEAGENT_INET option: IPv4/IPv6 connectivity: Specifies whether to access the ThousandEyes platform using IPv4 or IPv6. Valid values are 4 and 6.

ThousandEyes Enterprise Agent parameters

1.5.4 - Wedge

The Wedge App provides the ability to run AI workloads at the edge. You can configure these workloads remotely by using the App Control UI. For example, you can specify the video input for the app, the model that should be run, and where inferences should be written, as well as set various pre- and postprocessing parameters.

Moreover, edge devices typically have limited resources in terms of available memory and CPU, which can create significant performance issues for AI applications. In particular, models may not fit on a single device, or their performance can fall below target levels. Therefore, the Wedge app provides the ability to run distributed AI workloads at the edge, splitting AI models and deploying them across multiple devices. Based on node parameters and network topology, the Wedge AI SDK can profile the model and determine the optimal way to split it between the devices in order to improve performance.

For example, if you are running an AI model on a camera, then the size of the model is limited based on the available hardware memory, and the inference speed performance of the model (in terms of FPS) can become slow with more complex models. With Wedge, you can split the model and run a part of it on the camera, and other parts of it on other edge nodes. That way you can run bigger, better models with higher frame rates, without having to change the camera hardware.

1.5.4.1 - Architecture

Wedge has three main components:

  • The Wedge App provides the ability to run AI workloads at the edge, as determined by the user through the App Control UI. By using the Wedge AI SDK, you can split AI models based on node parameters and network layout, and then use the App Control UI to deploy the split models across multiple devices.
  • The App Sync Server runs locally on an edge device of the site and manages the nodes running the Wedge App, sending them information such as which AI model to run. Every site where you deploy Wedge will have a sync server deployed.
  • The App Control UI is a web service running in the cloud. It gives you an overview of your sites and nodes that are running the Wedge App, allowing you to remotely set the video input for the app, the model that should be run, where inferences should be written, and various pre- and postprocessing parameters.

The sync servers periodically fetch the configuration of the connected Wedge apps of their site from the App Control UI and distribute this configuration to the applications.

The Wedge App does not require any local user intervention: it starts automatically when the device is powered up, and connects to the local sync server to get the desired AI workload parameters.

Wedge overview

1.5.4.2 - Deployment

Prerequisites

To deploy Wedge, you need access to the App Control UI to interact with your Wedge deployments: you will need the URL of the Control UI of your organization, and your username and password.

Deploying Wedge

Wedge Parameters

  1. To deploy the Wedge application to a site, you need an API key so that you can control the Wedge App from the App Control UI. To retrieve this key, complete the following steps:

    1. Open and log in to the App Control UI in your browser:

      https://app-control.greatbear.io

    2. Click the Generate API Key button on the side menu, and copy the token that is displayed on the screen.

      Generate API Key in the App Control UI

  2. Deploy the Wedge application to a site, and configure Wedge:

    1. Paste the API key into the deployment parameters. That way the Wedge application deployed at the edge can communicate with the App Control UI. This allows you to remotely control the video input for the app, the model that should be run, where inferences should be written, and various pre- and postprocessing parameters. (For details on using the App Control UI, see this section.)

    2. (Optional) Configure other parameters as needed:

      • Enable Monitoring: Enable monitoring of the Wedge application.

Running AI Models using the App Control UI

Once you have successfully deployed Wedge to the nodes in your site, you can use the App Control UI to configure each node’s desired AI workload parameters. The steps to do so are:

  1. Open and log in to the App Control UI in your browser:

    https://app-control.greatbear.io

  2. The ‘Wedge Nodes’ section of the App Control UI lists the devices that are available to run AI workloads.

    List of Wedge devices

    Only the sites and devices that are currently online are shown. Devices that are shut down, or unreachable due to network connection issues, aren’t displayed in the App Control UI.

    The following information is shown about each device:

    • Node Name: The name of the device
    • Site Name: The site where the device is located
    • Input: The URL that the device is currently using as video input
    • Output: The URL where the device is currently writing inference output
    • Model: The URL of the model that the device is currently running
    • Actions: Buttons to edit or remove content displayed on the device

    To visualize which devices are located at the same site, select Overview to see a tree-like view of the available nodes and sites.

  3. Use the App Control UI to change the AI workload parameters that a device is running:

    • Select the Edit icon of the device you want to configure. If you have many devices, you can use the Search field to find them.

    Inference parameters

    • In the Input field, enter the source URL of the video feed for your model. You can use local (on-premises) and public video sources that are accessible using RTSP. Note that the target device must be able to access the video source at the network level.

    • In the Output field, enter the MQTT URL of the broker where the device should write the AI model inferences. Note that the target device must be able to access the broker at the network level.

    If running split models, the Input and Output fields can also be the TCP input/output of another node.

    • Set the source URL of the model you wish the device to run in the Model field. Wedge downloads the model from this URL; AWS and MinIO S3 buckets are also supported as sources.

    • As many AI models require standard preprocessing of images from the video input, you can enable the Preprocess toggle switch, and adjust the parameters as needed for your environment.

    • If the model is an image classifier and you wish to label the inference as a specific class, you can enable the Postprocess toggle switch and specify the URL of a .txt file listing the classes. Wedge downloads the file from this URL; AWS and MinIO S3 buckets are also supported as sources.

    Inference preprocessing and postprocessing parameters

    • Finally, click the SEND button to push your changes to the target device. The device will automatically receive the new AI workload parameters, and will download the required files, connect to the input feed, and start running the desired AI model.
  4. To remove the model and all other Wedge configuration from a node, select the desired device and click the Remove Content button. The target device disconnects from the input feed, and stops running the AI model.

Both the Bulk Update Content and Bulk Remove Content buttons can be used with more than one target device selected.

Examples

For an example of how to deploy a whole model with Wedge, see Deploying an AI Model to Great Bear.

1.6 - Creating, Publishing and Deleting Applications

This chapter describes how to develop and package applications that can be deployed to Great Bear sites.

1.6.1 - Overview

What do you need to know

In addition to the development skills required to create the application, you must be familiar with:

Limitations and requirements

For your application to work optimally with Great Bear, note the following points.

  • Your application should be able to accept parameters via environment variables. This is the main method to pass user-specified configuration parameters and other information from Great Bear to the applications.
  • Currently, Great Bear does not provide any local storage solutions for the application data.
  • If your application has any external component that is not deployed to the edge, it must be developed and managed independently from the application uploaded to the Great Bear Application Store.
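
For example, an application could read a user-specified deployment parameter from an environment variable as described above (a minimal Python sketch; the GREETING parameter name and its default value are hypothetical):

# read a deployment parameter that Great Bear passes as an environment variable
import os

greeting = os.getenv('GREETING', 'Hello from the edge')  # hypothetical parameter
print(greeting)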

App development process overview

The high-level process of developing an application for Great Bear consists of the following steps.

  1. Create the application.
  2. Build a docker image of the application. Make sure to create the image for the architecture of your Great Bear nodes. If you want the application to run on multiple architectures (for example, in generic x86_64 virtual machines and on physical ARM-based hardware), create a multiarch image.
  3. Download and install the Great Bear Packaging SDK Tool.
  4. Use the Packaging SDK Tool to create your Great Bear Application Package.
  5. Use the Packaging SDK Tool to validate your Great Bear Application Package.
  6. Use the Packaging SDK Tool to publish your Great Bear Application Package to the Great Bear platform.
  7. Deploy your application to your Great Bear sites from the Application Store.

1.6.2 - Great Bear Packaging SDK

An SDK tool for managing Great Bear Application Packages (GBAP)

Installation

Prerequisites

A GBAP contains a root Helm Chart for managing embedded Kubernetes resources used to deploy your application to Great Bear edge sites.

If you intend to include Chart dependencies located in remote repositories, then you will need to install and configure the Helm client tool.

Download the binary

You can download the latest version of the Packaging SDK tool from the following links:

When prompted for credentials:

username: prod
password: <access-token as stated in your tenant on-boarding pack>

If you’re unsure about the access token, contact the Great Bear administrator of your organization, or the Great Bear Support Team.

Download the archive correlating to your OS and architecture, unpack it, and add the binary to your PATH.
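
For example, on a Linux amd64 host the installation might look like this (a sketch; the archive name is an assumption and depends on your download):

# unpack the downloaded archive and install the gbear binary
tar -xzf gbear-linux-amd64.tgz
sudo mv gbear /usr/local/bin/
gbear --help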

Log in to Great Bear

The Packaging SDK tool operates as a client to the Great Bear API and needs to be configured with your specific tenant API Key.

  1. To access the API key, log in to the Great Bear Dashboard, select the Profile icon in the top right, then click Copy API Key.

    The Profile menu

  2. Use the login command to register your API key:

    gbear login --apiKey=J1iAsrcu6xJFIJ7N0v5gYeichqIsdz8TZUcar-JjqSAYUuJD0L685I2ZFgqWcL4eYZ6tIAgZ2QRWPj0wxVY6lw
    Storing the new API Key...
    Connecting to the Great Bear platform...
    Login Status: Success
    

    The tool stores the API Key together with the Great Bear API URL within a config file in your HOME path. The specified API Key is then used to verify successful connectivity to the Great Bear API.

    cat ~/.config/gbear/config.yaml
    # Great Bear API endpoint to use (overridden by $GBEAR_HOST, defaults to https://api.greatbear.io/api)
    host: https://api.greatbear.io/api
    login:
      # apiKey is the API token that you can copy from the dashboard (overridden by $GBEAR_API_KEY)
      apiKey: J1iAsrcu6xJFIJ7N0v5gYeichqIsdz8TZUcar-JjqSAYUuJD0L685I2ZFgqWcL4eYZ6tIAgZ2QRWPj0wxVY6lw
    
  3. You can use the login command at any time to verify a previously registered API key. The following example uses the optional --verbose flag to output the Great Bear API URL and API Key:

    gbear login --verbose
    Connecting to the Great Bear platform...
    -- INFO: URL: https://api.greatbear.io/api/v1/applications
    -- INFO: API Key: J1iAsrcu6xJFIJ7N0v5gYeichqIsdz8TZUcar-JjqSAYUuJD0L685I2ZFgqWcL4eYZ6tIAgZ2QRWPj0wxVY6lw
    Login Status: Success
    

Optionally, you can set the API Key using an environment variable; this value could be pulled from an accessible Vault and set within your CICD pipeline:

export GBEAR_API_KEY=J1iAsrcu6xJFIJ7N0v5gYeichqIsdz8TZUcar-JjqSAYUuJD0L685I2ZFgqWcL4eYZ6tIAgZ2QRWPj0wxVY6lw

Quick Start User Guide

The user guides for the Packaging SDK tool assume that you have a good understanding of the Great Bear Application Package. With this knowledge, you can use the tool to:

1.6.3 - Create an Application Package

The packaging SDK tool provides a create command to construct a new Great Bear Application Package (GBAP) tree on the local filesystem. You need to edit the created GBAP to build out the package templates and metadata with the detail required to publish your containerised application to the Great Bear platform.

Usage:
  gbear application create PATH [flags]
  gbear app create PATH [flags]

Flags:
      --helm string   A path to existing Helm chart to add as sub-chart to the created GBAP
  -h, --help          help for create

The command takes an optional PATH (defaults to current working directory) as a destination to create a GBAP file system tree.

How you operate the create command really depends on your scenario; common scenarios have been captured in the following supported use cases.

Application used only with Great Bear

I have a containerised application and I want to create a new GBAP which will contain a new Helm Chart exclusively developed and maintained for the GB platform. The embedded GBAP Helm Chart will be modified and version controlled as part of the GBAP tree.

Example create command for this use case:

gbear app create /myPath

Expected output:

Creating a GBAP with required fields....
Enter the chart name:
myAppChart
Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
1.0.0
Enter the application name:
myApp
Enter the application description:
My App Description
Successfully created a GBAP in /myPath

Resulting GBAP Tree:

/myPath
├── Chart.yaml
├── gbear
│   └── appmetadata.yaml

The tool has created the following GBAP assets:

  • The root Chart.yaml, containing the entered chart name and version.
  • A boilerplate gbear/appmetadata.yaml, containing the entered application name and description.

You can now manually extend the GBAP tree to build out your Helm template to manage Kubernetes resources, and specialise the appmetadata to publish your existing containerised application to the GB Application Store and define GB deployment parameters. For details, see Developing an App for Great Bear.

Application used and version controlled only with Great Bear

I have a containerised application and an existing Helm Chart that I want to convert into a GBAP to be exclusively developed and maintained for the GB platform. The embedded GBAP Helm Chart will be modified and version controlled as part of the GBAP tree.

The existing Helm Chart folder contains:

myPath/
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
└── values.yaml

Example create command for this use case:

gbear app create /myPath

Expected output:

The specified path contains an existing Helm Chart, would you like to wrap this as a GBAP [Y/N]:
Y
Creating a GBAP with required fields....
Enter the application name:
myApp
Enter the application description:
My App Description
Successfully created a GBAP in path /myPath

Resulting GBAP Tree:

myPath
├── Chart.yaml
├── gbear
│   └── appmetadata.yaml
├── templates
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
└── values.yaml

The tool has only created the boilerplate gbear/appmetadata.yaml, containing the entered application name and description. The GBAP root directly contains the existing Helm templates and values.yaml assets to manage Kubernetes deployment.

You can now manually edit the gbear/appmetadata.yaml to set the fields required to render your application in the Great Bear Application Store, together with any Great Bear deployment configuration options to override fields within the Chart values.yaml. For details, see Developing an App for Great Bear.

Application with existing Helm chart imported to Great Bear

I have a containerised application and a Helm Chart. I’d like to keep the Helm Chart agnostic to the GB platform; it is already version controlled and maintained for generic Kubernetes deployment of my application. I want to create a GBAP which allows me to pull in my existing agnostic Helm Chart as part of the package to be published to the GB platform.

Example create command for this use case:

gbear app create /pathTo/myGBAP --helm=/pathTo/v101/myAppChart

Expected output:

Creating a GBAP with required fields....
Enter the chart name:
MyChartName
Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
1.0.0
Enter the application name:
MyApp
Enter the application description:
My App Desc
Successfully created a GBAP in path /pathTo/myGBAP

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     # contains dependency of myAppChart @ version 1.0.1
|── charts
|   └── myAppChart # Physical copy of myAppChart tree @ version 1.0.1
|        ├── templates
│        |    ├── _helpers.tpl
│        |    ├── deployment.yaml
│        |    ├── service.yaml
|        └── values.yaml
├── gbear
│   └── appmetadata.yaml

The tool has created the following GBAP assets:

  • The root Chart.yaml, containing the entered chart name and version, together with the dependency declaration for the specified --helm Chart ‘myAppChart’. The root Chart.yaml is exclusive to the GBAP; it will be maintained and version controlled as part of the GBAP.
  • The specified Helm Chart ‘myAppChart’ is copied in as a sub-chart to the root GBAP Chart, allowing the source Helm Chart to be decoupled from the GBAP and maintained separately. You can use the packaging SDK tool to subsequently add updated versions of the sub-chart ‘myAppChart’ to the GBAP at a cadence aligned with your agnostic application development.
  • A boilerplate gbear/appmetadata.yaml, containing the entered application name and description.

You can now manually edit the gbear/appmetadata.yaml to set the fields required to render your application in the Great Bear Application Store, together with any Great Bear deployment configuration options to override fields within the Chart values.yaml. For details, see Developing an App for Great Bear.

1.6.4 - Add an Application Dependency

There are two fundamental approaches to add a Helm dependency to your Great Bear Application Package (GBAP). You can either

Note: If you are adding a dependency to a published application package, you must subsequently follow the Update an Application Package steps.

Add a dependency from a remote repository

If you add a dependency to a GBAP from a remote repository, then during validation the packaging tool will attempt to resolve all dependencies and extract the correlating charts within the GBAP charts sub-folder.

  1. Edit the root GBAP Chart.yaml and declare a new dependency from a remote repository in the dependencies field.

    The following example adds the Logstash chart as a dependency:

    cat pathToMyGBAP/Chart.yaml
    
    apiVersion: v2
    appVersion: "1.0.0"
    description: My application
    name: my-app
    version: 0.0.1
    dependencies:
      - name: "logstash"
        version: "5.1.4"
        repository: "https://charts.bitnami.com/bitnami"
    
  2. Make sure that you have configured the local Helm client to log in to the specified remote repository.

    The following example registers a public repo (bitnami), and a private repo that requires credentials.

    helm registry login https://charts.bitnami.com/bitnami
    helm registry login https://myrepo.com/charts -u <myRepoUserName> -p <myRepoPassword>
    

Add a dependency from a local filesystem path

The packaging SDK tool provides an add command, which can be used to add or update a Helm dependency from the local filesystem to an existing GBAP.

Usage:
  gbear application add PATH [flags]
  gbear app add PATH [flags]

Flags:
      --helm string   A path to existing Helm chart to add as sub-chart to the existing GBAP
  -h, --help          help for add

The command takes an optional PATH (defaults to the current working directory) pointing to the existing GBAP to modify.

The add command with the --helm flag supports the following scenarios:

Update an existing GBAP sub-chart dependency

Scenario 1: I have a GBAP with an existing sub-chart dependency. I have a newer version of the correlating sub-chart in its local repository, and would like to update the GBAP dependency with the new Chart version.
Scenario 2: I have a GBAP with an existing sub-chart dependency. I would like to revert the sub-chart dependency to an older version from its local repository.

Note: This correlates with the Application with existing Helm chart imported to Great Bear scenario.

Example for scenario 1:

Existing GBAP:

/pathTo/myGBAP
├── Chart.yaml     # contains dependency of myAppChart @ version 1.0.1
|── charts
|   └── myAppChart # Physical copy of myAppChart tree @ version 1.0.1
|        ├── templates
│        |    ├── _helpers.tpl
│        |    ├── deployment.yaml
│        |    ├── service.yaml
|        └── values.yaml
├── gbear
│   └── appmetadata.yaml

Run the tool to add the local repository path of myAppChart @ version 1.0.2:

gbear app add /pathTo/myGBAP --helm=./pathTo/v102/myAppChart

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     # contains updated dependency of myAppChart @ version 1.0.2
|── charts
|   └── myAppChart # Physical copy of myAppChart tree @ version 1.0.2
|        ├── templates
│        |    ├── _helpers.tpl
│        |    ├── deployment.yaml
│        |    ├── service.yaml
|        └── values.yaml
├── gbear
│   └── appmetadata.yaml

Update an existing GBAP with an additional sub-chart dependency

Scenario 1: I have a GBAP with an existing sub-chart dependency. I would like to add another sub-chart dependency from its local repository to the GBAP.
Scenario 2: I have a GBAP without any dependencies. I would like to add my first sub-chart dependency from its local repository to the GBAP.

Example for scenario 1:

Existing GBAP:

/pathTo/myGBAP
├── Chart.yaml     
|── charts
|   └── myAppChart 
|        ├── templates
│        |    ├── _helpers.tpl
│        |    ├── deployment.yaml
│        |    ├── service.yaml
|        └── values.yaml
├── gbear
│   └── appmetadata.yaml

Run the tool to add the local repository path of anotherAppChart @ version 1.0.1:

gbear app add /pathTo/myGBAP --helm=/pathTo/v101/anotherAppChart

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     # contains an additional dependency of anotherAppChart @ version 1.0.1
|── charts
|   └── myAppChart 
|   |    ├── templates
│   |    |    ├── _helpers.tpl
│   |    |    ├── deployment.yaml
│   |    |    ├── service.yaml
|   |    └── values.yaml
|   └── anotherAppChart # physical copy of anotherAppChart tree @ version 1.0.1
|        ├── templates
│        |    ├── _helpers.tpl
│        |    ├── deployment.yaml
│        |    ├── service.yaml
|        └── values.yaml
├── gbear
│   └── appmetadata.yaml

1.6.5 - Validate an Application Package

The packaging SDK tool can verify that an existing Great Bear Application Package (GBAP) is well formed, providing a benchmark to gate attempts to publish the GBAP to the Great Bear platform.

Note: The validation command only performs static analysis; it does not perform any checks that require a Kubernetes API server to execute (for example, helm install --dry-run --debug).

Validation Steps

The tool performs the following validation steps.

  1. Parse and validate the embedded GBAP gbear/appmetadata.yaml file.

    The tool performs both syntactic and semantic analysis on the appmetadata.yaml file to ensure the contents can be successfully parsed by the GB platform when the GBAP is published. Passing this validation step guarantees that the published GBAP can be successfully rendered in the Great Bear Application Store UI, and that the specified deployment configuration can be parsed successfully during the Great Bear site deployment procedure.

  2. Validate and lock Helm dependencies.

    • The Great Bear platform expects a GBAP to have all chart dependencies resolved and extracted within the charts subfolder. This ensures that the GBAP is fully encapsulated and removes the need to pull down Charts from remote repositories during the application deployment procedure at the edge. The resolved dependency tree is also used by the SDK tool to ensure the complete set of Kubernetes resource objects within the GBAP can be rendered successfully into correlating manifests.
    • The tool leverages Helm to resolve all dependencies specified in the root GBAP Helm chart. This includes any dependencies on remote repositories that have been directly entered into the root Chart.yaml outside of the SDK tool’s add --helm=XXX command.
    • All remote Helm Chart repositories hosting declared dependencies must be registered within the Helm client on the local host system.
      • Use the helm registry login command to configure authenticated access to each remote repository.
    • On successful dependency resolution, the GBAP will contain a dependency lock file, capturing the version tree used within the context of the complete GBAP validation procedure.
  3. Validate the root GBAP Helm Chart using the Helm linter; by default, any detected linter ERROR will cause the GBAP validation to fail. The following optional flags can be used to configure the Helm linter coverage:

    • --helmSubCharts: Instructs the tool to apply the Helm linter to all sub-charts within the GBAP charts folder.
    • --strict: Instructs the tool to fail GBAP validation on any Helm linter WARNING or ERROR.
    • --verbose: Instructs the tool to output INFO level messages returned from the Helm linter.
  4. Validate that the GBAP contains helm templates that declare the Kubernetes resource object labels expected by the Great Bear platform.

    • There is a set of recommended Kubernetes resource labels documented in the Kubernetes documentation, see:

      As these labels are generally accepted recommendations within Kubernetes, existing tools like the Helm linter do not verify or mandate that they exist within Helm charts.

    • The Great Bear platform mandates that a subset of these Kubernetes resource labels are defined within a GBAP (otherwise edge deployment fails). The validation command is configured with the following set of required labels and verifies that they are defined against each resource object declared within the embedded Helm templates.

      • Key: app.kubernetes.io/instance
        Description: A unique name identifying the instance of an application
        Example: myapp-abcxzy
        Type: string
        Use case: Used to attribute a deployment status to an app instance.
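
      For example, a Deployment template within the GBAP might declare this label as follows (a minimal excerpt; deriving the value from the Helm release name is a common convention, not a Great Bear-specific requirement):

      # excerpt from templates/deployment.yaml
      metadata:
        labels:
          app.kubernetes.io/instance: {{ .Release.Name }}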

SDK Validation Command

To validate your GBAP, run the gbear app validate command.

Usage:
  gbear application validate PATH [flags]
  gbear app validate PATH [flags]

Flags:
      --helmSubCharts   Validate Helm Sub Charts
  -h, --help   help for validate
      --strict          Validate fails on warnings
      --verbose         Verbose output

The command takes an optional PATH (defaults to the current working directory) pointing to the GBAP to validate. If the command detects any syntax and/or semantic states that would cause subsequent GBAP publish or deployment failures, it emits ERROR messages. If it encounters issues that break with convention or recommendation, it emits WARNING messages.

To capture validation failures within a CICD job state, the tool returns an indicative return code: 0 if the GBAP is valid, otherwise 1.

The in-line validation error messages seek to provide sufficient insight to resolve the related error. Additional help for specific validation errors can be found in the Troubleshooting section of this user guide.

Example 1 - Validate a GBAP in the current working directory:

cd /pathTo/myGBAP
gbear app validate

Example 2 - Validate a GBAP in the specified path:

gbear app validate /pathTo/myGBAP

Expected output for a valid GBAP:

Validating Package: /pathTo/myGBAP
- Validating Application Metadata...
- Validating Chart Dependencies...
***
- Performing Chart Lint...
- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Chart Lint: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     
|── charts
|   └── myAppChart 
|   |    ├── templates
│   |    |    ├── _helpers.tpl
│   |    |    ├── deployment.yaml
│   |    |    ├── service.yaml
|   |    └── values.yaml
├── gbear
│   └── appmetadata.yaml
|── Chart.lock    # Captured chart dependency version tree of the validated GBAP

1.6.6 - Publish an Application Package

Build and Upload the Application Container Image

All container images used by the Great Bear Application Package (GBAP) must be pushed to an accessible repository as a separate action to the GBAP publish. The required container images can be pushed to a repository using one of the following options:

  • (Preferred): Build and push images to a Great Bear hosted image repository. You are supplied with an image registry URL and credentials as part of your tenant provisioning on the Great Bear platform.

    1. Log in to the Great Bear system using docker. To do this, run the following command, replacing <tenant-registry-username>, <tenant-registry-password>, and <tenant-registry-url> with the credentials provided to you by Cisco:

      docker login --username=<tenant-registry-username> --password=<tenant-registry-password> <tenant-registry-url>
      

      Alternatively, you can exclude the username and password from this command, and enter them when prompted.

    2. In order to publish an application you need to build the docker image. To do this, run the following command from the directory containing the Dockerfile, replacing <tenant-registry-url>, <application-name> and <version-number> to reflect the values in your Helm chart:

      docker build -t <tenant-registry-url>/<application-name>:<version-number> .
      

      Note: If you are building an application for multiple architectures, see Multi-architecture docker images.

    3. Push this image to the Great Bear repository. Run the following command, replacing <tenant-registry-url>, <application-name>, and <version-number> with the values used in the previous step:

      docker push <tenant-registry-url>/<application-name>:<version-number>
      
  • Build and push images to your own public repository that does not require any pull secrets.

Multi-architecture docker images

If you are building an application for multiple architectures then you can use buildx to create your docker image.

  1. First, use the following commands to prepare docker to use buildx:

    # create and switch to using a new buildx builder for multi-arch images
    docker buildx create --use --name=appbuilder
    
  2. Next, use the following modified build command, which will create images for Intel 64-bit and Arm 64-bit architectures. Replace <application-name> and <version-number> to reflect the values in your Helm chart:

    # build images for Intel 64-bit and Arm 64-bit architectures
    docker buildx build --platform linux/amd64,linux/arm64 -t <tenant-registry-url>/<application-name>:<version-number> .
    
    
    
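    Note: Multi-architecture images built with buildx typically cannot be loaded into the local image cache, so depending on your setup you may need to append the --push flag to the buildx build command to upload the image directly to your registry.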

Log in to Great Bear

The SDK tool must be configured with your API key using the gbear login command.

Publish the Great Bear Application Package

The packaging SDK tool provides a publish command to push a validated GBAP to the Application Store hosted on the Great Bear platform.

Usage:
  gbear application publish PATH [flags]
  gbear app publish PATH [flags]

Flags:
  -h, --help   help for publish
      --saveArchive string   Save published archive file to the directory specified as an absolute or relative path

This command takes an optional PATH (defaults to the current working directory) to a GBAP, and ensures the package is valid before publishing it to the Great Bear platform.

If the command detects any validation errors it emits ERROR messages and does not attempt to publish the GBAP.

If the GBAP is valid, the command creates a compressed GBAP archive file and publishes it to the GB platform API. By default, the created GBAP archive file is deleted from your local disk after the publish command completes. You can use the --saveArchive flag to specify a local path to store the archive file for future reference.

For details on specific publishing errors, see the Troubleshooting section of this user guide.

Example 1 - Publish a GBAP in the current working directory:

cd /pathTo/myGBAP
gbear app publish

Expected output if publishing is successful:

Validating Package: /pathTo/myGBAP
-- Validating Application Metadata...
-- Validating Chart Dependencies...
***
-- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
Publishing Package Archive File: my-gb-app-0.0.1.tgz
Publish Status: Success

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     
|── charts
|   └── myAppChart 
|   |    ├── templates
│   |    |    ├── _helpers.tpl
│   |    |    ├── deployment.yaml
│   |    |    ├── service.yaml
|   |    └── values.yaml
├── gbear
│   └── appmetadata.yaml
|── Chart.lock            # Captured chart dependency version tree of the validated GBAP

Example 2 - Publish a GBAP in the specified path, using the --saveArchive flag to store the published GBAP archive file:

gbear app publish /pathTo/myGBAP --saveArchive /pathTo/archives

Expected output if publishing is successful:

Validating Package: /pathTo/myGBAP
-- Validating Application Metadata...
-- Validating Chart Dependencies...
***
-- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
Saved Package Archive: /pathTo/archives/my-gb-app-0.0.1.tgz
Publishing Package Archive File: my-gb-app-0.0.1.tgz
Publish Status: Success

Resulting GBAP tree:

/pathTo/myGBAP
├── Chart.yaml     
|── charts
|   └── myAppChart 
|   |    ├── templates
│   |    |    ├── _helpers.tpl
│   |    |    ├── deployment.yaml
│   |    |    ├── service.yaml
|   |    └── values.yaml
├── gbear
│   └── appmetadata.yaml
|── Chart.lock            # Captured chart dependency version tree of the validated GBAP
/pathTo/archives
|── my-gb-app-0.0.1.tgz   # Compressed GBAP archive file.

1.6.7 - Update a Published Application Package

To publish a new version of a Great Bear application, complete the steps below that apply to your situation. Depending on what you have changed in your application (for example, the image, or only the chart), not every step is mandatory. However, at a minimum, you must update the Great Bear Application Package and publish a new version to the Great Bear Application Store.

  1. If you have updated the image of the application, upload it to the image repository accessible by Great Bear. For details, see Build and Upload the Application Container Image.
  2. If you have updated the application image or any Great Bear Application Package files (for example, Chart.yaml, templates, values.yaml, gbear/appmetadata.yaml):
    • If you have updated the application image, edit the values.yaml file of the correlating chart and update the image.tag field to the new image version.
    • Update the version field of the root Chart.yaml to a new version.
    • Use the packaging SDK tool to publish the new Great Bear Application Package version. For details, see Publish the Great Bear Application Package.
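
For example, the two version updates might look like this (a sketch with hypothetical version numbers):

# values.yaml of the correlating chart
image:
  tag: "1.1.0"    # updated to reference the new image version

# root Chart.yaml of the GBAP
version: 0.0.2    # bumped package version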

1.6.8 - Delete an Application Package

Application packages are meant to be immutable, which means that if you need to change something, the preferred way of doing so is releasing a new version (typically a patch version, where the last segment of the version number is increased). This approach makes it possible for the system to keep track of different versions and to handle version upgrades gracefully.

Note the following points:

  • By default, new application deployments will automatically use the latest published version of the application.
  • Removing an old application version does not immediately stop instances of the application that are already running.
  • If you remove an application version from the Application Store, no application upgrade is performed automatically, but the application can stop working at any time when it needs to load the (removed) application package during its normal operation.

In the rare case when you need to delete an application version from the Application Store, you can use the DELETE /api/v1/applications/<repo>/<app>/<version> HTTP endpoint. You can find the values for <repo>/<app>:

  • on the dashboard, at the end of the URL of Application Store entries or in the ApplicationID field of an application deployment’s details page, or
  • in the application list available through the GET /api/v1/applications endpoint.

Delete an application version

  1. Make a note of the application identifier (<repo>/<app>) and the exact version number.

  2. Set your API key as a variable in your shell by running the following command:

    GBEAR_API_KEY=<your-api-key>
    

    To access the API key, log in to the Great Bear Dashboard, select the top right Profile icon, then click Copy API Key.

    The Profile menu

  3. Make sure that the given application version is not deployed to any sites.

  4. Issue the DELETE call using cURL (a Python alternative is sketched after this procedure):

    curl -H "x-id-api-key: ${GBEAR_API_KEY}" \
      -X DELETE \
      "https://api.greatbear.io/api/v1/applications/<repo>/<app>/<version>"
    

    For example, to delete version 1.2.3 of an application package called my-application in the main repository of my-tenant, you can use the following cURL command:

    curl -H "x-id-api-key: ${GBEAR_API_KEY}" \
      -X DELETE \
      "https://api.greatbear.io/api/v1/applications/my-tenant/my-application/1.2.3"
    
  5. Reload the dashboard and check that the application version has disappeared from the Application Store listing.
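
If you prefer scripting the call, here is a minimal Python sketch using the requests library; it issues the same request as the cURL example above, and the application identifier and version are placeholders:

import os
import requests

API_KEY = os.environ['GBEAR_API_KEY']        # set as shown above
APP_ID = 'my-tenant/my-application'          # <repo>/<app>
VERSION = '1.2.3'

resp = requests.delete(
    f'https://api.greatbear.io/api/v1/applications/{APP_ID}/{VERSION}',
    headers={'x-id-api-key': API_KEY},
)
resp.raise_for_status()                      # raises on HTTP errors
print(f'deleted {APP_ID} {VERSION}')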

Delete an application

If you want to delete all versions of an application:

  1. Make a note of the application identifier (<repo>/<app>) and the version numbers available. You can use the Application Store listing of the dashboard, or the GET /api/v1/applications endpoint to collect them.
  2. Complete the Delete an application version procedure for each version (or script the calls, as sketched below).
  3. Reload the dashboard and check that the application has disappeared from the Application Store listing.
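
Since this is just the per-version call repeated, it is easy to script. A short sketch building on the example above (the version list is a placeholder that you collect beforehand):

import os
import requests

API_KEY = os.environ['GBEAR_API_KEY']
APP_ID = 'my-tenant/my-application'          # <repo>/<app>
VERSIONS = ['1.2.1', '1.2.2', '1.2.3']       # collected from the dashboard or GET endpoint

for version in VERSIONS:
    resp = requests.delete(
        f'https://api.greatbear.io/api/v1/applications/{APP_ID}/{version}',
        headers={'x-id-api-key': API_KEY},
    )
    print(version, '->', resp.status_code)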

1.7 - Samples

To help you get started with Great Bear, we have created several tutorials, which the following sections walk through.

1.7.1 - Developing an App for Great Bear

This tutorial is the first part of a four-part series about creating a basic application for Great Bear, extending it with a liveness probe and metrics, and finally publishing it.

This guide gives an example of how to develop an application for Great Bear. The guide assumes that you are already familiar with using Docker and creating Docker images, creating Helm charts for Kubernetes, and deploying applications to Kubernetes clusters using Helm charts. The guide also assumes that you have docker and helm installed, and the required access credentials for Great Bear. For more information on the development process for Great Bear, see Great Bear Application Lifecycle.

You are going to create a very basic application called ‘Bear Echo’, which is one of the simplest apps you can deploy with Great Bear and can be useful for testing purposes. Bear Echo uses Python Flask to serve a simple webpage, and allows users to define a custom text message when deploying the app, which is then used as the header of the rendered web page.

Create the application and Dockerfile

  1. The first step is to create a new directory called bear-echo where you will save the code for the application.

  2. Next, write the Python code for the Bear Echo server: create a file in the bear-echo directory called server.py, copy the following code into it, and save the file (a quick local check of this route is sketched after these steps):

    # bear-echo/server.py
    
    import os
    import sys
    import importlib
    from flask import Flask
    
    _HTTP_PORT: int = int(os.getenv('HTTP_PORT') or '11777')
    _BASE_URL: str = os.getenv('BASE_URL') or '/'
    _ECHO_TEXT: str = os.getenv('ECHO_TEXT') or 'Hello World'
    
    app = Flask(__name__)
    
    @app.route(_BASE_URL, methods=['GET'])
    def echo():
        return(f'<hr><center><h1>{_ECHO_TEXT}</h1><center><hr>')
    
    def _main():
        print('ECHO_TEXT: ' + _ECHO_TEXT)
        print(f'Started on 0.0.0.0:{_HTTP_PORT}{_BASE_URL}')
        app.run(host='0.0.0.0', port=_HTTP_PORT)
    
    if __name__ == '__main__':
        _main()
    
  3. Now that we have our Python application, we need a Dockerfile to create the image that will be uploaded to the Great Bear Application Store. Create a file in the bear-echo directory called Dockerfile, copy the following code into it, and save the file:

    # bear-echo/Dockerfile
    
    FROM python:3.8-alpine
    
    WORKDIR /echo
    
    RUN pip install flask
    
    COPY server.py ./
    
    EXPOSE 11777
    
    CMD [ "python", "server.py" ]
    
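Before moving on, you can sanity-check the echo route locally with Flask's built-in test client. A minimal sketch, assuming Flask is installed locally and that you run it from the bear-echo directory (test_server.py is a hypothetical helper, not part of the tutorial's package):

# bear-echo/test_server.py

import os

# set the variable before importing, because server.py reads it at import time
os.environ['ECHO_TEXT'] = 'Testing'

import server

client = server.app.test_client()
resp = client.get('/')
assert resp.status_code == 200
assert b'Testing' in resp.data
print('echo route OK')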

Create the Great Bear Application Package (GBAP)

  1. With the Dockerfile created, you can now move on to creating the GBAP, which is used to publish the application to the Great Bear Application Store for subsequent deployment to Great Bear managed sites through the Great Bear Dashboard.
  2. Follow the steps to install and configure the Great Bear Packaging SDK tool.
  3. Use the SDK tool to create a new GBAP instance within your bear-echo directory, entering the desired fields as prompted:
    $ gbear app create bear-echo/GBAP
    Creating a GBAP with required fields....
    Enter the chart name:
    bear-echo
    Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
    
    Enter the application name:
    bear-echo
    Enter the application description:
    A simple HTTP echo service, based on Python Flask.
    Successfully created a GBAP in path bear-echo/GBAP
    
  4. The SDK tool has now created a boilerplate GBAP filesystem tree:
    bear-echo/GBAP
    ├── Chart.yaml
    └── gbear
        └── appmetadata.yaml
    

Author the GBAP Assets

The GBAP contains the following two fundamental elements:

  1. A Helm Chart to describe a set of Kubernetes resources used to deploy the application at Great Bear managed sites.
  2. Great Bear specific application metadata (gbear/appmetadata.yaml), containing:
    • Properties used to render the application in the Great Bear Application Store
    • Properties used to define application parameters which can be overridden when deploying the Helm Chart.

For further details on the GBAP structure, see the Great Bear Application Package Guide.

Create the Helm values file

  1. Inside the GBAP directory, we next need to create a file called values.yaml.

    bear-echo/GBAP
    ├── Chart.yaml
    ├── gbear
    │   └── appmetadata.yaml
    └── values.yaml
    
  2. This file contains the default values for our Bear Echo application, including read-only parameters like GB_SITE_NAME and GB_NODE_NAMES, which Great Bear provides as environment variables when it deploys the application (parsing these variables inside the container is sketched after this file). It also defines the image repository location; make sure to replace <tenant-registry-url> in the repository value with the tenant registry URL communicated to you by Cisco.

    bear-echo/GBAP/values.yaml:

    replicaCount: 1
    strategyType: Recreate
    
    image:
      repository: <tenant-registry-url>/bear-echo
      tag: 0.0.1
      pullPolicy: IfNotPresent
    
    imagePullSecrets:
    - name: gbear-harbor-pull
    
    config:
      # Config parameters described in application metadata
      # Their values will be injected here by the Great Bear deployer
      #
      GB_SITE_NAME: |
            Site name with "quotes" and 'quotes'
      GB_NODE_NAMES: |
            {"node1":"Display name 1 with \"quotes\" and 'quotes'","node2":"Display name 2 with \"quotes\" and 'quotes'"}
      ACCESS_PORT: |
            "31777" 
    service:
      type: NodePort
      port: 11777
    
    ports:
      http:
        port: 11777
        protocol: TCP
        nodePort: "{{ default 31777 (int .Values.config.ACCESS_PORT)}}"
    
    # ----
    
    nameOverride: ""
    
    fullnameOverride: ""
    
    nodeSelector: {}
    
    affinity: {}
    
    ingress:
      enabled: false
      istioGateway: ""
      servicePath: ""
    
    siteId: ""
    
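For illustration, this is roughly how an application can consume those injected values at runtime; a minimal Python sketch, assuming the variable names and value shapes shown in the sample above (note that the injected ACCESS_PORT value literally contains quote characters):

# sketch: reading the Great Bear-provided variables inside the container

import json
import os

site_name = (os.getenv('GB_SITE_NAME') or '').strip()
node_names = json.loads(os.getenv('GB_NODE_NAMES') or '{}')    # node ID -> display name
access_port = int((os.getenv('ACCESS_PORT') or '31777').strip().strip('"'))

print(f'site: {site_name!r}')
print(f'nodes: {list(node_names.values())}')
print(f'access port: {access_port}')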

Create the Helm template files

  1. The next step is to create the template files that produce our Kubernetes Deployment and the Service endpoint for it, along with some template helpers that are re-used throughout the chart. Inside the GBAP directory, create a new directory called templates:

    bear-echo/GBAP
    ├── Chart.yaml
    ├── gbear
    │   └── appmetadata.yaml
    └── templates
    
  2. The first file we will create in the new templates directory is _helpers.tpl, which will hold some template helpers that we can re-use throughout the chart. Your bear-echo/GBAP/templates/_helpers.tpl file should contain the following helpers:

    {{/*
    Expand the name of the chart.
    */}}
    {{- define "app.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
    {{- end -}}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "app.fullname" -}}
    {{- if .Values.fullnameOverride -}}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
    {{- else -}}
    {{- $name := default .Chart.Name .Values.nameOverride -}}
    {{- if contains $name .Release.Name -}}
    {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
    {{- else -}}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
    {{- end -}}
    {{- end -}}
    {{- end -}}
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "app.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
    {{- end -}}
    
    {{/*
    Common labels
    */}}
    {{- define "app.labels" -}}
    app.kubernetes.io/name: {{ .Chart.Name }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "app.chart" . }}
    {{- end -}}
    
    {{/*
    Selector labels
    */}}
    {{- define "app.selector" -}}
    app.kubernetes.io/name: {{ .Chart.Name }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end -}}
    
  3. The next file to create in the templates directory is the deployment.yaml file, which provides a basic manifest for creating the Kubernetes Deployment for our Bear Echo application. In our case, the bear-echo/GBAP/templates/deployment.yaml should contain:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "app.fullname" . }}
      labels: {{- include "app.labels" . | nindent 4 }}
    spec:
      replicas: {{ .Values.replicaCount }}
      strategy:
        type: {{ .Values.strategyType }}
      selector:
        matchLabels: {{- include "app.selector" . | nindent 6 }}
      template:
        metadata:
          labels: {{- include "app.selector" . | nindent 8 }}
        spec:
        {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{ toYaml . | indent 8 | trim }}
        {{- end }}
        {{- with .Values.podSecurityContext }}
          securityContext:
            {{ toYaml . | indent 8 | trim }}
        {{- end }}
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
    
            {{- with .Values.config }}
              {{- range $key, $value := . }}
              - name: {{$key}}
                # Some config parameters (GB_SITE_NAME, GB_NODE_NAMES)
                # can contain both quotes "" and '', for example: {"node1": "User's home"}
                # and the value itself should be quoted in order not to be parsed as object
                # so it's required to put it as a text field to make both quotes processable
                value: |
                                {{$value -}}
              {{- end }}
            {{- end }}
    
            {{- with .Values.ports }}
              ports:
              {{- range $key, $value := . }}
                - name: {{ $key }}
                  containerPort: {{ $value.port }}
                  protocol: {{ default "TCP" $value.protocol }}
              {{- end }}
            {{- end }}
            {{- with .Values.securityContext }}
              securityContext:
                {{ toYaml . | indent 12 | trim }}
            {{- end }}
            {{- with .Values.resources }}
              resources:
                {{ toYaml . | indent 12 | trim }}
            {{- end }}
        {{- with .Values.nodeSelector }}
          nodeSelector:
            {{ toYaml . | indent 8 | trim }}
        {{- end }}
        {{- with .Values.affinity }}
          affinity:
            {{ toYaml . | indent 8 | trim }}
        {{- end }}
        {{- with .Values.tolerations }}
          tolerations:
            {{ toYaml . | indent 8 | trim }}
        {{- end }}
    
  4. The final file needed to complete the templates directory is the service.yaml file, which provides a basic manifest for creating a Service endpoint for the Bear Echo Deployment. The bear-echo/GBAP/templates/service.yaml should contain this code:

    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "app.fullname" . }}
      labels: {{- include "app.labels" . | nindent 4 }}
    {{- with .Values.service.annotations }}
      annotations:
      {{- range $key, $value := . }}
        {{ $key }}: {{ $value | quote }}
      {{- end }}
    {{- end }}
    spec:
      type: {{ .Values.service.type }}
    {{- with .Values.ports }}
      ports:
      {{- range $key, $value := . }}
        - name: {{ $key }}
          port: {{ $value.port }}
          nodePort: {{ tpl $value.nodePort $ }}
          protocol: {{ default "TCP" $value.protocol }}
      {{- end }}
    {{- end }}
      selector: {{- include "app.selector" . | nindent 4 }}
    

Customise the Great Bear Application Metadata

  1. The SDK tool has generated the following boilerplate gbear/appmetadata.yaml file, containing the application name and description we entered when creating the GBAP.

    bear-echo/GBAP/gbear/appmetadata.yaml:

    displayName: bear-echo
    description: A simple HTTP echo service, based on Python Flask.
    # The following optional fields provide more detailed properties to describe your GBAP.
    # Uncomment as required and modify field values specifically for your application.
    
    # Specify whether the application should be deployed only once per Great Bear site (defaults to false if not explicitly defined)
    #singleton: true
    
    # A dictionary of key-value labels used to handle applications specially, for example kind: EXAMPLE. These should be explicit attributes.
    #labels:
    #   kind: EXAMPLE
    
    # A list of tags used by the Great Bear Application Store when presenting applications, for example the private attribute.
    #tags:
    #   - example-apps
    #   - blog
    
    # A list of supported architectures, for example [amd64, arm64v8]
    #architectures:
    #   - amd64
    
    # A path/URL of icon to use in the app store
    #icon: http://example.com/appicon
    
    # A list of configuration fields that are provided as run-time values of the Helm deployment.
    #configuration:
    #   - name: outputMessage                       # The name of the field.
    #     key: application.message                  # A dot-separated name where the value of this field will be added to the Helm values. Like postgres.image.tag. Defaults to config.{name} (current behavior).
    #     title: Output message                     # The label of the parameter to be displayed on the Great Bear dashboard when deploying the application.
    #     description: Message sent to the user     # The description of the parameter to be displayed on the Great Bear dashboard when deploying the application.
    #     value: Hello World                        # The default value of the parameter
    #     type: String                              # A simple string, rendered as a textbox
    #   - name: password
    #     key: application.password
    #     title: Initial password
    #     type: Secret                              # A string, rendered as a password input.
    #   - name: nodeport
    #     key: service.nodePorts.https
    #     type: NodePort                            # A port number in the 30000-32767 range.
    #     title: Node port to serve HTTPS on
    #     value: 37777
    #   - name: sitename
    #     key: application.mySite
    #     type: Runtime                             # An injected variable via Go Templating performed on the value of this field. Field is not shown on the UI.
    #     value: This application is running in {{.GB_SITE_ID}}  # This value will be substituted by GB when the application is deployed to the edge
    #   - name: forceLoginToggle
    #     key: application.forceLogin
    #     choices: [yes, no]                         # The UI recognises the [yes, no] options and renders the parameter as a toggle switch.
    #     value: yes                                 # Dashboard will display the toggle switch in the enabled position
    #   - name: applicationTextColor
    #     key: application.textColor
    #     choices: [black, red, blue]                # UI renders the parameter as a dropdown list, containing the specified values.
    #     value: red                                 # Dashboard will display a dropdown list with specified value 'red' as the default
    
  2. The next step is to specialise this boilerplate file with metadata for our bear-echo application, editing properties to determine its visibility in the Application Store, and to specify which parameters a user can set when the application is deployed. Edit your bear-echo/GBAP/gbear/appmetadata.yaml file to include the following properties, which allow ECHO_TEXT and ACCESS_PORT to be defined by a user deploying the application through Great Bear, along with other important details:

    displayName: bear-echo
    description: A simple HTTP echo service, based on Python Flask.
    icon: https://raw.githubusercontent.com/FortAwesome/Font-Awesome/5.15.4/svgs/solid/tools.svg 
    tags:
    - debug
    - sample
    # singleton: true means the app can be deployed at most once per site
    singleton: true
    configuration:
      - name: "ECHO_TEXT"
        title: "Message to say"
        description: "The application will echo this text"
        value: "A Hello from Great Bear."            # Default used if not overridden by the user
        type: String
      - name: "ACCESS_PORT"
        title: "Access HTTP port (30000-32767)"
        value: 31777                               # Default used if not overridden by the user
        type: NodePort
    

Validate the GBAP

  1. Now that we have finished authoring our GBAP assets, we can use the SDK tool to validate the contents:

    $ gbear app validate bear-echo/GBAP
    Validating Package: bear-echo/GBAP
    -- Validating Application Metadata...
    -- Validating Chart Dependencies...
    -- Validating Kubernetes Resource Labels...
    Validation Complete:
    Application Metadata: Passed (Errors: 0, Warnings: 0)
    Chart Dependencies: Passed (Errors: 0, Warnings: 0)
    Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)

For further details on GBAP validation, see Validate an Application Package.

  2. As the GBAP contains a Helm chart, it is good practice to test that the chart can be installed on a local Kubernetes cluster using:

    helm install bear-echo bear-echo/GBAP --dry-run --debug

Next steps

Assuming everything is set up correctly, we can now move on to extending the application in the next parts of the series.

1.7.2 - Publishing an App to Great Bear

This tutorial is the last part of the four-part series showcasing the development of an application for Great Bear.

This guide gives an example of how to publish the Bear Echo application developed in the previous guide to the Great Bear Application Store. If you haven’t created the Bear Echo application yet, then please take a look at Developing an App for Great Bear.

The guide assumes that you are already familiar with using Docker and creating Docker images, creating Helm charts for Kubernetes, and deploying applications to Kubernetes clusters using Helm charts. The guide also assumes that you have docker and helm installed, and have the required access credentials for Great Bear. For more information on the development process for Great Bear, see Creating, Publishing and Deleting Applications.

You are now going to publish the ‘Bear Echo’ application to Great Bear, which will make it appear in the Application Store and available to deploy to Great Bear managed sites. This is a two-stage process: first you build and upload the application image, then you publish the Great Bear Application Package.

Build and upload the application image

  1. First, navigate to the directory called bear-echo where you created the Python code, Dockerfile, and Helm chart for the application.

  2. Log in to the Great Bear system using docker. To do this, run the following command, replacing <tenant-registry-username>, <tenant-registry-password>, and <tenant-registry-url> with the credentials provided to you by Cisco:

    docker login --username=<tenant-registry-username> --password=<tenant-registry-password> <tenant-registry-url>
    

    Alternatively, you can exclude the username and password from this command, and enter them when prompted.

  3. In order to publish an application you need to build the docker image. To do this, run the following command from the directory containing the Dockerfile, replacing <tenant-registry-url>, <application-name>, and <version-number> to reflect the values in your Helm chart:

    docker build -t <tenant-registry-url>/<application-name>:<version-number> .
    

    Note: If you are building an application for multiple architectures, see Multi-architecture docker images.

    In this case, use bear-echo:0.0.1 for the application name and version.

  4. Push this image to the Great Bear repository. Run the following command, replacing <tenant-registry-url>, <application-name>, and <version-number> with the values used in the previous step:

    docker push <tenant-registry-url>/<application-name>:<version-number>
    

    In this case, use bear-echo:0.0.1 for the application name and version.

Publish the Great Bear Application Package

  1. Follow the steps to install and configure the Great Bear Packaging SDK tool.

  2. Use the SDK tool to publish the bear-echo package to the Great Bear Application Store. Run the following command:

    gbear app publish bear-echo/GBAP
    

    Expected output:

    Validating Package: bear-echo/GBAP/
    -- Validating Application Metadata...
    -- Validating Chart Dependencies...
    -- Validating Kubernetes Resource Labels...
    Validation Complete:
    Application Metadata: Passed (Errors: 0, Warnings: 0)
    Chart Dependencies: Passed (Errors: 0, Warnings: 0)
    Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
    Building Package Archive File...
    Publishing Package Archive File: bear-echo-0.0.1.tgz
    Publish Status: Success
    

Bear Echo App Store

Deploy and access Bear Echo

The Bear Echo application you’ve created and published is one of the simplest apps you can deploy with Great Bear, and can be useful for testing purposes. Bear Echo uses Python Flask to serve a simple webpage, and allows users to define a custom text message when deploying the app, which is then used as the header of the rendered web page.

Configuration

  • Message to say: The text message you want to be shown in the web page.
  • Access HTTP port: The TCP port the HTTP server listens to.

Bear Echo parameters

Access

  1. Open a browser and access the HTTP server using the port previously configured.

    You will see a webpage containing the message you have configured.

    Bear Echo output

    Note: To know which IP address to use, run ifconfig or ip a in a terminal on the node.

  2. (Optional) If you are deploying the Bear Echo app on a VM-based node, you can take the following steps to view the message in a browser on the host machine:

    1. On the host machine, enable SSH port forwarding. How to do that depends on the type of the node. For example, on a Vagrant node, run:

      vagrant ssh node-0 -- -L 31777:localhost:31777
      
    2. Open a browser and access the HTTP server on localhost:31777

      Bear Echo forward

1.7.3 - Adding Liveness Probe to an app

This guide shows you how to extend a Great Bear application with a basic liveness probe. Liveness probes are helpful for long-running applications that can get into a broken state, where the application is still running but is no longer functional and needs to be restarted. The simplest way to achieve this is to add a healthz endpoint served by the application. If this endpoint does not respond, or responds with an error, the application is restarted.

The following steps extend the Bear Echo application with a basic liveness probe.

Add healthz endpoint to the application

Add a new endpoint to your application. This endpoint serves the requests that check whether the app is still functional. For the Bear Echo application, add the following lines to the server.py file:

@app.route("/healthz")
def healthz():
    return "OK"

For reference, here’s the extended server.py:

# bear-echo/server.py

import os
import sys
import importlib
from flask import Flask

_HTTP_PORT: int = int(os.getenv('HTTP_PORT') or '11777')
_BASE_URL: str = os.getenv('BASE_URL') or '/'
_ECHO_TEXT: str = os.getenv('ECHO_TEXT') or 'Hello World'

app = Flask(__name__)

@app.route(_BASE_URL, methods=['GET'])
def echo():
    return(f'<hr><center><h1>{_ECHO_TEXT}</h1><center><hr>')

@app.route("/healthz")
def healthz():
    return "OK"

def _main():
    print('ECHO_TEXT: ' + _ECHO_TEXT)
    print(f'Started on 0.0.0.0:{_HTTP_PORT}{_BASE_URL}')
    app.run(host='0.0.0.0', port=_HTTP_PORT)

if __name__ == '__main__':
    _main()

Update deployment with the new liveness probe

Add a new liveness probe to the deployment file of the Helm chart of your application, for example:

livenessProbe:
  httpGet:
    path: /healthz
    port: {{ .Values.ports.http.port }}
  initialDelaySeconds: 5
  timeoutSeconds: 2
  periodSeconds: 10
  failureThreshold: 3
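
To make the probe parameters concrete, the following Python sketch approximates the timing behaviour of the kubelet's httpGet probe with the values above (an illustration only, not how Kubernetes is implemented):

# sketch: the probe timing above, approximated in Python (Ctrl-C to stop)

import time
import requests

URL = 'http://localhost:11777/healthz'

time.sleep(5)                                # initialDelaySeconds
failures = 0
while True:
    try:
        ok = requests.get(URL, timeout=2).ok     # timeoutSeconds
    except requests.RequestException:
        ok = False
    failures = 0 if ok else failures + 1
    if failures >= 3:                        # failureThreshold
        print('probe failed 3 times -> the container would be restarted')
        failures = 0
    time.sleep(10)                           # periodSeconds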

For reference, here’s the extended bear-echo/GBAP/templates/deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.fullname" . }}
  labels: {{- include "app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: {{ .Values.strategyType }}
  selector:
    matchLabels: {{- include "app.selector" . | nindent 6 }}
  template:
    metadata:
      labels: {{- include "app.selector" . | nindent 8 }}
    spec:
    {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{ toYaml . | indent 8 | trim }}
    {{- end }}
    {{- with .Values.podSecurityContext }}
      securityContext:
        {{ toYaml . | indent 8 | trim }}
    {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          livenessProbe:
            httpGet:
              path: /healthz
              port: {{ .Values.ports.http.port }}
            initialDelaySeconds: 5
            timeoutSeconds: 2
            periodSeconds: 10
            failureThreshold: 3
          env:

        {{- with .Values.config }}
          {{- range $key, $value := . }}
          - name: {{$key}}
            # Some config parameters (GB_SITE_NAME, GB_NODE_NAMES)
            # can contain both quotes "" and '', for example: {"node1": "User's home"}
            # and the value itself should be quoted in order not to be parsed as object
            # so it's required to put it as a text field to make both quotes processable
            value: |
                            {{$value -}}
          {{- end }}
        {{- end }}

        {{- with .Values.ports }}
          ports:
          {{- range $key, $value := . }}
            - name: {{ $key }}
              containerPort: {{ $value.port }}
              protocol: {{ default "TCP" $value.protocol }}
          {{- end }}
        {{- end }}
        {{- with .Values.securityContext }}
          securityContext:
            {{ toYaml . | indent 12 | trim }}
        {{- end }}
        {{- with .Values.resources }}
          resources:
            {{ toYaml . | indent 12 | trim }}
        {{- end }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
        {{ toYaml . | indent 8 | trim }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
        {{ toYaml . | indent 8 | trim }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
        {{ toYaml . | indent 8 | trim }}
    {{- end }}

Update and redeploy your application for the changes to take effect.

Next steps

By following these steps you should now have an application that is restarted if the liveness probe fails.

To improve your application further, continue with Adding Prometheus metrics to an app.

1.7.4 - Adding Prometheus metrics to an app

This guide shows you how to collect metrics from a Great Bear application you are developing. Metrics allow you to observe and monitor your application. The metrics are collected by Prometheus and can be visualized in Grafana. For details, see Observing Great Bear.

The following steps extend the Bear Echo application with basic metrics, and assume that you have already followed the previous sample on Adding Liveness Probe to an app.

Add metrics endpoint to the application

First you need to add a new metrics endpoint to your application, so that when Prometheus scrapes the HTTP endpoint of the application, the client library sends the current state of all tracked metrics to the Prometheus server.

This example uses the official Python client library for Prometheus and its example on adding metrics to a Flask application.

  1. Add the metrics endpoint to the server.

    from werkzeug.middleware.dispatcher import DispatcherMiddleware
    from prometheus_client import make_wsgi_app
    
    app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {
        '/metrics': make_wsgi_app()
    })
    
  2. Add a metric to the application. This example uses a counter metric type, which is incremented by one every time the base URL is accessed (a quick local check is sketched after these steps).

    from prometheus_client import Counter
    
    _COUNTER_METRIC = Counter('num_of_queries', 'Number of queries')
    
    @app.route(_BASE_URL, methods=['GET'])
    def echo():
        _COUNTER_METRIC.inc()
        return(f'<hr><center><h1>{_ECHO_TEXT}</h1><center><hr>')
    

    For reference, here’s the extended server.py of the Bear Echo application:

    # bear-echo/server.py
    
    import os
    import sys
    import importlib
    from flask import Flask
    from prometheus_client import Counter
    from werkzeug.middleware.dispatcher import DispatcherMiddleware
    from prometheus_client import make_wsgi_app
    
    
    _HTTP_PORT: int = int(os.getenv('HTTP_PORT') or '11777')
    _BASE_URL: str = os.getenv('BASE_URL') or '/'
    _ECHO_TEXT: str = os.getenv('ECHO_TEXT') or 'Hello World'
    
    _COUNTER_METRIC = Counter('num_of_queries', 'Number of queries')
    
    app = Flask(__name__)
    
    app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {
        '/metrics': make_wsgi_app()
    })
    
    
    @app.route(_BASE_URL, methods=['GET'])
    def echo():
        _COUNTER_METRIC.inc()
        return(f'<hr><center><h1>{_ECHO_TEXT}</h1><center><hr>')
    
    
    @app.route("/healthz")
    def healthz():
        return "OK"
    
    
    def _main():
        print('ECHO_TEXT: ' + _ECHO_TEXT)
        print(f'Started on 0.0.0.0:{_HTTP_PORT}{_BASE_URL}')
        app.run(host='0.0.0.0', port=_HTTP_PORT)
    
    
    if __name__ == '__main__':
        _main()
    
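You can verify the counter locally before rebuilding the image. A minimal sketch, assuming the extended server above is running on localhost:11777 (prometheus_client exposes a counter named num_of_queries as num_of_queries_total):

# sketch: hit the echo route, then read the counter back from /metrics

import requests

requests.get('http://localhost:11777/')                    # increments num_of_queries
metrics = requests.get('http://localhost:11777/metrics').text
for line in metrics.splitlines():
    if line.startswith('num_of_queries_total'):
        print(line)                                        # for example: num_of_queries_total 1.0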

Update the Dockerfile

  1. The official example recommends using uwsgi to run the application. Update the Dockerfile of the application to do that.

    # bear-echo/Dockerfile
    
    FROM python:3.8-alpine
    
    WORKDIR /echo
    
    COPY requirements.txt .
    
    RUN set -e; \
        apk add --no-cache --virtual .build-deps \
        gcc \
        libc-dev \
        linux-headers \
        ; \
        pip install --no-cache-dir -r requirements.txt; \
        apk del .build-deps;
    
    COPY server.py ./
    
    EXPOSE 11777
    
    CMD uwsgi --http 0.0.0.0:11777 --wsgi-file server.py --callable app
    
  2. You can see that the Dockerfile now uses a bear-echo/requirements.txt file to keep track of the dependencies used by the application server. Create this file with the following content:

    # bear-echo/requirements.txt
    
    flask
    prometheus-client
    uwsgi
    

Update the Chart to make metrics available

Providing metrics is optional: the user can enable metrics when deploying the app. If metrics are enabled, a PodMonitor is deployed, and Prometheus discovers the metrics endpoint of your app through it. You can customize several settings related to scraping.

  1. To add the PodMonitor to the chart, create the bear-echo/GBAP/templates/podmonitor.yaml file with the following content.

    {{- if eq .Values.config.MONITORING "on" }}
    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: {{ include "app.fullname" . }}-prometheus-exporter
      labels: {{- include "app.labels" . | nindent 4 }}
    spec:
      podMetricsEndpoints:
        - interval: 15s
          port: http
          path: /metrics
      jobLabel: {{ template "app.fullname" . }}-prometheus-exporter
      namespaceSelector:
        matchNames:
          - {{ .Release.Namespace }}
      selector:
        matchLabels:
          {{- include "app.selector" . | nindent 6 }}
    {{- end }} 
    
  2. To allow users to decide whether to enable metrics when deploying the Bear Echo app, add a toggle switch for the PodMonitor as a deployment parameter in gbear/appmetadata.yaml within the GBAP:

        - name: MONITORING
          title: Enable monitoring. Make sure you have enabled observability for the site you're deploying beforehand.
          value: "off"
          choices: ["on", "off"]
    

    For reference, here’s the full bear-echo/GBAP/gbear/appmetadata.yaml file of the Bear Echo application:

    displayName: bear-echo
    description: A simple HTTP echo service, based on Python Flask.
    icon: https://raw.githubusercontent.com/FortAwesome/Font-Awesome/5.15.4/svgs/solid/tools.svg 
    tags:
    - debug
    - sample    
    singleton: true    # singleton: true means the app can be deployed at most once per site
    configuration:
      - name: "ECHO_TEXT"
        title: "Message to say"
        description: "The application will echo this text"
        value: "A Hello from Great Bear."            # Default used if not overridden by the user
        type: String
      - name: "ACCESS_PORT"
        title: "Access HTTP port (30000-32767)"
        value: 31777                               # Default used if not overridden by the user
        type: NodePort
      - name: MONITORING
        title: Enable monitoring. Make sure you have enabled observability for the site you're deploying beforehand.
        value: "off"
        choices: ["on", "off"]
    
  3. Finally, you need to edit the bear-echo/GBAP/values.yaml file and add the monitoring variable to the config section:

    config:
      MONITORING: ""
    

    For reference, here’s the full Values file:

    replicaCount: 1
    strategyType: Recreate
    
    image:
      repository: <tenant-registry-url>/bear-echo
      tag: 0.0.1
      pullPolicy: IfNotPresent
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    config:
      # Config parameters described in application metadata
      # Their values will be injected here by the Great Bear deployer
      #
      GB_SITE_NAME: |
            Site name with "quotes" and 'quotes'
      GB_NODE_NAMES: |
            {"node1":"Display name 1 with \"quotes\" and 'quotes'","node2":"Display name 2 with \"quotes\" and 'quotes'"}
      ACCESS_PORT: |
            "31777"
      MONITORING: ""
    
    service:
      type: NodePort
      port: 11777
    
    ports:
      http:
        port: 11777
        protocol: TCP
        nodePort: "{{ default 31777 (int .Values.config.ACCESS_PORT)}}"
    
    # ----
    
    nameOverride: ""
    
    fullnameOverride: ""
    
    nodeSelector: {}
    
    affinity: {}
    
    ingress:
      enabled: false
      istioGateway: ""
      servicePath: ""
    
    siteId: ""
    
  4. Update and redeploy your application for the changes to take effect.

Next steps

By following these steps you should now have an application with metrics delivered to Prometheus, which should also be visible in Grafana. For more information on Grafana, see Observing Great Bear.

Bear Echo Metrics

1.7.5 - Extending an App with the Sync Server

This guide shows you how to extend the Bear Echo sample app to include the Sync Server. The Sync Server allows the App Control Service to communicate with the app in real time, for example, to change the configuration of the deployed app.

Outline

  1. Goal
  2. Architecture
  3. Extending Bear Echo with the Sync Server
  4. Docker Image
  5. Helm Packaging
  6. Great Bear Metadata

Goal

The Developing an App for Great Bear tutorial has shown you how to create the Bear Echo application. But what if you want to interact with the application at the edge after it has already been deployed? This demo shows you how to modify the ECHO_TEXT variable at runtime, without triggering new deployments.

The goal is to deploy the same Bear Echo application, and display an HTML page with the most recent data payload it has received. At first, it will be the same ECHO_TEXT that was provided as an environment variable at deploy time, as in the original sample.

You can use the Application Control Service dashboard to connect to the edge site and send data payloads by using the Sync Server.

Architecture

The Sync Server is deployed on the edge, alongside your application (Bear Echo in this case). Once it is running, it connects to the Application Control Service using a WebSocket connection. It then creates listeners and exposes events for your application to use.

Sync Server Architecture

Practically, your application needs to provide a few configurations when connecting to the Sync Server, and then add some Socket.io listeners for the events the Sync Server emits to you.

These events are outlined in the bear-echo/app/syncer.py SyncClient that is shown later.

Extending Bear Echo

Let’s revisit the Bear Echo app, and add some additional calls to connect to the Sync Server, and do something with one of the event handlers.

To configure connecting to the Sync Server, you need:

  1. The URL of the Sync Server for the Sync Client to connect to.
  2. The ID of the Great Bear Node where the application (Bear Echo) is running.
  3. The name of the application itself.

The last two are used by the App Control Service to identify the particular application running on the particular node. The code should look mostly familiar; the important differences from the original sample are called out after the listing:

# bear-echo/app/main.py

from flask import Flask
from syncer import SyncClient
import json
import os
import uuid

HTTP_PORT: int = int(os.getenv('HTTP_PORT') or '11777')
BASE_URL: str = os.getenv('BASE_URL') or '/'
ECHO_TEXT: str = os.getenv('ECHO_TEXT') or 'Hello World'

SYNC_SERVER_HOST: str = os.getenv('SYNC_SERVER_HOST')
SYNC_SERVER_PORT: str = os.getenv('SYNC_SERVER_PORT')
GB_NODE_ID: str = os.getenv('GB_NODE_ID') or ''
GB_NODE_IDS: str = os.getenv('GB_NODE_IDS') or '{}'
APP_NAME: str = os.getenv('APP_NAME') or 'bear-echo-runtime'

class EchoSyncer(SyncClient):
    """extends the SyncClient and overrides the "onNewData" listener"""
    def onNewData(self, data):
        global ECHO_TEXT
        print(f'ECHO_TEXT updated from {ECHO_TEXT} to {data}')
        ECHO_TEXT = data

app = Flask(__name__)

@app.route(BASE_URL, methods=['GET'])
def echo():
    return(f'<hr><center><h1>{ECHO_TEXT}</h1><center><hr>')

def main():
    print('ECHO_TEXT: ' + ECHO_TEXT)
    print(f'Started on 0.0.0.0:{HTTP_PORT}{BASE_URL}')
    nodeIDMap = json.loads(GB_NODE_IDS)
    nodeID = nodeIDMap[GB_NODE_ID] if GB_NODE_ID in nodeIDMap else str(uuid.uuid4())
    sync = EchoSyncer(
        syncServerHost='http://%s:%s' % (SYNC_SERVER_HOST, SYNC_SERVER_PORT),
        gbNodeID=nodeID,
        appName=APP_NAME,
        deployID='demo',
    )
    sync.start()
    app.run(host='0.0.0.0', port=HTTP_PORT) # blocking
    sync.stop()

if __name__ == '__main__':
    main()

This is almost the same as the original Bear Echo sample, except for:

  • the extra configuration parameters, and
  • the EchoSyncer class.

The EchoSyncer class extends the SyncClient with extra functionality and default websocket handlers, and it overrides the onNewData handler to update the ECHO_TEXT. That way Bear Echo displays the most recent data payload in an HTML page.

Okay, but what does the SyncClient actually do? Let’s take a look:

# bear-echo/app/syncer.py

from datetime import datetime
from enum import Enum
from threading import Thread
import socketio
import logging
import time

logging.basicConfig(level=logging.DEBUG)


class AppStatus(Enum):
    NULL = 0
    ONLINE = 1
    ERROR = 2
    OFFLINE = 3
    INITIALIZING = 4
    PAYLOAD_ERROR = 5


def nowTS():
    # now timestamp as nondelimited year-month-day-hour-minute-second string
    return datetime.now().strftime("%Y%m%d%H%M%S")


class SyncClient:
    """
    To be used directly, or subclassed with desired methods overridden.
    public methods to override:
        onConnect()
        onConnectError(errMsg)
        onDisconnect()
        onNewData(data)
        onRemoveContent()
        onCheckStatus()
        onCatchAll(data)

    Once the SyncClient is initialized, start listening with
        sync.start()

    Stop with:
        sync.stop()
    """
    def __init__(self, syncServerHost, gbNodeID, appName, deployID=None):
        # evaluate the default at call time; a def-time default like deployID=nowTS()
        # would be computed once, at import, and shared by every instance
        deployID = deployID or nowTS()
        self.syncServerHost = syncServerHost
        self.gbNodeID = gbNodeID
        self.appName = '%s-%s' % (appName, deployID)
        self.appID = '%s-%s-%s' % (gbNodeID, appName, deployID)
        self.fullSyncServerURL = '%s?nodeId=%s&appName=%s&appId=%s' % (
                self.syncServerHost,
                self.gbNodeID,
                self.appName,
                self.appID)

        logging.debug('initializing sync client...')
        logging.debug('    syncServerHost: <%s>', self.syncServerHost)
        logging.debug('    nodeID:         <%s>', self.gbNodeID)
        logging.debug('    appName:        <%s>', self.appName)
        logging.debug('    appID:          <%s>', self.appID)

        self.running = False
        self.connected = False
        self.connection = Thread(target=self.connect, daemon=False)

    # override me
    def onConnect(self):
        logging.info('%s connected', self.appName)

    # override me
    def onConnectError(self, err):
        logging.error('%s connection error: %s', self.appName, err)

    # override me
    def onDisconnect(self):
        logging.info('%s disconnected', self.appName)

    # override me
    def onNewData(self, data):
        logging.info('%s received new data: <%s>', self.appName, data)
        # supports an optional return to emit a response back to the server.
        # if an exception is raised, an error response will be emitted back to
        # the server.
        return None

    # override me
    def onRemoveContent(self):
        logging.info('%s received remove content', self.appName)
        # supports an optional return
        return None

    # override me
    def onCheckStatus(self, msg=None):
        logging.info('%s received on check status: %s', self.appName, msg)
        # supports an optional return
        return None

    # override me
    def onCatchAll(self, data=None):
        logging.info('%s received an unexpected event: <%s>', self.appName, data)

    # not blocking
    def start(self):
        logging.info('%s starting...', self.appName)
        self.running = True
        self.connection.start()

    # blocking
    def join(self):
        self.connection.join()

    def stop(self):
        self.running = False
        self.connected = False

    def connect(self):
        sio = socketio.Client(logger=False, engineio_logger=False, ssl_verify=False)
        logging.info('%s connecting to <%s>...', self.appName, self.fullSyncServerURL)

        @sio.event
        def connect():
            self.onConnect()

        @sio.event
        def connect_error(err):
            self.onConnectError(err)

        @sio.event
        def disconnect():
            self.onDisconnect()
            self.connected = False

        @sio.on('newData')
        def onNewData(data):
            try:
                resp = self.onNewData(data)
                if resp is not None:
                    logging.info('onNewData handler returned a response: <%s>', resp)
                    return resp
                return "ok"
            except Exception as e:
                logging.error('onNewData handler raised an exception: %s', e)
                return "error: %s" % e


        @sio.on('removeContent')
        def onRemoveContent():
            try:
                resp = self.onRemoveContent()
                if resp is not None:
                    logging.info('onRemoveContent handler returned a response: <%s>', resp)
                    return resp
                return "ok"
            except Exception as e:
                logging.error('onRemoveContent handler raised an exception: %s', e)
                return "error: %s" % e

        @sio.on('checkStatus')
        def onCheckStatus(msg=None):
            heartbeat = {'status': AppStatus.ONLINE.value, 'appId': self.appID}
            try:
                statusMsg = self.onCheckStatus(msg)
                if statusMsg is not None:
                    heartbeat['status_msg'] = statusMsg
            except Exception as e:
                heartbeat['status'] = AppStatus.PAYLOAD_ERROR.value
                heartbeat['error'] = 'exception: %s' % e
            logging.debug('status check response %s', heartbeat)

            # legacy
            sio.emit('sendStatus', heartbeat)

            # this return value is delivered as the acknowledgement to the
            # server. note: cannot return a tuple. must return a single val
            return heartbeat

        @sio.on('*')
        def onCatchAll(data=None):
            self.onCatchAll(data)

        while self.running:
            if (not self.connected):
                try:
                    sio.connect(self.fullSyncServerURL)
                    self.connected = True
                    sio.wait()
                # TODO(sam): handle the "Already Connected" exception better
                except Exception as e:
                    logging.error('connection error: %s', e)
                    time.sleep(5)
            else:
                logging.debug("already connected. looping...")
                time.sleep(5)

The SyncClient has been written generically, so you should be able to copy-paste this into your project with minimal modification. As shown above, the best way to use the SyncClient is to extend the class and override the methods as necessary.

But let’s go into more detail on how this SyncClient works:

The SyncClient uses Socket.io (which is really just a wrapper around websockets) under the hood to connect to the Sync Server. After an instance of the SyncClient has been initialized, .start() connects and registers Socket.io handlers for all of the events emitted by the Sync Server. If the SyncClient has been subclassed, or any handlers have been overridden, those are used instead of the defaults. The default handlers simply log that they have been hit, along with any data they have received.
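
As a quick recap of the pattern, here is a minimal sketch of subclassing the SyncClient (the Sync Server address is hypothetical; in the Helm chart shown below it is passed in through the SYNC_SERVER_HOST and SYNC_SERVER_PORT environment variables):

# sketch: the minimal way to consume the SyncClient shown above

from syncer import SyncClient

class MySyncer(SyncClient):
    def onNewData(self, data):
        print('got payload:', data)
        return 'ok'            # emitted back to the server as the acknowledgement

sync = MySyncer(
    syncServerHost='http://syncserver:8010',   # hypothetical in-cluster address
    gbNodeID='node-0',
    appName='my-app',
)
sync.start()                   # non-blocking; the client keeps reconnecting
# ... run your application ...
sync.stop()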

Adding the Docker Image

Now that you have the application code ready to go, create a Docker image that can be included in our Helm charts later on.

First, define your requirements:

# bear-echo/app/requirements.txt

python-socketio==5.7.1
python-engineio==4.3.4
websocket-client==1.4.1
requests==2.28.1
Flask==2.2.2

And now, the Dockerfile:

# bear-echo/Dockerfile

FROM python:3.9

WORKDIR /usr/src/app

COPY app/requirements.txt .

RUN apt-get update && \
    apt-get install libsm6 libxext6 -y && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

COPY app/. .

EXPOSE 11777

CMD ["python", "-u", "main.py"]

At this point, your “bear-echo” project should look like:

├─ bear-echo/
    ├─ Dockerfile
    ├─ app/
        ├─ main.py
        ├─ syncer.py
        ├─ requirements.txt

Packaging the application as a Helm Chart

You should now have a Docker image for the new Bear Echo edge application. Next, create the Helm charts.

To keep things organized and easy to understand, and because the Sync Server is a separate but dependent chart, you can take advantage of Helm subcharts. This means you are going to create a new Helm chart for Bear Echo, and then import the Sync Server chart as a dependency.

  1. Create a new Helm chart, and then wipe out the default YAML files:

    helm create chart
    rm -r chart/templates/*
    
  2. Copy the following files into your new helm chart:

    # bear-echo/chart/Chart.yaml
    
    apiVersion: v2
    name: bear-echo-runtime
    description: A Bear Echo (with Sync Server) example helm chart
    type: application
    version: 0.0.1
    appVersion: "0.0.1"
    
    icon: https://raw.githubusercontent.com/FortAwesome/Font-Awesome/6.1.2/svgs/solid/volume-high.svg
    
    dependencies: 
      - name: sync-server
        version: 0.0.1
        # alias needed to avoid issue with hyphenated key names in values.yaml
        # https://github.com/helm/helm/issues/2192
        alias: syncserver
    
    # bear-echo/chart/values.yaml
    
    syncserver:
      fullnameOverride: "bear-echo-syncserver"
      service:
        port: 8010
    
      config:
        apiKey: ""
        gbSiteID: ""
        gbSiteName: ""
        controlServerURL: ""
    
      image:
        repository: harbor.eticloud.io/gbear-dev/great-bear-sync-server
        pullPolicy: Always
        tag: develop
    
      imagePullSecrets:
        - name: gbear-harbor-pull
    
    
    # for bear-echo helm charts
    
    fullnameOverride: ""
    service:
      port: 11777
    
    config:
      requestPath: "/"
      defaultEchoText: "hi"
      # needs to be base64-encoded JSON, for example: echo -n '{}' | base64
      gbNodeIDs: "e30="
      appName: "bear-echo-runtime"
    
    image:
      repository: harbor.eticloud.io/gbear-dev/great-bear-bear-echo-runtime
      pullPolicy: Always
      tag: develop
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    {{/* bear-echo/chart/templates/_helpers.tpl */}}
    
    {{/*
    Expand the name of the chart.
    */}}
    {{- define "bear-echo.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "bear-echo.fullname" -}}
    {{- if .Values.fullnameOverride }}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- $name := default .Chart.Name .Values.nameOverride }}
    {{- if contains $name .Release.Name }}
    {{- .Release.Name | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
    {{- end }}
    {{- end }}
    {{- end }}
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "bear-echo.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Common labels
    */}}
    {{- define "bear-echo.labels" -}}
    helm.sh/chart: {{ include "bear-echo.chart" . }}
    {{ include "bear-echo.selectorLabels" . }}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    {{- end }}
    
    {{/*
    Selector labels
    */}}
    {{- define "bear-echo.selectorLabels" -}}
    app.kubernetes.io/name: {{ include "bear-echo.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end }}
    
    # bear-echo/chart/templates/all.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "bear-echo.fullname" . }}
      labels:
        {{- include "bear-echo.labels" . | nindent 4 }}
    spec:
      replicas: 1
      selector:
        matchLabels:
          {{- include "bear-echo.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "bear-echo.selectorLabels" . | nindent 8 }}
        spec:
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: HTTP_PORT
                  value: {{ .Values.service.port | quote }}
                - name: BASE_URL
                  value: {{ .Values.config.requestPath | quote }}
                - name: ECHO_TEXT
                  value: {{ .Values.config.defaultEchoText | quote }}
                - name: SYNC_SERVER_HOST
                  value: {{ .Values.syncserver.fullnameOverride | quote }}
                - name: SYNC_SERVER_PORT
                  value: {{ .Values.syncserver.service.port | quote }}
                - name: GB_NODE_ID
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: GB_NODE_IDS
                  value: |-
                                    {{ .Values.config.gbNodeIDs | b64dec }}
                - name: APP_NAME
                  value: {{ .Values.config.appName | quote }}
              ports:
                - name: http
                  containerPort: {{ .Values.service.port }}
                  protocol: TCP
              livenessProbe:
                exec:
                  command:
                  - cat
                  - main.py
                initialDelaySeconds: 10
                periodSeconds: 5
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "bear-echo.fullname" . }}
      labels:
        {{- include "bear-echo.labels" . | nindent 4 }}
    spec:
      type: {{ .Values.service.type }}
      ports:
        - port: {{ .Values.service.port }}
          targetPort: http
          protocol: TCP
          name: http
      selector:
        {{- include "bear-echo.selectorLabels" . | nindent 4 }}
    

    You should now have a directory structure like:

    ├─ bear-echo/
        ├─ Dockerfile
        ├─ app/
            ├─ main.py
            ├─ syncer.py
            ├─ requirements.txt
        ├─ chart/
            ├─ Chart.yaml
            ├─ values.yaml
            ├─ templates/
                ├─ _helpers.tpl
                ├─ all.yaml
    
  3. Finally, add the Sync Server subchart. To do this, copy and paste the following simplified YAML files:

    # bear-echo/chart/charts/sync-server/Chart.yaml
    
    apiVersion: v2
    name: sync-server
    description: A Sync Server Helm Chart
    type: application
    version: 0.0.1
    appVersion: "0.0.1"
    
    # bear-echo/chart/charts/sync-server/values.yaml
    
    config:
      apiKey: ""
      gbSiteName: ""
      gbSiteID: ""
      controlServerURL: ""
    
    image:
      repository: harbor.eticloud.io/gbear-dev/great-bear-sync-server
      pullPolicy: Always
      tag: develop
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    fullnameOverride: "syncserver"
    
    service:
      type: ClusterIP
      port: 8090
    
    {{/* bear-echo/chart/charts/sync-server/templates/_helpers.tpl */}}
    
    {{/*
    Expand the name of the chart.
    */}}
    {{- define "sync-server.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "sync-server.fullname" -}}
    {{- if .Values.fullnameOverride }}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- $name := default .Chart.Name .Values.nameOverride }}
    {{- if contains $name .Release.Name }}
    {{- .Release.Name | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
    {{- end }}
    {{- end }}
    {{- end }}
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "sync-server.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Common labels
    */}}
    {{- define "sync-server.labels" -}}
    helm.sh/chart: {{ include "sync-server.chart" . }}
    {{ include "sync-server.selectorLabels" . }}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    {{- end }}
    
    {{/*
    Selector labels
    */}}
    {{- define "sync-server.selectorLabels" -}}
    app.kubernetes.io/name: {{ include "sync-server.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end }}
    
    # bear-echo/chart/charts/sync-server/templates/all.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "sync-server.fullname" . }}
      labels:
        {{- include "sync-server.labels" . | nindent 4 }}
    spec:
      replicas: 1
      selector:
        matchLabels:
          {{- include "sync-server.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "sync-server.selectorLabels" . | nindent 8 }}
        spec:
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: API_KEY
                  value: {{ .Values.config.apiKey | quote }}
                - name: PORT_SYNC_SERVER
                  value: {{ .Values.service.port | quote }}
                - name: GB_SITE_NAME
                  value: {{ .Values.config.gbSiteName | quote }}
                - name: GB_SITE_ID
                  value: {{ .Values.config.gbSiteID | quote }}
                - name: CONTROL_SERVER_URL
                  value: {{ .Values.config.controlServerURL | quote }}
              ports:
                - name: http
                  containerPort: {{ .Values.service.port }}
                  protocol: TCP
              livenessProbe:
                exec:
                  command:
                  - cat
                  - app.js
                initialDelaySeconds: 10
                periodSeconds: 5
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "sync-server.fullname" . }}
      labels:
        {{- include "sync-server.labels" . | nindent 4 }}
    spec:
      type: {{ .Values.service.type }}
      ports:
        - port: {{ .Values.service.port }}
          targetPort: http
          protocol: TCP
          name: http
      selector:
        {{- include "sync-server.selectorLabels" . | nindent 4 }}
    

    This is the complete directory structure now:

    ├─ bear-echo/
        ├─ Dockerfile
        ├─ app/
            ├─ main.py
            ├─ syncer.py
            ├─ requirements.txt
        ├─ chart/
            ├─ Chart.yaml
            ├─ values.yaml
            ├─ templates/
                ├─ _helpers.tpl
                ├─ all.yaml
            ├─ charts/
                ├─ sync-server/
                    ├─ Chart.yaml
                    ├─ values.yaml
                    ├─ templates/
                        ├─ _helpers.tpl
                        ├─ all.yaml
    

(Optional) Great Bear metadata

You should now have just about everything you need for your new Great Bear application that supports runtime configuration! There’s a bit more metadata that you can add to describe your application in the Great Bear Application Store. (For details, see gbear/appmetadata.yaml.)

Here is the kind of metadata you might want for your Bear Echo + Sync Server application.

# bear-echo/chart/gbear/appmetadata.yaml

displayName: Bear Echo (with Sync Server)
description: Helm chart for Bear Echo with Sync Server

icon: https://raw.githubusercontent.com/FortAwesome/Font-Awesome/6.1.2/svgs/solid/volume-high.svg

labels:
   kind: EXAMPLE

tags:
   - sample

configuration:
  - name: defaultEchoText
    key: config.defaultEchoText
    title: "Default Echo Text"
    description: "Text to be Echo'd before Runtime Updates"
    value: "default"
    type: String

  - name: apiKey
    key: syncserver.config.apiKey
    title: "API Key (Deployment Token)"
    description: "API Key generated from app-control dashboard"
    type: Secret

  - name: siteID
    key: syncserver.config.gbSiteID
    value: "{{ .GB_SITE_ID }}"
    type: Runtime

  - name: siteName
    key: syncserver.config.gbSiteName
    value: "{{ .GB_SITE_NAME }}"
    type: Runtime

  - name: controlServerURL
    key: syncserver.config.controlServerURL
    title: "Control Server URL"
    description: "The URL of the Application Control Service"
    value: "https://app-control.dev.gbear.scratch.eticloud.io"
    type: String

  - name: nodeIDs
    key: config.gbNodeIDs
    value: "{{ .GB_NODE_IDS | b64enc }}"
    # actually passed in as json
    type: Runtime
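
The b64enc/b64dec pair is there because the node IDs are passed in as a JSON string: encoding it as base64 presumably keeps it a single opaque token while it travels through the templating layers, and the chart template decodes it back (see the GB_NODE_IDS environment variable above). The following plain Python sketch illustrates the round trip; the node names and IDs are illustrative stand-ins, not Great Bear internals:

import base64
import json

# Illustrative stand-ins for the values Great Bear injects at deploy time.
node_ids = {"node-1": "id-aaa", "node-2": "id-bbb"}

# appmetadata.yaml side: {{ .GB_NODE_IDS | b64enc }}
encoded = base64.b64encode(json.dumps(node_ids).encode()).decode()

# chart template side: {{ .Values.config.gbNodeIDs | b64dec }}
decoded = json.loads(base64.b64decode(encoded).decode())
assert decoded == node_ids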
With the metadata added, the final directory structure is:

├─ bear-echo/
    ├─ Dockerfile
    ├─ app/
        ├─ main.py
        ├─ syncer.py
        ├─ requirements.txt
    ├─ chart/
        ├─ Chart.yaml
        ├─ values.yaml
        ├─ gbear/
            ├─ appmetadata.yaml
        ├─ templates/
            ├─ _helpers.tpl
            ├─ all.yaml
        ├─ charts/
            ├─ sync-server/
                ├─ Chart.yaml
                ├─ values.yaml
                ├─ templates/
                    ├─ _helpers.tpl
                    ├─ all.yaml

1.7.6 - Deploying an AI Model to Great Bear

This guide gives an example of how to configure the Wedge app to run an AI model on a node using the App Control UI. The guide assumes that you have already deployed Wedge to a site, and have access to the App Control UI. For more information about how to deploy Wedge, see Deployment.

You are going to:

  • run the MobileNet image classification model on a public RTSP stream,
  • use preprocessing to normalize the images extracted from the input video,
  • apply postprocessing to the model output to select the most likely class,
  • write the resulting inferences to a public MQTT broker, and
  • finally, connect to the public MQTT broker and subscribe to the real-time inferences.

Steps

  1. Open the App Control UI in your browser:

    https://app-control.greatbear.io

    and then select a target node from the available Wedge Sites.

  2. First you need to provide the URL of an RTSP stream that the Wedge application should connect to. The stream must be accessible to the node at the network level, such as a local Meraki camera feed. In this example you can use a publicly available RTSP stream to test your model deployment. Enter the following URL into the Input field:

    rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4

    Since this is a public RTSP video stream provided for testing purposes, it sometimes becomes unavailable. You can verify that it is online using a tool like VLC.

  3. Next, provide the URL of an MQTT broker and topic where the Wedge application should publish inferences. In this example you use the publicly available MQTT broker provided by HiveMQ. Enter the following URL into the Output field (replacing [unique_topic_name] with a suitable topic):

    mqtt://[unique_topic_name]@broker.hivemq.com:1883

    Please choose and remember a unique topic name for this demo deployment, since the public MQTT broker can have other users connecting to it and publishing messages.

  4. After defining the input and output configuration for the Wedge app, specify the URL of a model that should be downloaded and run. Wedge can download ONNX and TFLite models from direct links, such as a model hosted on GitHub, or from an AWS or MinIO S3 bucket. In this example you can use the publicly available MobileNet ONNX model, so enter the following in the Model field:

    https://github.com/onnx/models/blob/main/vision/classification/mobilenet/model/mobilenetv2-7.onnx

  5. As the MobileNet model expects input images to be normalized in a standard way, select Preprocess and define the following values:

    • Resizing Height & Width is an important preprocessing step in computer vision, as models are often trained with images of a specific size. To make the most of your model’s capabilities, choose the height and width that match the dimensions of the training dataset. For the MobileNet model this is 224 for both height and width.

    • Mean Normalization is an image preprocessing technique used to standardize RGB data around mean values. It brings different data sources into the same range and, as a result, enhances the performance of model inference. For the MobileNet model the expected mean normalization input is 0.485, 0.456, 0.406.

    • Standard Deviation is another technique for standardizing data: the values are centered around the mean with a unit standard deviation, a number that describes how spread out the values are. For the MobileNet model the expected standard deviation input is 0.229, 0.224, 0.225.

  6. As the MobileNet model outputs scores for each of the 1000 classes of ImageNet, you can apply standard postprocessing to obtain an image classification. Select Postprocess to calculate the softmax probability scores for each class and then match the most probable one to a list of class names (see the sketch after this step). You can specify the class names file to be used in the Classes field. In this example, use:

    https://github.com/onnx/models/blob/main/vision/classification/synset.txt
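
    Conceptually, the preprocessing and postprocessing configured above amount to the following. This is a minimal numpy sketch for illustration, not Wedge's actual code; it assumes the frame has already been resized to a 224x224 RGB array:

    import numpy as np

    MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

    def preprocess(frame):
        """frame: 224x224x3 RGB array with values in 0-255."""
        img = frame.astype(np.float32) / 255.0  # scale to [0, 1]
        return (img - MEAN) / STD               # per-channel mean/std normalization

    def postprocess(scores, class_names):
        """scores: raw model outputs for the 1000 ImageNet classes."""
        scores = np.asarray(scores, dtype=np.float32).ravel()
        e = np.exp(scores - scores.max())        # numerically stable softmax
        probs = e / e.sum()
        return class_names[int(probs.argmax())]  # most probable class name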

  7. Once all the values above have been set, your configuration should look like the following: Final view of the browser page

    In this example, the model and class files are public and can be downloaded from GitHub; however, Wedge can also download files from AWS and MinIO S3 buckets. To use this approach, use the following format in the Model and Classes fields: https://[access_key]:[secret_key]@[host]:[port]/[bucket]/[file]

  8. Finally, select a node on which you wish to deploy your model, and click Update Content. You will see a success message indicating that the node has received the parameters, and in the list view the node’s configuration will have been updated.

  9. To view the results of your deployed model’s predictions, you can:

    • Use the publicly available MQTT Client from HiveMQ. To do so, complete the following steps:

      1. Navigate to the MQTT Client in your browser.

      2. Select Connect, then Add New Topic Subscription.

      3. Enter the same [unique_topic_name] that you used when deploying the MobileNet model to the node, then click Subscribe.

        If everything is set up correctly you should see prediction messages appearing in the browser.

        Node new configurations

    • Alternatively, install the Mosquitto package on your local machine, then open a terminal. Run the following command (replacing [unique_topic_name] with the topic used when deploying the MobileNet model to the node) to subscribe to the public broker and see prediction messages appearing in the terminal window:

      mosquitto_sub -h broker.hivemq.com -p 1883 -t [unique_topic_name]
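
      If you prefer a script to the mosquitto_sub command, the following minimal paho-mqtt subscriber is equivalent (a sketch; it assumes pip install paho-mqtt):

      import paho.mqtt.client as mqtt

      TOPIC = "[unique_topic_name]"  # the topic used when deploying the model

      def on_message(client, userdata, msg):
          # Print each inference message as it arrives.
          print(msg.payload.decode())

      client = mqtt.Client()
      client.on_message = on_message
      client.connect("broker.hivemq.com", 1883)
      client.subscribe(TOPIC)
      client.loop_forever()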

  10. In order to remove the model and all other configuration from a node, select the desired device and click the Remove Content button. The target device will disconnect from the input feed, and stop running the AI model.

1.7.7 - Creating a simple AI App

While the Wedge app allows you to run an AI model on a node using the App Control UI, you may wish to create your own AI app that includes custom pre- and post-processing code.

In this tutorial you will create a minimal AI application that:

  • connects to an RTSP stream,
  • runs an ONNX model for classifying the frames, and
  • writes the output to a topic of an MQTT Broker.

The model that we’ve chosen to use for this tutorial is the GoogleNet ONNX model, one of many models trained to classify images based on the 1000 classes of ImageNet.

Prerequisites

This tutorial assumes that:

  • You know the basics of Python and have pip installed.
  • You have Docker installed.
  • You have access to an RTSP video stream that provides the input images.

Create the application

To get started, you first need to set up a directory where you will create and store the tutorial files. So first create a new directory locally and name it minimal_ai.

The main file of the minimal AI application will do the heavy lifting:

  • connecting to an RTSP stream,
  • downloading and running an ONNX model for classifying the frames,
  • matching the inferences to a downloaded set of classes, and
  • writing the class names to an MQTT Broker topic.

It will take in three command line arguments:

  • the URL of the RTSP stream,
  • the URL of the MQTT Broker, and
  • the name of the MQTT topic.

The following is an example of this application. Download or copy it into the minimal_ai directory as minimal_ai_app.py.

# file: minimal_ai/minimal_ai_app.py

import sys
import rtsp
import onnxruntime as ort
import numpy as np
import paho.mqtt.client as mqtt
import requests
from preprocess import preprocess

if __name__ == '__main__':

    # python3 minimal_ai_app.py <url of RTSP stream> <url of MQTT Broker> <MQTT topic>

    if len(sys.argv) != 4:
        raise ValueError("This demo app expects 3 arguments but received %d" % (len(sys.argv) - 1))

    # Load in the command line arguments
    rtsp_stream, mqtt_broker, mqtt_topic = sys.argv[1], sys.argv[2], sys.argv[3]

    # Download the model
    model = requests.get('https://github.com/onnx/models/raw/main/vision/classification/inception_and_googlenet/googlenet/model/googlenet-12.onnx')
    open("model.onnx" , 'wb').write(model.content)
    
    sess_options = ort.SessionOptions()
    sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    sess_options.optimized_model_filepath = "optimized_model.onnx"

    session = ort.InferenceSession("model.onnx", sess_options)
    inname = [input.name for input in session.get_inputs()]

    # Download the class names
    labels = requests.get('https://raw.githubusercontent.com/onnx/models/main/vision/classification/synset.txt')
    open("synset.txt" , 'wb').write(labels.content)
    with open("synset.txt", 'r') as f:
        labels = [l.rstrip() for l in f]

    # Connect to the MQTT Broker
    mqtt_client = mqtt.Client()
    mqtt_client.connect(mqtt_broker)
    mqtt_client.loop_start()

    # Connect to the RTSP Stream
    rtsp_client = rtsp.Client(rtsp_server_uri = rtsp_stream)
    while rtsp_client.isOpened():
        img = rtsp_client.read()
        if img is not None:

            img = preprocess(img)
            preds = session.run(None, {inname[0]: img})
            pred = np.squeeze(preds)
            a = np.argsort(pred)[::-1]
            print(labels[a[0]])
            mqtt_client.publish(mqtt_topic, labels[a[0]])

    rtsp_client.close()
    mqtt_client.disconnect()

This simple application can almost run by itself, except you need to make sure that the input frames are preprocessed in the way that the model expects. In the case of the GoogleNet ONNX model, you can use the preprocess function available online. Therefore, next create the file preprocess.py in the minimal_ai directory, copy the preprocess function, and import numpy at the top of the file:

# file: minimal_ai/preprocess.py
# from https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet#obtain-and-pre-process-image

import numpy as np

# Pre-processing function for ImageNet models using numpy
def preprocess(img):
    '''
    Preprocessing required on the images for inference with mxnet gluon
    The function takes a loaded image and returns a processed tensor
    '''
    # Resize to the 224x224 input size the model expects
    img = np.array(img.resize((224, 224))).astype(np.float32)
    # Subtract the per-channel ImageNet means
    img[:, :, 0] -= 123.68
    img[:, :, 1] -= 116.779
    img[:, :, 2] -= 103.939
    # Swap the channel order from RGB to BGR
    img[:,:,[0,1,2]] = img[:,:,[2,1,0]]
    # Reorder from HWC to CHW and add a batch dimension (NCHW)
    img = img.transpose((2, 0, 1))
    img = np.expand_dims(img, axis=0)

    return img

Test the application

With the preprocessing file and function added, you can test this Python app locally with the following commands, replacing [your-rtsp-stream-url] with the public RTSP stream and [your-mqtt-topic] with a suitable topic:

For this tutorial we have chosen to use the public MQTT broker provided by broker.hivemq.com. Therefore, please choose a unique topic name for your demo setup, since the public MQTT broker can have other users also connecting to it and publishing messages.

# install the required packages
pip install onnxruntime rtsp numpy paho-mqtt requests

# run the demo application
python minimal_ai_app.py [your-rtsp-stream-url] broker.hivemq.com [your-mqtt-topic]

Running this command should download the model and class names, connect to the RTSP stream, and run the AI model on the frames after applying preprocessing. Based on the model predictions, the most likely image class from ImageNet should be printed in the terminal, and published to the MQTT Broker and topic.

Access the broker in the browser

Since we are using the public MQTT broker from HiveMQ you can view the results of your deployed model’s predictions with the online MQTT Client from HiveMQ. To do so, complete the following steps.

  1. Navigate to the MQTT Client in your browser.
  2. Select Connect, then Add New Topic Subscription.
  3. Enter the same value for [your-mqtt-topic] that you used when running minimal_ai_app.py, then click Subscribe.

Access the broker in the terminal

Alternatively, you can install the Mosquitto package on your local machine, then open a terminal. By running the command below, and replacing [your-mqtt-topic] with the topic used when running minimal_ai_app.py, you will subscribe to the public broker and should see prediction messages appearing in the terminal window.

mosquitto_sub -h broker.hivemq.com -p 1883 -t [your-mqtt-topic]

Create the Docker Image

Now that your simple Python application is hopefully running locally, you can containerise it using a Dockerfile. The following example Dockerfile:

  • defines an image based on python:3.9,
  • runs pip install to install the required packages, and
  • defines the command to run when the container starts. In this case, the command is similar to the one used above to test locally, but the command line arguments are provided by environment variables that will be set when running the container.

  1. Save the following in the minimal_ai directory as a file called Dockerfile:

    # file: minimal_ai/Dockerfile
    
    FROM python:3.9
    
    WORKDIR /usr/src/app
    
    RUN apt-get update && \
        apt-get install ffmpeg libsm6 libxext6 -y
    
    RUN pip install --no-cache-dir --upgrade pip && \
        pip install --no-cache-dir onnxruntime rtsp numpy paho-mqtt requests
    
    COPY *.py .
    
    CMD python -u minimal_ai_app.py $RTSP_STREAM $MQTT_BROKER $MQTT_TOPIC
  2. With the Dockerfile created, you can build a local version of the image and name it minimal-ai-app by running the following command in the minimal_ai directory:

    docker build -t minimal-ai-app .
    
  3. Once the image has been created, you can run it locally using docker to check that everything was defined correctly. The following command runs the minimal_ai image (replace [your-rtsp-stream-url] with the public RTSP stream and [your-mqtt-topic] with a suitable topic).

    Please choose a unique topic name for your demo setup, since the public MQTT broker can have other users also connecting to it and publishing messages.

    docker run -d \
        -e RTSP_STREAM=[your-rtsp-stream-url] \
        -e MQTT_BROKER=broker.hivemq.com \
        -e MQTT_TOPIC=[your-mqtt-topic] \
        --name minimal-ai-container minimal-ai-app
    

Create the Great Bear Application Package (GBAP)

  1. With the Dockerfile created, you can now move on to creating the GBAP, which is used to publish your application to the Great Bear Application Store for subsequent deployment to Great Bear managed sites through the Great Bear Dashboard.

  2. Follow the steps to install and configure the Great Bear Packaging SDK tool.

  3. Use the SDK tool to create a new GBAP instance within your minimal_ai application directory.

    gbear app create minimal_ai/GBAP
    

    Enter the chart name, name of the application, and the description of the application when prompted:

    Creating a GBAP with required fields....
    Enter the chart name:
    minimal-ai-app
    Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
    
    Enter the application name:
    Minimal AI App
    Enter the application description:
    A minimal AI application running on Great Bear
    Successfully created a GBAP in path minimal_ai/GBAP
    
  4. The SDK tool has now created a boilerplate GBAP filesystem tree:

    minimal_ai/GBAP
    ├── Chart.yaml
    ├── gbear
    │   └── appmetadata.yaml
    

Author the GBAP Assets

The GBAP contains the following two fundamental elements:

  1. A Helm Chart to describe a set of Kubernetes resources used to deploy the application at Great Bear managed sites.
  2. Great Bear specific application metadata (gbear/appmetadata), containing:
    • Properties used to render the application in the Great Bear Application Store
    • Properties used to define application parameters which can be overridden when deploying the Helm Chart.

For further details on the GBAP structure, see Great Bear Application Package Guide

Extend the GBAP Helm Chart

The final things that you need to build are the elements of the Helm Chart for deploying the container image to Great Bear sites as an application. Helm Charts describe Kubernetes applications and their components: rather than creating YAML files for every application, you can provide a Helm chart and use Helm to deploy the application for you. Therefore, the next steps are to create a very basic Helm Chart that will contain a template for the Kubernetes resource that will form your application, and a values file to populate the template placeholder values.

A Chart.yaml file is required for any Helm Chart, and contains high level information about the application (you can find out more in the Helm Documentation). You can set a number of Great Bear specific parameters in the Helm chart of your application.

  1. The Packaging SDK has created the root chart file Chart.yaml within the GBAP filesystem tree:

    apiVersion: v2
    name: minimal-ai-app
    version: 0.0.1
    
  2. The next step is to create a directory called templates inside the GBAP directory; this is where you will create the template files for your application. When using Helm to install a chart to Kubernetes, the template rendering engine populates the files in the templates directory with the desired values for the deployment.

  3. In the new templates directory, create a file called _helpers.tpl, which will hold some template helpers that you can re-use throughout the chart. Your _helpers.tpl file should contain the following helpers:

    # file: minimal_ai/GBAP/templates/_helpers.tpl
    
    {{/*
    Expand the name of the chart.
    */}}
    {{- define "app.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "app.fullname" -}}
    {{- if .Values.fullnameOverride }}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- $name := default .Chart.Name .Values.nameOverride }}
    {{- if contains $name .Release.Name }}
    {{- .Release.Name | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
    {{- end }}
    {{- end }}
    {{- end }}
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "app.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Common labels
    */}}
    {{- define "app.labels" -}}
    helm.sh/chart: {{ include "app.chart" . }}
    {{ include "app.selectorLabels" . }}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    {{- end }}
    
    {{/*
    Selector labels
    */}}
    {{- define "app.selectorLabels" -}}
    app.kubernetes.io/name: {{ include "app.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end }}
    
    {{/*
    Create the name of the service account to use
    */}}
    {{- define "app.serviceAccountName" -}}
    {{- if .Values.serviceAccount.create }}
    {{- default (include "app.fullname" .) .Values.serviceAccount.name }}
    {{- else }}
    {{- default "default" .Values.serviceAccount.name }}
    {{- end }}
    {{- end }}
    
  4. The next file you need in the templates directory is configmap.yaml. ConfigMaps can be used to store key-value data that application pods can then consume as environment variables, command-line arguments, or as configuration files in a volume.

    Inside the templates directory create a file called configmap.yaml, with the following content:

    # file: minimal_ai/GBAP/templates/configmap.yaml
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Values.name }}
    data:
      RTSP_STREAM: {{ .Values.config.RTSP_STREAM | quote }}
      MQTT_BROKER: {{ .Values.config.MQTT_BROKER | quote }}
      MQTT_TOPIC: {{ .Values.config.MQTT_TOPIC | quote }}
  5. The final file that you need to create inside the templates directory is minimal-ai-app-deployment.yaml; this is where you will create the template file for the application and describe the application's desired state.

    Create the file minimal-ai-app-deployment.yaml inside the templates directory, with the following content:

    # file: minimal_ai/GBAP/templates/minimal-ai-app-deployment.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "app.fullname" . }}
      labels:
        {{- include "app.labels" . | nindent 4 }}
    spec:
      replicas: {{ .Values.replicaCount }}
      selector:
        matchLabels:
          {{- include "app.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "app.selectorLabels" . | nindent 8 }}
        spec:
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
          containers:
            - name: {{ .Values.name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: RTSP_STREAM
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "RTSP_STREAM"
                - name: MQTT_BROKER
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "MQTT_BROKER"
                - name: MQTT_TOPIC
                  valueFrom:
                    configMapKeyRef:
                      name: {{ .Values.name }}
                      key: "MQTT_TOPIC"
          restartPolicy: Always
    

    The parts of the deployment file that are enclosed in {{ and }} blocks, such as {{ .Values.name }}, are called template directives. The template directives are populated by the template rendering engine, and in this case look up information from the values.yaml file, which contains the default values for a chart.

  6. The final Helm component that you have to create is the values.yaml file, which you should create in the GBAP directory. Inside the values.yaml file you need to define the default values for the template directives in the deployment file. It also defines the image repository location; make sure to replace <tenant-id> in the repository value with the tenant ID communicated to you by Cisco. Copy the following content into the file:

    # file: minimal_ai/GBAP/values.yaml
    
    replicaCount: 1
    
    name: "minimal-ai"
    
    image:
      repository: repo.greatbear.io/<tenant-id>/minimal-ai
      tag: 0.0.1
      pullPolicy: Always
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    nameOverride: ""
    fullnameOverride: "minimal-ai"
    
    # Config parameters described in application metadata
    # and their values will be put here by the GB-deployer
    config:
      # The url of the RTSP stream
      RTSP_STREAM: ""
      # The host of the MQTT Broker
      MQTT_BROKER: ""
      # The topic for inferences
      MQTT_TOPIC: ""
    

Customise the Great Bear Application Metadata

  1. The SDK tool has generated a boilerplate gbear/appmetadata.yaml file. Extend the application metadata file to include the following properties, allowing the user to set their own values for RTSP_STREAM, MQTT_BROKER, and MQTT_TOPIC when they deploy the app via the Application Store.

    name: Minimal AI App
    description: A minimal AI application running on Great Bear
    configuration:
      - name: RTSP_STREAM
        title: The url of the RTSP stream
        type: String    
      - name: MQTT_BROKER
        title: The host of the MQTT Broker
        type: String
      - name: MQTT_TOPIC
        title: The topic for inferences
        type: String

Validate the Application Package

By following these steps you should now have a complete application, including the python code that connects to an RTSP stream, runs an ONNX model for classifying the frames, and writes the output to a topic of an MQTT Broker, along with the associated Dockerfile and Helm Chart.

  1. Before moving on to building and uploading the docker image and publishing the Helm Chart to Great Bear, you can first test that everything is set up correctly using the command:

    helm install minimal-ai GBAP/ --dry-run --debug
    
  2. Now that you have finished authoring your GBAP assets and verified that the embedded Helm Chart works, use the SDK tool to perform a final validation of the complete GBAP contents:

    gbear app validate minimal_ai/GBAP
    

    Expected output:

    Validating Package: charts/
    -- Validating Application Metadata...
    -- Validating Chart Dependencies...
    -- Validating Kubernetes Resource Labels...
    Validation Complete:
    Application Metadata: Passed (Errors: 0, Warnings: 0)
    Chart Dependencies: Passed (Errors: 0, Warnings: 0)
    Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
    

For further details on GBAP validation, see Validate an Application Package

Assuming that the command above didn’t give errors, you can move on to uploading the Minimal AI app to Great Bear, as described in Publish an Application Package.

1.7.8 - Developing and Publishing a Flame App for Great Bear

This guide gives an example of how to develop the Flame application for Great Bear. Flame is an open source platform from Cisco that enables developers to compose and deploy Federated Learning training workloads easily.

By building a Flame app and running at the edge on Great Bear, you can:

  • train global machine learning models
  • keep data at the edge site, and
  • preserve privacy.

The Flame app itself runs as a ‘Flamelet’ in non-orchestration mode, which means that it reaches out to the Flame Control Plane and registers itself for a Federated Learning task.

In this tutorial you are going to clone the Flame open source repository, and build the associated docker image. You will then package the docker image with a Helm chart and add the relevant data to upload and run this as a Great Bear application.

Prerequisites

This guide assumes that:

  • You either have access to an already deployed instance of the Flame Control Plane running in the cloud, or that you have deployed your own Flame Control Plane.
  • You are familiar with using docker and creating docker images, creating Helm charts for Kubernetes, and deploying applications to Kubernetes clusters using Helm charts.
  • You have docker and helm installed.
  • You have the required access credentials for Great Bear. For more information on the development process for Great Bear, see Creating, Publishing and Deleting Applications.

Build and upload the Flame docker image

  1. The first step is to clone the Flame open source repository:

    git clone https://github.com/cisco-open/flame.git
    
  2. Once the repository has been cloned, navigate into the fiab directory inside flame:

    cd flame/fiab
    
  3. Next, build the Flame docker image with the command:

    ./build-image.sh
    
  4. After the docker image is built, check that the image has been created by running:

    docker images | head -2
    

    Which should give output similar to the following:

    REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
    flame        latest   2b53cc1e6644   1 minute ago   4.27GB
    
  5. In order to upload this image to Great Bear, you first need to tag it by running the following command, replacing <tenant-id> with the Application Store Tenant ID provided to you by Cisco:

    docker tag flame repo.greatbear.io/<tenant-id>/flame:0.0.1
    
  6. Log in to the Great Bear system using docker. To do this, run the following command, replacing <tenant-registry-username>, <tenant-registry-password>, and <tenant-registry-url> with the credentials provided to you by Cisco:

    docker login --username=<tenant-registry-username> --password=<tenant-registry-password> <tenant-registry-url>
    

    Alternatively, you can exclude the username and password from this command, and enter them when prompted.

  7. Push this image to the Great Bear repository. Run the following command, replacing <tenant-registry-url>, <application-name>, and <version-number> with the values used in the previous step:

    docker push <tenant-registry-url>/<application-name>:<version-number>
    

    In this case, use flame:0.0.1 for the application name and version.

Create the Great Bear Application Package (GBAP)

  1. With the docker image built and pushed, you can now move on to creating the GBAP, which is used to publish your application to the Great Bear Application Store for subsequent deployment to Great Bear managed sites through the Great Bear Dashboard.

  2. Follow the steps to install and configure the Great Bear Packaging SDK tool.

  3. In a location of your choice create a new directory called flame-app where you will create and save the GBAP.

  4. Use the SDK tool to create a new GBAP instance within your flame-app application directory.

    gbear app create flame-app/GBAP
    

    Enter the chart name, name of the application, and the description of the application when prompted:

    Creating a GBAP with required fields....
    Enter the chart name:
    flame-app
    Enter the chart version [SemVer2 MAJOR.MINOR.PATCH], (leave empty for default: 0.0.1):
    
    Enter the application name:
    Flame App
    Enter the application description:
    Flame is a platform that enables developers to compose and deploy Federated Learning training workloads easily.
    Successfully created a GBAP in path flame-app/GBAP
    
  5. The SDK tool has now created a boilerplate GBAP filesystem tree:

    flame-app/GBAP
    ├── Chart.yaml
    ├── gbear
    │   └── appmetadata.yaml
    

Author the GBAP Assets

The GBAP contains the following two fundamental elements:

  1. A Helm Chart to describe a set of Kubernetes resources used to deploy the application at Great Bear managed sites.
  2. Great Bear specific application metadata (gbear/appmetadata), containing:
    • Properties used to render the application in the Great Bear Application Store
    • Properties used to define application parameters which can be overridden when deploying the Helm Chart.

For further details on the GBAP structure, see Great Bear Application Package Guide

Extend the GBAP Helm Chart

The final things that you need to build are the elements of the Helm Chart for deploying the container image to Great Bear sites as an application. Helm Charts describe Kubernetes applications and their components: rather than creating YAML files for every application, you can provide a Helm chart and use Helm to deploy the application for you. Therefore, the next steps are to create a very basic Helm Chart that will contain a template for the Kubernetes resource that will form your application, and a values file to populate the template placeholder values.

A Chart.yaml file is required for any Helm Chart, and contains high level information about the application (you can find out more in the Helm Documentation). You can set a number of Great Bear specific parameters in the Helm chart of your application.

  1. The Packaging SDK has created the root chart file Chart.yaml within the GBAP filesystem tree:

    apiVersion: v2
    name: flame-app
    version: 0.0.1
    

Create the Helm values file

  1. Inside the flame-app/GBAP directory, create a file called values.yaml. This file contains the default values for your Flame application. It also defines the data needed for the application to pull the image, such as the repository location and pull secrets. When creating this file, replace <tenant-registry-url> in the repository value with the URL provided to you by Cisco.

    # flame-app/GBAP/values.yaml
    
    replicaCount: 1
    
    config:
      CONTROL_PLANE_API_SERVER_URL: ""
      CONTROL_PLANE_NOTIFIER_URL: ""
      FLAME_TASK_ID: ""
      FLAME_TASK_KEY: ""
    
    image:
      repository: <tenant-registry-url>/flame
      pullPolicy: Always
      tag: 0.0.1
    
    imagePullSecrets:
      - name: gbear-harbor-pull
    
    nameOverride: ""
    fullnameOverride: "flame-app"
    

Create the Helm template files

The next step is to create the template files that will:

  • create your Kubernetes deployment,
  • define the service endpoint for the deployment, and
  • provide some template helpers that are re-used throughout the chart.

Complete the following steps.

  1. Inside the flame-app/GBAP directory, create a new directory called templates.

  2. In the new templates directory, create a file called _helpers.tpl, which will hold some template helpers that you can re-use throughout the chart. Your _helpers.tpl file should contain the following helpers:

    # flame-app/GBAP/templates/_helpers.tpl
    
    {{/*
    Expand the name of the chart.
    */}}
    {{- define "app.name" -}}
    {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Create a default fully qualified app name.
    We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
    If release name contains chart name it will be used as a full name.
    */}}
    {{- define "app.fullname" -}}
    {{- if .Values.fullnameOverride }}
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- $name := default .Chart.Name .Values.nameOverride }}
    {{- if contains $name .Release.Name }}
    {{- .Release.Name | trunc 63 | trimSuffix "-" }}
    {{- else }}
    {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
    {{- end }}
    {{- end }}
    {{- end }}  
    
    {{/*
    Create chart name and version as used by the chart label.
    */}}
    {{- define "app.chart" -}}
    {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
    {{- end }}
    
    {{/*
    Common labels
    */}}
    {{- define "app.labels" -}}
    helm.sh/chart: {{ include "app.chart" . }}
    {{ include "app.selectorLabels" . }}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    {{- end }}
    
    {{/*
    Selector labels
    */}}
    {{- define "app.selectorLabels" -}}
    app.kubernetes.io/name: {{ include "app.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    {{- end }}
    
    {{/*
    Create the name of the service account to use
    */}}
    {{- define "app.serviceAccountName" -}}
    {{- if .Values.serviceAccount.create }}
    {{- default (include "app.fullname" .) .Values.serviceAccount.name }}
    {{- else }}
    {{- default "default" .Values.serviceAccount.name }}
    {{- end }}
    {{- end }}
    
  3. In the templates directory, create the deployment.yaml file, which provides a basic manifest for creating the Kubernetes deployment of your Flame application. It sets the FLAME_TASK_ID and FLAME_TASK_KEY environment variables from the user-defined input provided when deploying the application, and runs the command that starts the Flame app in non-orchestration mode with CONTROL_PLANE_API_SERVER_URL and CONTROL_PLANE_NOTIFIER_URL as arguments. It also defines and mounts a volume where the data from the Flame training can be placed. In this case, the deployment.yaml should contain:

    # flame-app/GBAP/templates/deployment.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "app.fullname" . }}
      labels:
        {{- include "app.labels" . | nindent 4 }}
    spec:
      replicas: 1
      selector:
        matchLabels:
          {{- include "app.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "app.selectorLabels" . | nindent 8 }}
        spec:
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 8 }}
          {{- end }}
          containers:
            - name: {{ .Chart.Name }}
              volumeMounts:
                - mountPath: /flame/data
                  name: volume-flamelet
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: Always
              env:
                - name: FLAME_TASK_ID
                  value: {{ .Values.config.FLAME_TASK_ID | quote }}
                - name: FLAME_TASK_KEY
                  value: {{ .Values.config.FLAME_TASK_KEY | quote }}
              command: ["/usr/bin/flamelet", "-a", "{{ .Values.config.CONTROL_PLANE_API_SERVER_URL }}", "-n", "{{ .Values.config.CONTROL_PLANE_NOTIFIER_URL }}", "--insecure"]
          volumes:
            - name: volume-flamelet
              hostPath:
                path: /home/vagrant/data
          restartPolicy: Always
    status: {}
    

Customise the Great Bear Application Metadata

  1. The SDK tool has generated a boilerplate gbear/appmetadata.yaml file. Extend the application metadata file to include the following properties, allowing the user to set their own values for CONTROL_PLANE_API_SERVER_URL, CONTROL_PLANE_NOTIFIER_URL, FLAME_TASK_ID, and FLAME_TASK_KEY when they deploy the app via the Application Store.

    title: Flame App 
    description: Flame is a platform that enables developers to compose and deploy Federated Learning training workloads easily.
    icon: https://github.com/cisco-open/flame/raw/main/docs/images/logo.png
    configuration:
      - name: CONTROL_PLANE_API_SERVER_URL
        title: The URL or IP address to connect to the API Server component in the Flame Control Plane
        type: String
      - name: CONTROL_PLANE_NOTIFIER_URL
        title: The URL or IP address to connect to the Notifier component in the Flame Control Plane (Cloud Cluster)
        type: String
      - name: FLAME_TASK_ID
        title: A Task ID is linked to a Job ID and is generated when a job has been started
        type: String
      - name: FLAME_TASK_KEY
        title: A Task Key is an ID that the user can choose when deploying the flame app
        type: String
    

Validate the Application Package

  1. By now you should have a complete application, and you can test that everything is set up correctly by running the following two commands in the flame-app/GBAP directory:

    helm lint
    
    helm install flame-app . --dry-run --debug
    

    Assuming everything is set up correctly and the previous helm commands do not report any errors, you can now move on to validating the complete GBAP structure.

  2. Now that you have finished authoring your GBAP assets and verified that the embedded Helm Chart works, use the SDK tool to perform a final validation of the complete GBAP contents:

    $ gbear app validate flame-app/GBAP
    Validating Package: charts/
    -- Validating Application Metadata...
    -- Validating Chart Dependencies...
    -- Validating Kubernetes Resource Labels...
    Validation Complete:
    Application Metadata: Passed (Errors: 0, Warnings: 0)
    Chart Dependencies: Passed (Errors: 0, Warnings: 0)
    Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
    

For further details on GBAP validation, see Validate an Application Package.

Publish to the Great Bear Application Store

Assuming that the command above didn’t give errors, you can move on to publishing the Flame app to Great Bear, as described in Publish an Application Package.

Once published, the Flame application will be available in the Application Store:

Flame App Store

1.8 - Reference

This section contains various types of references which are best used when you have already learned the basics of using Great Bear.

  • The API Reference Guide gives an overview about the use of the Great Bear API and offers a comprehensive list of available API endpoints.
  • The Great Bear Application Package Guide gives an overview about the structure of a Great Bear Application Package and the specification of the metadata files.

1.8.1 - API Reference Guide

The Great Bear Public API can be accessed via the following endpoint:

https://api.greatbear.io/api/v1/

In order to use the API you will need to specify an API key in the x-id-api-key HTTP header.

To access the API key, login to the Great Bear Dashboard, select the top right Profile icon, then click Copy API Key.

The Profile menu

As an example, the command below shows how to GET the sites managed by Great Bear via the API:

curl -H "x-id-api-key: ${GBEAR_API_KEY}" https://api.greatbear.io/api/v1/sites

The following API endpoints are available:

1.8.1.1 - Location field of a Site

The location field of Site is a generic JSON-formatted object with the following fields. The location field, and its internal fields, are all optional. Omit them if not needed.

Site location schema

  • placeName (string): Full name or address of the place. Example: Route De Barneville Carteret, 50580 Portbail, France
  • city (string): City of the place. Example: Portbail
  • country (string): Country of the place. Example: France
  • latitude (float): Latitude of the place. Example: 49.351531
  • longitude (float): Longitude of the place. Example: -1.71688

All fields are optional.

Samples

The following are all valid examples.

"location": {
    "city": "Portbail",
    "country": "France",
    "latitude": 49.351531,
    "longitude": -1.71688,
    "placeName": "Route De Barneville Carteret, 50580 Portbail, France"
},
"location": {
    "placeName": "My place",
    "latitude": 39.3812,
    "longitude": -97.9222,
}
"location": {
    "placeName": "My place",
    "city": "Roma",
    "country": "Italia",
    "latitude": 41.9,
    "longitude": 12.5,
}
"location": {
    "placeName": "My place",
    "city": "Roma",
    "country": "Italia",
}
"location": {
    "placeName": "My place",
    "city": "Roma",
}
"location": {
    "placeName": "My place",
    "country": "Italia",
}
"location": {
    "latitude": 39.3812,
    "longitude": -97.9222
}
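
If you build these payloads programmatically, a helper like the following keeps only the fields you actually set, since all of them are optional. This is a plain Python sketch for illustration, not part of any Great Bear SDK:

import json

ALLOWED = {"placeName", "city", "country", "latitude", "longitude"}

def make_location(**fields):
    # Keep only known, non-empty location fields.
    return {k: v for k, v in fields.items() if k in ALLOWED and v is not None}

payload = {"location": make_location(placeName="My place", latitude=39.3812, longitude=-97.9222)}
print(json.dumps(payload, indent=2))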

1.8.2 - Great Bear Application Package Guide

Great Bear Application Packages are an extended version of regular Helm charts commonly used for describing a Kubernetes-native application.

A Great Bear application uses a combination of metadata from the standard Helm Chart.yaml file and the Great Bear specific description file called appmetadata.yaml contained in the gbear directory.

The best practices of getting started with creating an application package are described in the Create an Application Package section, while this section focuses on the syntax, semantics, and options of the packaging format.

Package structure

A simple Great Bear Application Package consists of the following files:

myPath
├── Chart.yaml
├── gbear
│   └── appmetadata.yaml
├── templates
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── service.yaml
└── values.yaml

The most important files within an application package are the following:

  • Chart.yaml: The basic application metadata in the standard Helm format.
  • gbear/appmetadata.yaml: Great Bear specific metadata.
  • templates/: Directory of standard Helm templates for the Kubernetes resources that define your application.
  • charts/: Directory of Helm charts that are used as embedded dependencies.
  • values.yaml: The default configuration values (as in standard Helm format).

The Helm best practice of including a README.md and LICENSE file in the package root still applies, but Great Bear ignores these files.

Chart.yaml

The Chart.yaml file is a mandatory part of Great Bear Application packages. The following fields are required:

  • apiVersion: Always v2. Example: v2
  • name: The name of the Great Bear Application Package (the middle section of the application identifier, which is constructed like <tenant-name>/<application-name>/<version>). Example: my-chart
  • version: The version number of the Application Package. Versions must follow the SemVer 2 specification. Example: 0.0.1
  • description: Short description of the application. This is shown in the Application Store if no description is provided in the appmetadata.yaml. Example: Demo application.
  • dependencies: List of other charts to install as part of this package. Example: []

For details, see the Helm Documentation.

gbear/appmetadata.yaml

The other mandatory part of a Great Bear Application Package is the appmetadata.yaml file within the gbear folder. This file describes the Great Bear specific properties of the application.

Required fields

  • displayname: Display name of the application in the application store.
  • description: Description of the application shown in the Application Store.

For example:

displayname: Bear Echo
description: A simple HTTP echo service, based on Python Flask.

App metadata in the Application Store

Optional fields

  • tags: list of free-form tags used by the Great Bear Application Store to group available applications. For example: [machine learning, image processing].

  • architectures: List of architectures supported like [amd64, arm64v8].

  • icon: URL of icon to use in the Application Store. The URL should point to a publicly available 1:1 aspect ratio PNG file. For example:

    title: Bear Echo
    description: A simple HTTP echo service, based on Python Flask.
    icon: https://raw.githubusercontent.com/FortAwesome/Font-Awesome/5.15.4/svgs/solid/tools.svg 
    
  • configuration: List of configuration fields that are supplied to the application when it’s deployed. Each field is defined with an object (dictionary). For details, see User-configurable parameters.

User-configurable parameters

You can request the user to provide parameters for your application when deploying the application from the Application Store. For example, that way you can request an authentication token for the application.

Use the configuration field of the gbear/appmetadata.yaml file to specify the parameters that the user can configure when deploying your application. For example:

...
configuration:
  - name: MY_CONFIGURATION_PARAMETER
    title: This label is displayed to the user
    type: String

Each entry in the configuration list can contain the following keys:

  • name: The identifier of the field.
  • key: A dot-separated name that determines where the value of this field is added to the Helm values, like postgres.image.tag. Defaults to config.{name} if omitted (see the sketch after this list).
  • title: The label of the parameter to be displayed on the Great Bear dashboard when deploying the application.
  • description: The description of the parameter to be displayed on the Great Bear dashboard when deploying the application.
  • value: The default value of the parameter, the type depends on the correlating type field below. For example: a number for NodePort value: 31777, or a string for other types.
  • type: The type of the parameter. The type determines how the parameter input looks on the Great Bear UI. For details, see Parameter types. Default value: String.
  • choices: List of options when the field is rendered as a toggle or a dropdown. For details, see Parameter types.
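
To make the key behaviour concrete, the following sketch shows how a dot-separated key expands into the nested Helm values structure (illustrative only, not the Great Bear deployer's actual code):

def set_by_key(values, key, value):
    # Walk, and create if needed, the nested dictionaries named by the key.
    node = values
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return values

values = {}
set_by_key(values, "postgres.image.tag", "14.1")
# values == {"postgres": {"image": {"tag": "14.1"}}}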

The following example shows a NodePort input parameter with a default value:

...
configuration:
  - name: ACCESS_PORT
    title: Access HTTP port (30000-32767)
    type: NodePort
    value: 31777

Configuration parameter types

You can use the type and choices fields of a user-configurable parameter to determine how the input of the parameter will look on the Great Bear UI. The type and choices fields are mutually exclusive, so only one of them can be used for a parameter.

type has the following possible values:

  • String: A simple string, rendered as a text box. This is the default value.

  • Secret: A string, rendered as a password input, for example:

    Sample password input

  • NodePort: A port number in the 30000-32767 range.

  • Runtime: A “hidden” field which is computed during deployment based on the Go Template specified in the value field. No form field is displayed and no user input is possible, however, the template can use other field’s values and a few contextual variables injected during deployment. For details, see Runtime fields.

To display a dropdown, use the choices field instead of type, and list the options to display. If you list only the “on” and “off” options, a toggle button is displayed (choices: ["on", "off"]), for example:

Sample toggle button

The possible options should be YAML 1.1 strings, which can be easily ensured if you enclose each value in double quotes.

The following example renders a dropdown with three options, and a toggle button.

...
configuration:
  - name: EXAMPLE_DROPDOWN
    title: Example dropdown
    choices:
    - "Option 1"
    - "Option 2"
    - "Option 3"
  - name: EXAMPLE_TOGGLE
    title: Example toggle
    choices:
    - "on"
    - "off"

Runtime configuration fields

The Runtime configuration field type is a powerful tool for preparing a Helm values structure in a way that’s required by the Helm charts used, especially if you work with an existing Helm chart.

The value within the field definition is parsed at the time of deployment as a Go Template, which is the templating language used by Helm charts. The templates have access to the user-supplied values of the other input fields, as well as to several special variables that contain deployment information. Just like in Helm, you can also use the Sprig template functions.

The following special variables are available:

  • .GB_SITE_ID: Contains the Great Bear site ID the application is deployed to.
  • .GB_SITE_NAME: Contains the descriptive name of the Great Bear site the application is deployed to (name substituted at the time of deployment).
  • .GB_NODE_IDS: A string that contains a JSON-serialized object (also known as a dictionary), where the keys are the node names as seen by Kubernetes, and the values are the corresponding node IDs in Great Bear.
  • .GB_NODE_NAMES: A string that contains a JSON-serialized object (also known as a dictionary), where the keys are the node names as seen by Kubernetes, and the values are the corresponding descriptive node names in Great Bear (at the time of deployment).
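
For example, on a site with two nodes, .GB_NODE_NAMES might contain a string like the following (the node names shown here are hypothetical):

{"node-a1b2": "Front counter Pi", "node-c3d4": "Back-office server"}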

Example

The following example configuration shows how you can use the special variables and other fields in Runtime configuration field templates. Imagine that you create a Great Bear application from two existing Helm charts, service1 and service2. The services need to access each other with a shared credential, but the charts expect it in two different formats: service1 expects two separate fields, while service2 expects a single colon-separated string. In this example, the Site ID serves as the user name, and the user supplies an arbitrary password. The configuration section of our appmetadata.yaml file looks like the following:

configuration:
  - name: PASSWORD
    title: Password
    description: Shared secret for authentication between the two services.
    type: Secret
    key: service1.password

  - name: CHART1_USER
    type: Runtime
    key: service1.username
    value: "{{ .GB_SITE_ID }}"

  - name: CHART2_SECRET
    type: Runtime
    key: service2.secret
    value: "{{ .GB_SITE_ID }}:{{ .PASSWORD }}"

If the user inputs the following data at the time of deployment:

PASSWORD: "secure"

A values.yaml structure equivalent to the one below will be passed to the application deployment on site xxx-yyy:

service1:
  username: "xxx-yyy"
  password: "secure"
service2:
  secret: "xxx-yyy:secure"
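
For context, a template inside the hypothetical service2 chart could then consume this value through the standard Helm .Values object, along these lines (a sketch, not taken from an actual chart):

# templates/secret.yaml in the hypothetical service2 chart
apiVersion: v1
kind: Secret
metadata:
  name: service2-credentials
stringData:
  # Receives the colon-separated credential produced by the CHART2_SECRET Runtime field
  credential: {{ .Values.service2.secret | quote }}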

1.9 - Resources

1.9.1 - Getting Support

We really value any feedback that you have on how we can improve Great Bear. Feel free to contact the Great Bear Support Team with any suggestions that you have or bugs that you encounter - we’d love to hear from you!

If at some point you have an issue that’s not covered in this documentation, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.10 - Troubleshooting

This chapter gives you tips and pointers on troubleshooting your Great Bear nodes, sites, and applications.

1.10.1 - Node Errors

The following tips and commands can help you to troubleshoot problems with your Great Bear nodes.

  1. Check the status of the site and the status of the node. If the node is in the ERROR state, try rebooting the node.

  2. Make sure that the node is connected to the Internet.

    The node must be able to access the Internet on TCP port 443.
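
    One quick way to verify this is to make an HTTPS request to the Great Bear API endpoint listed in the Quick Reference (any reachable HTTPS site works as a test target):

    curl -sS -o /dev/null -w '%{http_code}\n' https://api.greatbear.io/api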

  3. Make sure the node is bootstrapped and runs without errors. To display the Great Bear agent logs of the node, open a terminal on the node, run the following command, and check the output for errors:

    sudo journalctl -fu agent
    
  4. For a more verbose output, you can modify the log level. This and other variables are set in the environment file for the agent:

    /etc/systemd/system/agent.service.env
    

    In this file amend the line:

    LOG_LEVEL=INFO
    

    To:

    LOG_LEVEL=DEBUG
    

    And then restart the agent:

    sudo systemctl restart agent
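
    To confirm that the agent restarted cleanly, check its status:

    sudo systemctl status agent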
    

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.10.2 - Site Errors

The following tips and commands can help you to troubleshoot problems with your Great Bear sites.

  1. Check the status of the site and the status of the site nodes. If any node is in the ERROR state, try rebooting the node.

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.10.3 - Application Errors

Once applications are deployed to Great Bear sites, they can appear in the dashboard in the following states:

  • No status received (code: NO_STATUS): The application deployment intent has been registered, but no status has been received yet. Wait for the application to be fully deployed and operational.
  • Running (code: RUNNING): The application deployment is running properly.
  • Application deployment is initializing (code: IN_PROGRESS): An operation is currently being performed on the application deployment. Wait until it’s finished.
  • Application deployment reports problems (code: ERROR): Check the application’s configuration and redeploy the app.
  • Application deployment failed (code: FAILED): The application wasn’t deployed. Check its configuration and redeploy the app.
  • Lost contact with application deployment (code: DORMANT): Check the network configuration of the app.

Since applications are deployed onto edge nodes, which may have limited internet connectivity, the deployment process can take a long time.

On the Great Bear dashboard, select the site, then the application to open the Application Overview page, where you can see more detailed information on the deployment status, properties, and parameters of the application.

Application in error mode

The following tips and steps can help you to troubleshoot problems with applications deployed to sites:

  1. In the case of No status received, it is often best to wait for the deployment to start and complete. If the application has been in this state for a long time, delete it and then deploy the application again.

  2. In the case of an Error, the application has encountered a serious error that prevents it from running. First check the configuration of the deployed application, then delete and redeploy the application.

  3. In the case of a Failed state, the deployment failed. First check the configuration of the deployment, then delete and redeploy the application.

  4. In the case of a Dormant state, the application deployment cannot be reached. Check the status of the site and the status of the site nodes. If the node is in the ERROR state, try rebooting the node.

  5. In the case of an In Progress state, Great Bear is currently executing an action on the application deployment. There’s no need to take any action.

  6. If the application is in a Running state but behaves unexpectedly or has become unresponsive, first try to redeploy the application. If that is unsuccessful, delete and then deploy the application again.

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.10.4 - Raspberry Pi

In case you encounter problems when running Great Bear on a Raspberry Pi, check the following sections.

Check operating system version

Make sure that you have installed one of the supported operating systems on your Raspberry Pi. To display the operating system, run:

(cat /etc/*-release | grep DESCR); uname -a; uname -m

The last line of the output should include aarch64.

Ensure each host has a unique name

Each node in Great Bear must have a unique name. If this isn’t the case, a variety of network-related issues may appear.

To change the hostname to a unique name:

sudo nano /etc/hostname

Change the name and save your changes.

Note: Changing the hostname takes effect only after rebooting the Raspberry Pi.
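
On systemd-based images, you can alternatively set the hostname with hostnamectl instead of editing the file (my-unique-node-1 is a placeholder for your chosen name), then reboot:

sudo hostnamectl set-hostname my-unique-node-1
sudo reboot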

Bad display resolution

xrandr can report an incorrect resolution on some Raspberry Pi 4 images because overscan is enabled by default. If your screen resolution seems to be incorrect, add the following line to the /boot/firmware/usercfg.txt file, then reboot the system.

# Disable compensation for displays with overscan
disable_overscan=1

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.10.5 - Packaging SDK Errors

Troubleshoot specific errors raised by the Packaging SDK tool when it operates on your Great Bear Application Package (GBAP).

GBAP Validation Errors

The in-line validation error messages of the tool aim to provide enough insight to resolve the related error. This section augments specific error messages with examples that help you understand and resolve the underlying problem.

Great Bear Application Metadata Errors

The packaging SDK tool uses a schema definition to parse and validate the gbear/appmetadata.yaml file within a GBAP.

Validation Errors

The following selected errors augment the in-line validation message with more detail and examples.

  • Error: The icon field must specify a valid http or https URL

    The icon field must specify a parsable http or https URL that resolves to either a PNG or SVG file. Valid examples:

    • http://www.example.com/icon.png
    • https://www.example.com/icon.svg
    • https://www.example.com/icon.svg?text=test

    Note: The tool doesn’t attempt to download the URL.

Application Configuration Items

The configuration property contains a list of configuration items related to application deployment parameters. The tool validates each configuration item and outputs error messages annotated with the line number of the configuration item that contains the error.

Application Configuration Runtime Item Error

The Runtime configuration item type contains a value field which is interpreted as a Go text template. The tool validates that the specified value is a syntactically correct text template, and that it contains substitutions that are semantically supported by the GB platform.

  • Error: [Line 53] The configuration item contains an invalid Runtime value format (Error: there are no template substitutions in the value).

    This specific error means that the configuration item declared at line 53 has a Runtime value which doesn’t contain valid template syntax with substitutions. For example, the following configuration within a GBAP gbear/appmetadata.yaml will cause this error:

    52 configuration:
    53    - name: RUNTIME_1
    54       type: Runtime
    55       value: 'no value substitutions or functions'
    

    Valid Example 1:

    configuration:
        - name: RUNTIME_1
          type: Runtime
          value: '{{ .GB_SITE_ID }}' # template syntax containing a substitution syntax for a supported GB runtime variable.
    

    Valid Example 2:

    configuration:
        - name: RUNTIME_1
          type: Runtime
          value: '{{ randInt 0 100  }}' # template syntax containing a substitution syntax using a supported function
    
  • Error: [Line 60] The configuration item contains an invalid Runtime value format (Error: unclosed definition).

    This specific error means that the configuration item declared at line 60 has a Runtime value which contains invalid template syntax: a declared substitution has not been closed. For example, the following configuration within a GBAP gbear/appmetadata.yaml will cause this error:

    59 configuration:
    60    - name: RUNTIME_1
    61       type: Runtime
    62       value: 'Invalid format {{ .GB_SITE_NAME }} {{.GB_SITE_ID'
    

    Corrected example:

    configuration:
        - name: RUNTIME_1
          type: Runtime
          value: 'Valid format {{ .GB_SITE_NAME }} {{ .GB_SITE_ID }}'
    
  • Error: [Line 65] The configuration item contains an invalid Runtime value format (Error: function \"myFunc\" not defined).

    This specific error means that the configuration item declared at line 65 has a Runtime value which contains a template function that is not supported by the GB platform (see the list of supported functions). For example, the following configuration within a GBAP gbear/appmetadata.yaml will cause this error:

    64 configuration:
    65    - name: RUNTIME_1
    66       type: Runtime
    67       value: 'Invalid func {{ .GB_SITE_NAME | myFunc }}'
    

    Corrected example:

    configuration:
        - name: RUNTIME_1
          type: Runtime
          value: 'Valid func {{ .GB_SITE_NAME | b64enc }}' # b64enc is a supported function
    
  • Error: [Line 53] The configuration item contains an invalid Runtime value format ({{.GB_ID}} variable doesn't exist, choose one of GB_NODE_IDS, GB_NODE_NAMES, GB_SITE_ID, GB_SITE_NAME).

    This specific error means that the runtime value contains a substituted variable which is not supported by the GB platform. A substitution variable must be either:

    • A common GB runtime variable: GB_NODE_IDS, GB_NODE_NAMES, GB_SITE_ID, GB_SITE_NAME
    • Another configuration item name specified in your configuration items list

    For example, the following configuration within a GBAP gbear/appmetadata.yaml will cause this error:

    52 configuration:
    53    - name: RUNTIME_1
    54       type: Runtime
    55       value: 'Invalid {{ .GB_ID }} value' # GB_ID is not known to GB
    

    Corrected Example:

    configuration:
        - name: NAME_1
          type: String
          title: "Enter your name"
        - name: RUNTIME_1
          type: Runtime
          value: 'Valid {{ .NAME_1 }} {{ .GB_SITE_ID }} value' # NAME_1 is defined in your config, GB_SITE_ID is a valid GB platform variable
    

GBAP Publishing Errors

The SDK tool performs a validation step before attempting to publish a GBAP; common errors from this step are described in the Validation Errors section.

Even a valid GBAP may encounter errors when connecting to the Great Bear platform API during publishing. The errors shown in the following sections are the result of running the gbear app publish myGBAPPath/ command.

Invalid host URL

Example error:

Validating Package: myGBAPPath/
-- Validating Application Metadata...
-- Validating Chart Dependencies...
***
-- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
Building Package Archive File...
Publishing Package Archive File: myGBAPPath/my-app-0.0.1.tgz
-- ERROR: Post "https://gbear.io/api/v1/applications": .... no such host
Error: failed to publish the package to the Great Bear platform

Solution: Open the SDK tool config file $HOME/.config/gbear/config.yaml in your editor of choice, or display it by running cat $HOME/.config/gbear/config.yaml

Ensure the host field is set to https://api.greatbear.io/api:

# Great Bear API endpoint to use (overridden by $GBEAR_HOST, defaults to https://api.greatbear.io/api)
host: https://api.greatbear.io/api
login:
  # apiKey is the API token that you can copy from the dashboard (overridden by $GBEAR_API_KEY)
  apiKey: <your api key>

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

Invalid API Key

Example error:

Validating Package: myGBAPPath/
-- Validating Application Metadata...
-- Validating Chart Dependencies...
***
-- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
Publishing Package Archive File: myGBAPPath/my-app-0.0.1.tgz
Publish Status: Failed - run `gbear login` to verify your registered API key
Error: failed to publish the package to the Great Bear platform. If the problem persists, please contact the Great Bear Support Team: great-bear-support@cisco.com

This means that your configured API Key is not known by the Great Bear server.

Solution:

Repeat the gbear login command, then retry publishing the package.
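
For example, using the same package path as in the output above:

gbear login
gbear app publish myGBAPPath/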

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

GBAP Conflict Error

Example Error:

Validating Package: myGBAPPath/
-- Validating Application Metadata...
-- Validating Chart Dependencies...
***
-- Validating Kubernetes Resource Labels...
Validation Complete:
Application Metadata: Passed (Errors: 0, Warnings: 0)
Chart Dependencies: Passed (Errors: 0, Warnings: 0)
Kubernetes Resource Labels: Passed (Errors: 0, Warnings: 0)
Building Package Archive File...
Publishing Package Archive File: myGBAPPath/my-app-0.0.1.tgz
Publish Status: Failed - 409 Conflict
Error: failed to publish the package to the Great Bear platform

This means that the GBAP version you are attempting to publish already exists in the Great Bear Application Store. A conflict error is encountered because published package versions are immutable.

Solution:

Follow the procedure to update an existing package application.
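
If the fix is to publish a new version, and assuming the package version is taken from the Helm chart’s Chart.yaml (as the my-app-0.0.1.tgz archive name suggests), the version bump might look like the following sketch:

# Chart.yaml (assumed source of the package version)
apiVersion: v2
name: my-app
version: 0.0.2 # bumped from 0.0.1; published versions are immutable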

If these steps do not help resolve your issue, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.

1.11 - Quick Reference

Important URLs

The following is a quick reference to the various services that you’ll make use of with Great Bear.

Dashboard - https://dashboard.greatbear.io

Observability - https://edge-monitoring.greatbear.io

Application Control UI - https://app-control.greatbear.io

Network connectivity

The preferred deployment scenario is direct access to the Internet. Refer to the documentation of your device platform for its specific connectivity requirements.

1.12 - Getting Support

We really value any feedback that you have on how we can improve Great Bear. Feel free to contact the Great Bear Support Team with any suggestions that you have or bugs that you encounter - we’d love to hear from you!

If at some point you have an issue that’s not covered in this documentation, then please contact the Great Bear administrator of your organization, or the Great Bear Support Team.