Container introduction

Unwired Edge Cloud provides container virtualization for far-edge devices including industrial routers, access points, and industrial compute boxes (SBCs, compute modules, COM Express modules, …).

This container virtualization allows arbitrary application workloads to run on those industrial devices in a secure and efficient manner, without any OS-level knowledge or development requirements.

The edge virtualization feature consists of:

  • an orchestration to deploy and run containers across a fleet of devices

  • an orchestration to run multiple containers on a single device via the container runtime

  • a container runtime that runs each container isolated from the rest of the system

Containers have several base features:

  • able to run arbitrary OCI containers (see limitations below)

  • isolation from the host OS and other container workloads

  • access to persistent storage

  • memory limits

  • advanced network configuration

    • allows interfacing with the LAN and WAN sides of routers at the same time

    • interfacing with multiple LAN interfaces

    • automatically generated subnets for routed applications or roaming

    • automatic LAN and WAN network interface configuration

  • per-instance config maps for configuring each container instance

  • time good status (RTC and system time initialized and valid)

  • access to local physical interfaces (serial bus, CAN, GPUs, neural processing engines, GPIOs, system management controllers)

  • access to all network uplink options of the Unwired Edge Cloud OS, including transparently bonded uplinks spanning multiple arbitrary modems or uplink interfaces (CloudLink)

  • automatically injected Unwired Edge Cloud API keys with specific roles

  • monitoring through Prometheus metrics (up to 1 MB scraped per container)

Containers are fully lifecycle-managed by the Unwired Edge Cloud:

  • providing a new container version will automatically deploy that version to all affected devices

  • container rollouts are monitored for health and rolled back automatically on failure

  • lifecycle of a container application is separate from the host OS lifecycle

Limitations:

  • the source image arch must match the target arch

    • currently supported architectures are amd64, armhf, arm64, and ppc64le

  • OCI containers are started with tini as pid 1, so they cannot launch an init system that requires pid 1 for itself (e.g. S6 Overlay is not supported)

  • /run is mounted as a tmpfs at container startup (so its content is lost on container restart; this also overrides anything provided under /run in the original image)

  • busybox reboot does not work and leads to a stopped container. To properly reboot the container, busybox reboot -f must be used.

    • note that Alpine Linux images use busybox to provide reboot and other common system tools

Typical use cases

Typical use cases are:

  • public WiFi portals

  • media servers

  • data storage

  • IoT: processing, local aggregation and data uplink to the internet

  • public transport:

    • passenger information (PIS)

    • automatic passenger counting (APC)

  • remote management

    • VPN tunnels

    • local management IP address allocation for cameras, screens and other LAN devices

    • remote access to those devices for upgrades and management

You can see some demos within this documentation.

Data model

The orchestration consists of:

  • a container image (OCI)

    • this is the actual source image

  • a node service (Unwired Edge Cloud API)

    • this is the definition of how the images are run across a fleet of devices

    • includes network and other configuration

    • defines the current target version to be deployed

  • for each deployed container: a node service instance (Unwired Edge Cloud API)

    • this is an actual deployment of a node service on a device

    • each of these instances can have custom config maps

The orchestration acts primarily through configuration changes on the device.

OCI images are imported into node services through the DM_import_node_service mutation. This import step loads the OCI image from its source into the Unwired Edge Cloud and snapshots it there for further distribution to the devices.

The DM_import_node_service mutation also defines all other configuration settings that affect the fleet deployment, such as networking, access to local physical interfaces, and memory and CPU limits.

A node service is a deployment instruction on how to perform fleet deployments.

Node service instances are created by adding a container to a device. If automatic IP subnet allocation is enabled for the node service, a subnet unique within the fleet will be allocated. The only configurable option is to inject additional data via per instance config maps.
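
Putting this together, an import might look like the following sketch. The surrounding mutation syntax and the exact placement of image_sources are assumptions pieced together from the fragments shown further below; refer to the API schema for the authoritative shape:

mutation {
  DM_import_node_service(
    node_service_input: {
      ...
    }
    image_sources: [
      {
        arch: arm64
        image_reference: "registry.example.com/your-account/your-image:latest"
      }
    ]
  ) {
    ...
  }
}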

Host-provided information and communication

Warning

The files /lxc.env, /.time-status, /prometheus.prom and /host/ubus.sock are deprecated and must not be used; they will be removed in a future release.

Time good status

Within the container, non-empty content in the file /host/.time-status indicates that the system time has been externally synced via NTP. This is important for many services, since a non-synced system time can result in arbitrarily large time jumps into the future or the past.

This typically happens because the RTC is not initialized.
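
For example, a service that depends on correct wall-clock time can simply block until the file is non-empty; a minimal shell sketch (the service path is a placeholder):

#!/bin/sh
# Wait until the host signals externally synced (NTP) time:
# /host/.time-status is non-empty once time is good.
until [ -s /host/.time-status ]; do
  sleep 5
done
exec /usr/bin/my-service  # placeholder for your time-sensitive service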

lxc.env for environment variables

Every environment variable injected into the node_service in the import mutation ends up in the file /host/lxc.env, in a form that can be sourced by any shell (e.g. . /host/lxc.env).
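
For example (MY_SETTING stands in for any variable you injected):

#!/bin/sh
# Source the injected environment variables into the current shell.
. /host/lxc.env
echo "starting with MY_SETTING=${MY_SETTING}"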

Network configurations

LAN and WAN networking allow different kinds of network configurations:

WAN networking (option wan_configure_network):

  • disabled (the orchestration will not perform any network configuration)

  • allocated (the orchestration will assign an IP from the client network uplink)

  • dhcp (the orchestration will run a dhcp client on the WAN interface)

    • this is mainly used if the container’s WAN uplink is bridged to a network with third-party DHCP servers (e.g. in VMs)

LAN networking (option lan_configure_network):

  • disabled (the orchestration will not perform any network configuration)

  • allocated (the orchestration will assign an IP from the provided client network)

    • IPs will be chosen from the automatically or manually configured container LAN subnet

  • dhcp (the orchestration will run a dhcp server on the first LAN interface)

    • this will use either the provided subnet or, if the feature is used, the subnet allocated from the DHCP range template
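
A node_service_input fragment combining these options might look like this (whether the values are enums or quoted strings in the actual schema is an assumption; the bare-value style follows arch: amd64 in the examples below):

node_service_input {
  ...
  wan_configure_network: allocated
  lan_configure_network: dhcp
  ...
}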

DHCP range templates

LAN interfaces can be:

  • allocated to a fixed subnet

  • allocated from a subnet template by the orchestration

    • this feature automatically allocates unique subnets per container instance across a whole fleet

In the node_service_input of the import mutation, there are these two options to control the behavior:

node_service_input {
  ...
  access_provided_subnet: "10.190.0.1/16"
  dhcp_range_template: "10.190.0.1/24"
  ...
}

This will allocate a unique /24 for each container instance within the /16 subnet (for example, one instance might receive 10.190.1.0/24 and another 10.190.2.0/24). This results in:

  • the interface will be configured with the /16 subnet

    • for lan_configure_network set to allocated and dhcp

  • the dhcp server will be configured with address ranges from the /24 subnet

    • only if lan_configure_network is set to dhcp

One DHCP subnet for all instances of a node_service

To use the same DHCP range and subnet for all instances of the same node_service, configure it like this:

node_service_input {
  ...
  access_provided_subnet: "10.190.0.1/16"
  dhcp_range: "10.190.0.1/24"
  ...
}

This will use the single provided /24 for every container instance within the /16 subnet. This results in:

  • the interface will be configured with the /16 subnet

    • for lan_configure_network set to allocated and dhcp

  • the dhcp server will be configured with the single provided /24 subnet

    • only if lan_configure_network is set to dhcp

Forwarding/NAT/masquerading

If neither LAN nor WAN networking is disabled, the orchestration can optionally insert an automatic MASQUERADE rule to forward and masquerade/NAT traffic from the LAN to the WAN interface. This is a very common router scenario and is required for devices connected to the LAN interface to reach the internet.
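
Conceptually, the inserted rule corresponds to a standard netfilter masquerade; an illustrative sketch (interface names are placeholders, and the orchestration manages the actual rules for you):

# Illustration only: the orchestration inserts equivalent rules automatically.
# "lan0" and "wan0" are placeholder interface names.
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
iptables -A FORWARD -i lan0 -o wan0 -j ACCEPT
iptables -A FORWARD -i wan0 -o lan0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT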

Private registry authentication

If your source image is private or otherwise requires some form of authentication, you must supply the credentials in image_sources as part of the import mutation.

For your reference, here are a couple of common examples:

Google Container Registry

To use standard registry authentication with the GCR, there are several options:

  • Use a JSON keyfile as the password and _json_key as the user

  • Use a short-lived access token obtained from gcloud auth print-access-token as the password and oauth2accesstoken as the user
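
For the second option, the token can be fetched on the command line; note that such tokens are short-lived (typically about an hour), so the import should follow promptly:

# Fetch a short-lived OAuth2 access token for the active gcloud account.
TOKEN="$(gcloud auth print-access-token)"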

An example authn with eu.gcr.io via a short-lived access token:

image_sources: [
  {
    arch: amd64
    image_reference: "eu.gcr.io/your-account/your-image:latest"
    authn: {
      registry_user: "oauth2accesstoken"
      registry_pw: "your.accesstoken.here"
    }
  }
]

For further details, see Google Container Registry - Advanced Authentication.

GitHub Container Registry

To use standard registry authentication with the GHCR, there are several options:

  • Use your GitHub username as the user and a personal access token (classic) as the password (ensure it has the read:packages scope)

  • If you are importing the image as part of a GitHub Action, use your GitHub username as the user and the value of the GITHUB_TOKEN environment variable as the password, as sketched below
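
In a workflow, that might look like the following sketch (assuming the step passes GITHUB_TOKEN into its environment via the workflow's env: mapping; GITHUB_ACTOR is set automatically by GitHub Actions):

# Inside a GitHub Actions step with GITHUB_TOKEN passed in via `env:`.
registry_user="$GITHUB_ACTOR"  # the account that triggered the workflow
registry_pw="$GITHUB_TOKEN"    # ephemeral token scoped to the workflow run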

An example authn with ghcr.io via a PAT:

image_sources: [
  {
    arch: amd64
    image_reference: "ghcr.io/your-account/your-image:latest"
    authn: {
      registry_user: "your-account"
      registry_pw: "your.pat.goes.here"
    }
  }
]

For further details, see GitHub Container Registry - Authenticating to the container registry.