Edge AI

The Edge AI add-on equips AIM-Linux platforms with accelerated inference pipelines, containerised deployment patterns, and integration templates for NVIDIA Jetson, NXP i.MX, and Qualcomm hardware. Refer to the official guide for detailed instructions: AIM-Linux AddOn Edge AI.

Capabilities

  • Model Deployment – Supports TensorRT, OpenVINO, ONNX Runtime, and GStreamer vision pipelines.
  • Container Templates – Provides pre-built Docker/Podman images for YOLO, classification, OCR, and anomaly detection workloads.
  • Data Ingestion – Connects USB/MIPI cameras, industrial sensors, and protocol data via the Protocol add-on.
  • Lifecycle Management – Rolls out, monitors, and updates AI workloads through DeviceOn policies.
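The GStreamer vision-pipeline capability can be illustrated with a minimal command. This is a sketch, not one of the add-on's templates: it assumes a Jetson target with DeepStream installed (the `nvstreammux`, `nvinfer`, and `nvdsosd` elements come from DeepStream), and the camera device and config path are placeholders for your setup.

```shell
# Sketch of a camera-to-inference pipeline on Jetson with DeepStream.
# /dev/video0 and the nvinfer config path are placeholders.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=/opt/models/detector_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

On other hardware profiles the inference element differs (e.g., OpenVINO- or NPU-backed plugins on i.MX and Qualcomm platforms), so treat the element names above as Jetson-specific.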

Deployment Workflow

  1. Choose a hardware profile (Jetson, i.MX 8M Plus, QCS6490) and install the matching BSP.
  2. Enable GPU/NPU drivers and runtime packages supplied with the Edge AI add-on.
  3. Import or convert models to the target accelerator (e.g., trtexec, edgetpu_compiler).
  4. Package inference services as containers or systemd services and publish them through DeviceOn.
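Steps 3 and 4 might look like the following on a Jetson target. This is a hedged sketch under stated assumptions: `trtexec` ships with TensorRT, the `nvidia` container runtime comes from NVIDIA's container toolkit, and the model file names and image tag are placeholders, not names from the add-on.

```shell
# Step 3 (sketch): convert an ONNX model to a TensorRT engine on the target.
# model.onnx and model.plan are placeholder names.
trtexec --onnx=model.onnx --saveEngine=model.plan

# Step 4 (sketch): package the inference service as a container and run it
# with GPU and camera access; "inference-svc" is a hypothetical image name.
# Publish the resulting image through DeviceOn afterwards.
docker build -t inference-svc:1.0 .
docker run -d --restart=unless-stopped --runtime nvidia \
  --device /dev/video0 inference-svc:1.0
```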

Optimisation Tips

  • Benchmark pipelines with the diagnostic tools before production rollout.
  • Use INT8/FP16 quantisation where supported to reduce latency and power usage.
  • Monitor inference metrics and thermal headroom via the Management add-on.
  • Coordinate with the Security add-on to protect model assets and API endpoints.
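For the quantisation tip, TensorRT's `trtexec` can build reduced-precision engines and report latency in one step. The flags below are standard `trtexec` options, but the model name and calibration cache path are placeholders, and INT8 requires either a calibration cache or a quantisation-aware-trained model.

```shell
# FP16 engine build; trtexec also prints a latency/throughput summary.
trtexec --onnx=model.onnx --fp16 --saveEngine=model_fp16.plan

# INT8 build; calib.cache is a placeholder calibration cache path.
trtexec --onnx=model.onnx --int8 --calib=calib.cache \
  --saveEngine=model_int8.plan
```

Comparing the reported latencies of the FP32, FP16, and INT8 engines is a quick way to verify the accuracy/latency trade-off before production rollout.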

Resources