Note: There's now an official guide from Radxa, but I still like my own :)
This guide details how to leverage the Orion O6's (CIX P1 CD8180) NPU (Neural Processing Unit) for hardware-accelerated object detection in Frigate. By offloading detection to the NPU, you can process high-resolution streams with minimal CPU usage.
Photo by Shir Danieli, modified to look "security camera-ish"
Before running Frigate, you must install the necessary drivers and AI models on the Orion O6 host operating system.
Follow the Radxa NPU SDK Installation guide. You only need the runtime drivers, not the compilation tools.
- Download the SDK and extract it.
- Install the User Mode Driver (UMD) and kernel driver:

  ```shell
  sudo dpkg -i cix-npu-driver_xxx_arm64.deb
  sudo dpkg -i cix-noe-umd_xxx_arm64.deb
  ```

  (Ignore packages related to "compiler" or "builder" unless you plan to compile models yourself.)
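To sanity-check the install, you can list the installed packages and loaded kernel modules. Exact package and module names vary by SDK release, so the grep patterns below are assumptions:

```shell
# Confirm the driver packages landed (driver + UMD should both appear)
dpkg -l | grep -i cix || echo "no CIX packages found"
# Check whether the NPU kernel module is loaded
lsmod | grep -i -E 'npu|noe' || echo "NPU kernel module not visible"
```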
The CIX AI Model Hub repository contains the pre-compiled .cix models required for inference. Follow these instructions to install it.
- Install Git LFS (crucial for large model files):

  ```shell
  sudo apt-get update && sudo apt-get install git-lfs
  git lfs install
  ```
- Clone the Model Hub: navigate to your intended Frigate configuration directory (e.g., `/home/radxa/frigate/config`) and clone the repo listed in the instructions. Note: you may delete model subdirectories you do not plan to use to save space.
- Verify model functionality: test that the NPU is working by running a sample inference script directly on the host:

  ```shell
  python3 inference_npu.py
  ```
If you run into problems with libnoe or other CIX-specific dependencies, run install-local-npu-dependencies.sh located in the repo. You can also locate them manually with `find -name`.
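For example, to search the usual prefixes for the runtime library (install locations vary by SDK version, so treat the paths as a starting point):

```shell
# Search common prefixes for the CIX runtime libraries
find /usr /opt -name 'libnoe*' 2>/dev/null
```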
Ensure your project folder matches this structure. This keeps your configuration, custom plugins, and media storage organized.
```
radxa@RadxaOrionO6:~/Docker_Containers/frigate
.
├── docker-compose.yml
├── config
│   ├── config.yaml
│   ├── ai_model_hub_25_Q3/      <-- Cloned repo
│   └── model_cache
│       └── oriono6
│           └── yolox_m.cix      <-- Copy your preferred model here for easy access
├── plugins
│   └── detectors
│       └── oriono6.py           <-- Custom plugin file
└── storage
    ├── clips
    ├── exports
    └── recordings
```
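The whole layout can be created in one go from the project root:

```shell
# Create the folder layout from the tree above
mkdir -p config/model_cache/oriono6 \
         plugins/detectors \
         storage/clips storage/exports storage/recordings
```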
Configure your docker-compose.yml to map the NPU devices and your custom plugin directories. The "ffmpeg hardware transplant" relies on mounting the host's specific ffmpeg binaries or libraries if you are not using a custom-built image.
```yaml
services:
  frigate:
    container_name: frigate
    privileged: true # Required for hardware access
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "128mb" # Recommended for multiple cameras
    # CRITICAL: Tell Frigate to check the custom Radxa folders for libraries
    environment:
      - LD_LIBRARY_PATH=/opt/cixgpu-compat/lib/aarch64-linux-gnu:/opt/cixgpu-pro/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu:/lib/aarch64-linux-gnu:/usr/local/lib
    ports:
      - "8971:8971"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC
      - "8555:8555/udp" # WebRTC
    volumes:
      # --- Config & Storage ---
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      # --- HARDWARE TRANSPLANT ---
      - /usr/bin/ffmpeg:/usr/lib/custom_ffmpeg/bin/ffmpeg:ro
      - /usr/bin/ffprobe:/usr/lib/custom_ffmpeg/bin/ffprobe:ro
      - /lib/aarch64-linux-gnu:/lib/aarch64-linux-gnu:ro
      - /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu:ro
      - /opt/cixgpu-compat:/opt/cixgpu-compat:ro
      - /opt/cixgpu-pro:/opt/cixgpu-pro:ro
      # --- PLUGIN & SDK MOUNTS ---
      # 1. Map your custom plugin folder
      - ./plugins/detectors/oriono6.py:/opt/frigate/frigate/detectors/plugins/oriono6.py
      # 2. Map the Radxa AI Hub code (so we can import 'utils')
      - /home/radxa/Downloads/ai_model_hub_25_Q3:/opt/radxa_ai_hub:ro
      # 3. Map the venv libraries (so we can import 'libnoe')
      - /home/radxa/Downloads/ai_model_hub_25_Q3/venv/lib/python3.11/site-packages:/opt/radxa_venv:ro
```

Update your Frigate config to use the custom detector.
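Before the first launch, it's worth confirming that every host path the compose file bind-mounts actually exists: Docker silently creates an empty directory for any missing bind-mount source, which leads to confusing library errors later. A quick pre-flight loop over the paths from the compose file above (adjust if your paths differ):

```shell
# Pre-flight: confirm the host paths mounted read-only actually exist
for p in /usr/bin/ffmpeg /usr/bin/ffprobe /lib/aarch64-linux-gnu \
         /usr/lib/aarch64-linux-gnu /opt/cixgpu-compat /opt/cixgpu-pro \
         /home/radxa/Downloads/ai_model_hub_25_Q3; do
  [ -e "$p" ] && echo "OK      $p" || echo "MISSING $p"
done
```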
```yaml
ffmpeg:
  path: /usr/lib/custom_ffmpeg
  # Frigate reads this to set up the hardware video pipeline correctly
  hwaccel_args:
    - -hwaccel
    - drm
    - -hwaccel_device
    - /dev/dri/renderD128

model:
  path: /config/model_cache/oriono6/yolox_m.cix
  width: 640
  height: 640
  input_tensor: nhwc
  input_pixel_format: bgr
  input_dtype: int # Metadata only, we handle float conversion in the script

detectors:
  orion:
    type: oriono6

cameras:
  LivingRoom:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://PLACEHOLDER:RTSP_STREAM@192.168.X.X
          roles:
            - detect
            - record
    detect:
      enabled: true
      width: 1920
      height: 1080
      fps: 5
    objects:
      track:
        - person

detect:
  enabled: true
  fps: 5

version: 0.16-0
```
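The plugin itself is where the NPU-specific work happens, but the Frigate side of the contract is simple: custom detectors subclass `DetectionApi` and return a fixed-size `(20, 6)` float array of `[class_id, score, y_min, x_min, y_max, x_max]` rows with normalized coordinates. As a sketch, `to_frigate_detections` is a hypothetical helper (not from the actual oriono6.py) showing the conversion step, assuming your NPU post-processing already produced pixel-space `[x1, y1, x2, y2, score, class_id]` boxes:

```python
import numpy as np

def to_frigate_detections(boxes, width=640, height=640, max_det=20):
    """Convert pixel-space [x1, y1, x2, y2, score, class_id] rows into
    Frigate's fixed-size (max_det, 6) array of
    [class_id, score, y_min, x_min, y_max, x_max], coordinates normalized
    to 0..1. Unused rows stay zero, which Frigate treats as no detection."""
    out = np.zeros((max_det, 6), dtype=np.float32)
    for i, (x1, y1, x2, y2, score, cls) in enumerate(boxes[:max_det]):
        out[i] = (cls, score, y1 / height, x1 / width, y2 / height, x2 / width)
    return out

# A single detection covering the top-left quadrant of a 640x640 frame:
dets = to_frigate_detections([[0, 0, 320, 320, 0.9, 0]])
```

Inside the plugin, this would be the return value of `detect_raw()` after running inference on the NPU.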
Launch the container:

```shell
sudo docker compose up -d
```

Check the logs to confirm the plugin loaded:

```shell
sudo docker logs frigate -f
```

Look for the confirmation line:

```
oriono6.py: --- ORION YOLOX (OPTIMIZED) --- Ready.
```
The Orion O6 is capable of impressive performance. Using yolox_m.cix on a 1080p stream typically results in roughly 40 ms inference times, leaving the CPU free for other tasks.
The docker-compose file also performs the ffmpeg "hardware transplant" to enable hardware encoding, so Frigate uses the host's ffmpeg 5.1.6-0+deb12u1+cix build instead of the image's default ffmpeg installation.
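To confirm the transplant worked, you can ask the mounted binary inside the container for its version string — the `+cix` suffix should appear (the fallback message just covers the case where the container isn't running):

```shell
# Verify the container is using the host's CIX ffmpeg build
(sudo docker exec frigate /usr/lib/custom_ffmpeg/bin/ffmpeg -version 2>/dev/null \
  || echo "frigate container not reachable") | head -n 1
```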