This repository contains example input files, configurations, and expected outputs for every tool on RIA Hub. If you are new to the platform, start here. Download any file and follow the walkthrough for the tool you want to try.
---
## What is RIA Hub?
RIA Hub is a collaborative platform for RF and machine learning workflows. It combines a Git-based repository system with a suite of specialized tools that cover the full pipeline from raw IQ recordings to live inference deployments:
| Stage | Tools |
|-------|-------|
| **Collect** | Library — browse, organize, and share RF recordings and models |
| **Curate** | Dataset Manager — slice, qualify, augment, and inspect radio datasets |
| **Train** | Model Builder — train, optimize, and compress PyTorch models |
| **Package** | Application Packager — compose and build C++ inference applications |
| **Deploy** | Screens — run packaged apps on live, recorded, or synthetic RF data |
---
## Repository Layout
```
├── screens-apps/
│   └── zone-fingerprinting/
│       ├── manifest.json               # App manifest (models, GUI layout, data source)
│       ├── zone_fingerprint.onnx       # ONNX model for this app
│       └── zone-fingerprinting.tar.gz  # Packaged app (upload directly to Screens)
│
├── workflows/
│   ├── train.yaml                      # Example Model Trainer workflow (committed to .riahub/workflows/)
│   ├── hpo.yaml                        # Example HPO workflow
│   └── compression.yaml                # Example Compression workflow
│
└── curator-configs/
    └── example_curator_config.json     # Example curation configuration for the Curator tool
```
---
## Tool Walkthroughs
### Library
The Library is a cross-repository browser for all RF and ML assets on the platform. It automatically discovers files pushed to any repository you have access to.
**To explore the example recording:**
1. Import `recordings/example_iq_recording.h5` into any repository via **New Repository → Upload Files** or by pushing via Git LFS.
2. Navigate to **Library** in the top navigation bar.
3. Select the **Recordings** tab. Your file will appear with metadata and a spectrogram thumbnail.
4. Click the file to open the detail view — you can inspect signal properties, view the spectrogram, and copy the file to another repository.
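If you want to inspect the recording locally before importing it, a short `h5py` script can dump its structure. This is a minimal sketch that assumes only that the file is valid HDF5; it prints whatever groups, datasets, and attributes it finds:

```python
import h5py

# Walk the HDF5 tree, printing each group/dataset with shape, dtype, and attributes.
def describe(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(f"{name}/ (group)")
    for key, value in obj.attrs.items():
        print(f"    attr {key} = {value}")

with h5py.File("recordings/example_iq_recording.h5", "r") as f:
    f.visititems(describe)
```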
**Supported asset types in the Library:**
| Type | Extension | Description |
|------|-----------|-------------|
| Recording | `.h5` / `.hdf5` | Raw IQ capture files |
| Radio Dataset | `.h5` / `.hdf5` | Labelled, curated training datasets |
---
### Dataset Manager — Curator
The Curator takes raw IQ recordings and produces a labelled, ready-to-train HDF5 dataset. It applies a configurable DSP pipeline: slicing, quality filtering, and optional augmentation. See `curator-configs/example_curator_config.json` for a full configuration, and the sketch below for what the stages do.
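To make those stages concrete, here is a rough NumPy illustration of slicing, power-based quality filtering, and AWGN augmentation. It is not the Curator's implementation; the slice length, power threshold, and noise level are all illustrative choices:

```python
import numpy as np

def curate(iq, slice_length=1024, min_power_db=-40.0, augment=False):
    """Conceptual curation pass over a 1-D complex IQ capture.

    Mirrors the Curator's stages (slice -> qualify -> augment), but all
    parameter names and defaults here are illustrative, not the tool's.
    """
    # Slicing: cut the capture into non-overlapping fixed-length windows.
    n_slices = len(iq) // slice_length
    slices = iq[: n_slices * slice_length].reshape(n_slices, slice_length)

    # Quality filter: keep slices whose mean power clears a dB threshold.
    power_db = 10 * np.log10(np.mean(np.abs(slices) ** 2, axis=1) + 1e-12)
    slices = slices[power_db > min_power_db]

    # Optional augmentation: append complex-AWGN copies of the kept slices.
    if augment:
        noise = 0.01 * (np.random.randn(*slices.shape)
                        + 1j * np.random.randn(*slices.shape))
        slices = np.concatenate([slices, slices + noise])

    # Return as [N, slice_length, 2] float32, the dataset layout documented below.
    return np.stack([slices.real, slices.imag], axis=-1).astype(np.float32)
```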
---
### Model Builder — Compression
The compression workflow prunes and quantizes a trained PyTorch model and exports the result to ONNX.
1. Go to **Model Builder → Compression**.
2. Select `example_model.pt` as the source model and `example_radio_dataset.h5` as the calibration dataset.
3. Configure the compression pipeline (pruning ratio, quantization bits).
4. Click **Commit Workflow**. The Actions job exports the compressed model to ONNX automatically.
5. The resulting `.onnx` file is committed back to your repository.
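Once the compressed `.onnx` file is back in your repository, you can sanity-check it locally with `onnxruntime`. A quick sketch (the filename below is a placeholder for whatever the job committed):

```python
import numpy as np
import onnxruntime as ort

# Load the compressed model and inspect its input signature.
session = ort.InferenceSession("example_model_compressed.onnx")  # placeholder name
inp = session.get_inputs()[0]
print("input:", inp.name, inp.shape, inp.type)

# Run one inference on random data; dynamic axes appear as strings/None,
# so substitute 1 (a batch of one) for any non-integer dimension.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.randn(*shape).astype(np.float32)
outputs = session.run(None, {inp.name: x})
print("output shapes:", [o.shape for o in outputs])
```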
---
### Application Packager — Application Composer
The Application Composer is a visual node-graph editor for wiring together C++ operator blocks into an inference application. The output is an application JSON file.
1. Go to **Application Packager → Application Composer**.
2. Browse the **Operators** panel on the left. Drag an operator onto the canvas.
3. Wire operator ports together by dragging from an output port to an input port.
4. Configure each operator's parameters in the sidebar.
5. Click **Commit Application** to save `example_application.json` to your repository.
6. Click **Build** to trigger a build workflow on a registered runner.
The application JSON format is documented in `schemas/application/ria_application.schema.json`. See `applications/example_application.json` for a minimal working example.
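If you hand-edit an application file, you can validate it against that schema before committing, e.g. with the `jsonschema` package (a sketch, assuming both files are checked out at the paths above):

```python
import json
from jsonschema import validate  # pip install jsonschema

with open("applications/example_application.json") as f:
    application = json.load(f)
with open("schemas/application/ria_application.schema.json") as f:
    schema = json.load(f)

# Raises jsonschema.ValidationError with a detailed message on failure.
validate(instance=application, schema=schema)
print("application JSON is valid")
```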
**Target profiles:**
| Profile | Use when |
|---------|----------|
| `native-x86` | Standard x86 Linux deployment |
| `native-arm64` | ARM edge devices |
| `nvidia-x86` | GPU-accelerated inference on x86 |
---
### Screens
Screens deploys a packaged RF inference application to a live pipeline. You upload a `.tar.gz` app package, configure a data source (live SDR, file playback, or synthetic), and start the pipeline. Results stream back to the browser in real time.
#### Uploading and running the Zone Fingerprinting demo
The Zone Fingerprinting app classifies RF devices in real time into five device classes (three authorized, two unauthorized) using an ONNX model and a 128-feature IQ preprocessor.
**Steps:**
1. Go to **Screens** and click **New App**.
2. Give it a name (e.g. `Zone Fingerprinting Demo`) and click **Create**.
3. On the app page, click **Upload Package** and upload `zone-fingerprinting.tar.gz`.
4. The app is now configured. To run it with a synthetic signal (no hardware needed):
- The default `manifest.json` uses `dataSource.type: synthetic` — no changes required.
5. Click **Start**. The inference pipeline starts and begins streaming results.
6. The dashboard shows:
- Live classification scores per device class
- Confidence threshold control
- Spectrogram panel
- Preprocessor feature monitor
- Event log
**To run with a real SDR (PlutoSDR):**
1. Ensure your SDR device is connected and detected.
2. Edit the app configuration and change `dataSource.type` to `sdr` with your device identifier.
3. Set `center_frequency`, `sample_rate`, and `gain` to match your signal of interest.
4. Click **Restart**.
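The same change can be made by editing `manifest.json` in the package before upload. A sketch, assuming the tuning parameters sit alongside `type` under `dataSource` (check `schemas/screens/app_manifest.schema.json` for the exact field layout, and substitute your own device identifier):

```python
import json

with open("manifest.json") as f:
    manifest = json.load(f)

# Switch from the synthetic source to a live SDR capture.
# Field placement is assumed here; confirm against app_manifest.schema.json.
manifest["dataSource"] = {
    "type": "sdr",
    "device": "ip:192.168.2.1",   # placeholder PlutoSDR identifier
    "center_frequency": 2.45e9,   # Hz
    "sample_rate": 10e6,          # Hz
    "gain": 40,                   # dB
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```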
#### App package format
A Screens app package is a `.tar.gz` containing:
- `manifest.json` — describes the app (models, GUI layout, data source, preprocessor)
- ONNX model file(s) at the path(s) listed in `manifest.models[].path`
See `screens-apps/zone-fingerprinting/manifest.json` for a complete annotated example. The full manifest schema is at `schemas/screens/app_manifest.schema.json`.
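Packaging is just a tarball of the manifest plus the model files it references. A minimal sketch (run from the directory holding the files; the archive paths are assumed to match the `manifest.models[].path` entries):

```python
import tarfile

# Bundle the manifest and its ONNX model into a Screens-ready package.
with tarfile.open("zone-fingerprinting.tar.gz", "w:gz") as tar:
    tar.add("manifest.json", arcname="manifest.json")
    tar.add("zone_fingerprint.onnx", arcname="zone_fingerprint.onnx")
```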
**Data source types:**
| Type | Description |
|------|-------------|
| `synthetic` | Built-in AWGN tone generator — no hardware required |
| `recording` | Play back a `.h5` IQ recording from the Library |
| `sdr` | Live data from a connected SDR device |
| `agent` | Live data from a remote SDR via an edge agent node |
When building your own Screens app, export your model to ONNX with matching input/output names and shapes, then reference them in `manifest.json`.
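With PyTorch, for example, the export can pin those names explicitly. In this sketch the model is a stand-in (a tiny classifier over the 128-feature preprocessor output with five classes, as in the demo); what matters is that `input_names` and `output_names` match what your `manifest.json` declares:

```python
import torch
import torch.nn as nn

# Stand-in model: replace with your trained nn.Module.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 5))
model.eval()

dummy = torch.randn(1, 128)  # one batch of 128 preprocessor features

torch.onnx.export(
    model,
    dummy,
    "my_model.onnx",
    input_names=["features"],       # must match the name your manifest references
    output_names=["class_scores"],  # likewise
    dynamic_axes={"features": {0: "batch"}, "class_scores": {0: "batch"}},
)
```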
---
### RIA Projects
Projects group your datasets, models, training runs, and deployed applications into a single tracked entity. The project dashboard shows a three-stage pipeline view: Data Management → Model Building → Deployment.
**Steps:**
1. Go to **Projects → New Project**.
2. Name your project and create it.
3. Link assets from the Library using the **Link Asset** button on each pipeline stage.
4. As you run Curator, Model Trainer, and Screens jobs, link the outputs to track progress through the pipeline.
---
## File Format Reference
### HDF5 Radio Dataset (`.h5`)
Curated and generated datasets share a common HDF5 layout:
```
dataset.h5
├── data/ # IQ samples, shape [N, slice_length, 2] (float32)
├── labels/ # Integer class labels, shape [N]
└── metadata/   # Recording metadata carried through from source
```
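You can also produce a dataset in this layout yourself. A minimal `h5py` sketch with random stand-in content (the tree's trailing slashes leave open whether `data` and `labels` are datasets or groups; this sketch writes them as plain datasets and leaves `metadata` empty):

```python
import h5py
import numpy as np

N, slice_length = 100, 1024

data = np.random.randn(N, slice_length, 2).astype(np.float32)  # IQ as [N, slice_length, 2]
labels = np.random.randint(0, 5, size=N)                       # integer class labels, shape [N]

with h5py.File("dataset.h5", "w") as f:
    f.create_dataset("data", data=data)
    f.create_dataset("labels", data=labels)
    f.create_group("metadata")  # recording metadata would be carried through here
```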