Update README.md

This commit is contained in:
jonny 2026-04-21 13:00:47 -04:00
parent ebb4a9f685
commit ab23ee50f0


@@ -35,18 +35,13 @@ RIA_Example/
 │ └── example_model.onnx # Exported ONNX model (Screens / Application Packager input)
 ├── applications/
-│ └── example_application.json # Application Composer output (Application Packager)
+│ └── example_application.json # Application Composer output, can be built in RIA Screens (Application Packager)
-├── screens-apps/
-│ └── zone-fingerprinting/
-│     ├── manifest.json # Screens app manifest
-│     ├── zone_fingerprint.onnx # ONNX model for this app
-│     └── zone-fingerprinting.tar.gz # Packaged app (upload directly to Screens)
 ├── workflows/
 │ ├── train.yaml # Example Model Trainer workflow (committed to .riahub/workflows/)
 │ ├── hpo.yaml # Example HPO workflow
 │ └── compression.yaml # Example Compression workflow
 └── curator-configs/
     └── example_curator_config.json # Example curation configuration for the Curator tool
@@ -237,38 +232,16 @@ The application JSON format is documented in `schemas/application/ria_applicatio
 | `native-x86` | Standard x86 Linux deployment |
 | `native-arm64` | ARM edge devices |
 | `nvidia-x86` | GPU-accelerated inference on x86 |
-| `RIA Screens` | Web based runtime environment to monitor app |
 ---
 ### Screens
-Screens deploys a packaged RF inference application to a live pipeline. You upload a `.tar.gz` app package, configure a data source (live SDR, file playback, or synthetic), and start the pipeline. Results stream back to the browser in real time.
+Screens deploys a packaged RF inference application to a live pipeline. You build an app from Application Composer, configure a data source (live SDR, file playback, or synthetic), and start the pipeline. Results stream back to the browser in real time.
-**Example files:** `screens-apps/zone-fingerprinting/zone-fingerprinting.tar.gz`
-#### Uploading and running the Zone Fingerprinting demo
-The Zone Fingerprinting app classifies RF devices in real time into five device classes (three authorized, two unauthorized) using an ONNX model and a 128-feature IQ preprocessor.
-**Steps:**
-1. Go to **Screens** and click **New App**.
-2. Give it a name (e.g. `Zone Fingerprinting Demo`) and click **Create**.
-3. On the app page, click **Upload Package** and upload `zone-fingerprinting.tar.gz`.
-4. The app is now configured. To run it with a synthetic signal (no hardware needed):
-   - The default `manifest.json` uses `dataSource.type: synthetic` — no changes required.
-5. Click **Start**. The inference pipeline starts and begins streaming results.
-6. The dashboard shows:
-   - Live classification scores per device class
-   - Confidence threshold control
-   - Spectrogram panel
-   - Preprocessor feature monitor
-   - Event log
-**To run with a real SDR (PlutoSDR):**
-1. Ensure your SDR device is connected and detected.
-2. Edit the app configuration and change `dataSource.type` to `sdr` with your device identifier.
-3. Set `center_frequency`, `sample_rate`, and `gain` to match your signal of interest.
-4. Click **Restart**.
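The SDR steps removed above amount to swapping the manifest's `dataSource` block from `synthetic` to `sdr`. A purely illustrative fragment, not taken from the repository: the `device` field name and the PlutoSDR URI and 915 MHz values are assumptions to adapt to your own hardware and signal of interest.

```json
{
  "dataSource": {
    "type": "sdr",
    "params": {
      "device": "ip:192.168.2.1",
      "sample_rate": 1000000,
      "center_frequency": 915000000,
      "gain": 40
    }
  }
}
```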
 #### App package format
@@ -276,7 +249,6 @@ A Screens app package is a `.tar.gz` containing:
 - `manifest.json` — describes the app (models, GUI layout, data source, preprocessor)
 - ONNX model file(s) at the path(s) listed in `manifest.models[].path`
-See `screens-apps/zone-fingerprinting/manifest.json` for a complete annotated example. The full manifest schema is at `schemas/screens/app_manifest.schema.json`.
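The package layout above (a `manifest.json` plus the ONNX files it lists, archived at the root of a `.tar.gz`) can be produced with a short script. A minimal sketch using Python's `tarfile` module; the directory and file names in the usage comment follow the example app and are assumptions about your layout.

```python
import tarfile
from pathlib import Path

def package_app(app_dir: str, out_path: str, files: list[str]) -> None:
    """Bundle a Screens app directory into a .tar.gz package.

    Each file is stored at the archive root, matching the layout
    described above (manifest.json plus the ONNX model files it lists).
    """
    root = Path(app_dir)
    with tarfile.open(out_path, "w:gz") as tar:
        for name in files:
            tar.add(root / name, arcname=name)

# Example (paths are illustrative):
# package_app("screens-apps/zone-fingerprinting",
#             "zone-fingerprinting.tar.gz",
#             ["manifest.json", "zone_fingerprint.onnx"])
```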
 **Data source types:**
@@ -325,41 +297,6 @@ dataset.h5
 └── attrs # Dataset-level attributes: name, version, radio_task, backend
 ```
-### Screens App Manifest (`manifest.json`)
-```json
-{
-  "name": "My App",
-  "version": "1.0.0",
-  "description": "Brief description",
-  "author": "Your Name",
-  "models": [
-    {
-      "name": "classifier",
-      "path": "model.onnx",
-      "backend": "onnxrt-cpu"
-    }
-  ],
-  "preprocess": "magnitude_phase_window_stats",
-  "dataSource": {
-    "type": "synthetic",
-    "params": {
-      "sample_rate": 1000000,
-      "center_frequency": 915000000,
-      "buffer_size": 1024
-    }
-  },
-  "components": [
-    {
-      "component": "BackendInferenceOutput",
-      "props": { "modelName": "classifier", "topK": 3 }
-    }
-  ],
-  "config": {
-    "confidence_threshold": 0.75
-  }
-}
-```
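Before packaging a manifest like the one above, a quick sanity check can catch missing fields early. This is a sketch only: the required-key lists are assumptions inferred from the example manifest, not from the official schema at `schemas/screens/app_manifest.schema.json`, which may require more.

```python
import json

# Assumed minimal key sets, inferred from the example manifest (not the schema)
REQUIRED_TOP_LEVEL = ("name", "version", "models", "dataSource")
REQUIRED_MODEL_KEYS = ("name", "path", "backend")

def missing_manifest_keys(text: str) -> list[str]:
    """Return the names of expected keys absent from a manifest JSON string."""
    manifest = json.loads(text)
    missing = [k for k in REQUIRED_TOP_LEVEL if k not in manifest]
    for i, model in enumerate(manifest.get("models", [])):
        missing += [f"models[{i}].{k}" for k in REQUIRED_MODEL_KEYS if k not in model]
    return missing
```

An empty result means the manifest has at least the fields used in the example; validating against the published JSON schema remains the authoritative check.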
 ### Application JSON (`application.json`)