Update Model Builder instructions to use checkpoint (.ckpt) files

lswersk 2026-04-21 15:12:57 -04:00
parent ab23ee50f0
commit 5b94865b86


@@ -31,7 +31,7 @@ RIA_Example/
 │ └── example_synthetic_dataset.h5 # Synthetically generated dataset (Generator output)
 ├── models/
-│ ├── example_model.pt # PyTorch Module (Model Trainer input / output)
+│ ├── example_model.ckpt # PyTorch Checkpoint (Model Trainer input / output)
 │ └── example_model.onnx # Exported ONNX model (Screens / Application Packager input)
 ├── applications/
@@ -67,9 +67,9 @@ The Library is a cross-repository browser for all RF and ML assets on the platform.
 |------|-----------|-------------|
 | Recording | `.h5` / `.hdf5` | Raw IQ capture files |
 | Radio Dataset | `.h5` / `.hdf5` | Labelled, curated training datasets |
-| PyTorch Module | `.pt` / `.pth` | Serialized PyTorch models |
-| PyTorch State Dict | `.pt` / `.pth` | Model weight dictionaries |
-| PyTorch Checkpoint | `.pt` / `.pth` | Mid-training checkpoints |
+| PyTorch Module | `.py` | PyTorch model definitions containing an `nn.Module` class |
+| PyTorch State Dict | `.pt` / `.pth` | Model weights / state dictionaries |
+| PyTorch Checkpoint | `.ckpt` | Training checkpoints with weights, optimizer state, and metadata |
 | ONNX Graph | `.onnx` | Portable inference models |
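The checkpoint row is the substantive change: unlike a bare state dict, a `.ckpt` carries everything needed to resume training. A minimal PyTorch sketch of that layout (the model and dict keys are hypothetical stand-ins; the exact structure RIA Hub writes is not specified in this diff):

```python
import os
import tempfile

import torch
from torch import nn, optim

# Hypothetical stand-in model; a real architecture from the Library
# (e.g. ResNet1D) would take its place.
model = nn.Linear(4, 2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# A .ckpt bundles weights, optimizer state, and metadata in one dict.
checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "epoch": 10,
}

path = os.path.join(tempfile.mkdtemp(), "example_model.ckpt")
torch.save(checkpoint, path)

# Resuming: restore both model and optimizer before continuing training.
restored = torch.load(path, weights_only=True)
model.load_state_dict(restored["model_state_dict"])
optimizer.load_state_dict(restored["optimizer_state_dict"])
start_epoch = restored["epoch"]
```

Restoring the whole dict recovers optimizer state and the epoch counter, which a bare `.pt` / `.pth` state dict cannot provide.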
---
@@ -153,13 +153,13 @@ The Generator creates synthetic labelled datasets from a parameter sweep without
 The Model Trainer builds a training workflow YAML and commits it to your repository. A Gitea Actions runner then executes the training job.
-**Example files:** `datasets/example_radio_dataset.h5`, `models/example_model.pt` (optional pre-trained start)
-**Expected output:** `.riahub/workflows/train.yaml` in your repository, plus a trained `example_model.pt` artifact
+**Example files:** `datasets/example_radio_dataset.h5`, `models/example_model.ckpt` (optional pre-trained start)
+**Expected output:** `.riahub/workflows/train.yaml` in your repository, plus a trained `example_model.ckpt` artifact
 **Steps:**
 1. Go to **Model Builder → Model Trainer**.
 2. In **Repository**, select the repository where you want to store the workflow and output artifacts.
-3. In **Model**, choose an architecture (e.g. `ResNet1D`) or use `example_model.pt` as a starting checkpoint.
+3. In **Model**, choose an architecture (e.g. `ResNet1D`) or use `example_model.ckpt` as a starting checkpoint.
 4. In **Dataset**, select `example_radio_dataset.h5` from the Library.
 5. Configure training:
 - **Optimizer:** `Adam`, learning rate `1e-3`
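The committed `train.yaml` itself is not shown in this diff. Purely as a hypothetical sketch of a Gitea-Actions-style training workflow (every name, label, and the `train.py` entry point here is illustrative, not the schema the Model Trainer actually generates):

```yaml
# Hypothetical sketch of .riahub/workflows/train.yaml -- illustrative only.
name: train-example-model
on: workflow_dispatch

jobs:
  train:
    runs-on: gpu-runner          # assumed runner label
    steps:
      - uses: actions/checkout@v4
      - name: Train model
        run: |
          python train.py \
            --dataset datasets/example_radio_dataset.h5 \
            --resume-from models/example_model.ckpt \
            --optimizer adam --lr 1e-3
```

Gitea Actions uses GitHub-Actions-compatible workflow syntax, so the `on:` / `jobs:` / `steps:` structure carries over; the training command itself would be whatever the generated workflow invokes.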
@@ -197,12 +197,12 @@ HPO runs a sweep over a configurable search space, training multiple model variants
 Compression applies pruning and/or quantization to reduce model size for edge deployment. The output is an ONNX file.
-**Example files:** `models/example_model.pt`, `datasets/example_radio_dataset.h5`
+**Example files:** `models/example_model.ckpt`, `datasets/example_radio_dataset.h5`
 **Expected output:** `models/example_model.onnx`
 **Steps:**
 1. Go to **Model Builder → Compression**.
-2. Select `example_model.pt` as the source model and `example_radio_dataset.h5` as the calibration dataset.
+2. Select `example_model.ckpt` as the source model and `example_radio_dataset.h5` as the calibration dataset.
 3. Configure the compression pipeline (pruning ratio, quantization bits).
 4. Click **Commit Workflow**. The Actions job exports the compressed model to ONNX automatically.
 5. The resulting `.onnx` file is committed back to your repository.
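As a rough illustration of the quantization half of such a pipeline (the model is a hypothetical stand-in; RIA Hub's actual compression backend is not specified in this diff), dynamic int8 quantization in PyTorch shrinks the serialized weights while preserving the output shape:

```python
import io

import torch
from torch import nn

# Hypothetical stand-in for the architecture inside example_model.ckpt.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Dynamic quantization: Linear weights are stored as int8 instead of
# float32 -- one technique a compression pipeline can apply.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def state_dict_bytes(m: nn.Module) -> int:
    """Serialized parameter size, for a rough before/after comparison."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

# The quantized model is smaller and still maps (1, 16) -> (1, 4).
smaller = state_dict_bytes(quantized) < state_dict_bytes(model)
out = quantized(torch.randn(1, 16))
```

The final ONNX export is handled by the Actions job, as step 4 describes, with the calibration dataset informing the quantization parameters.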