# ModrecWorkflow Demo

This project automates the process of generating data, training, and deploying a modulation recognition model for radio signal classification. The workflow is intended to support experimentation, reproducibility, and deployment of machine learning models for wireless signal modulation classification of schemes such as QPSK, 16-QAM, and BPSK.

## Getting Started

1. Clone the Repository

```commandline
git clone https://github.com/yourorg/modrec-workflow.git
cd modrec-workflow
```

2. Configure the Workflow

All workflow parameters (data paths, model architecture, training settings) are set in `conf/app.yaml`.

Example:

```yaml
dataset:
  input_dir: data/recordings
  num_slices: 8
  train_split: 0.8
  val_split: 0.2
```
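
For reference, here is a minimal sketch of how these settings could be read in Python with PyYAML. The key names follow the example above; the actual configuration loader used by the workflow may differ.

```python
# Minimal sketch: read the dataset section of conf/app.yaml.
# Assumes PyYAML is installed (pip install pyyaml); the real workflow's
# config handling may use a different mechanism.
import yaml

with open("conf/app.yaml", "r") as f:
    config = yaml.safe_load(f)

dataset_cfg = config["dataset"]
print(dataset_cfg["input_dir"])    # e.g. data/recordings
print(dataset_cfg["train_split"])  # e.g. 0.8
```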

### Configure GitHub Secrets

Before running the pipeline, add the following repository secrets in GitHub (Settings → Secrets and variables → Actions):

- **RIAHUB_USER**: Your RIA Hub username.
- **RIAHUB_TOKEN**: RIA Hub access token with `read:packages` scope (from your RIA Hub account **Settings → Access Tokens**).
- **CLONER_TOKEN**: Personal access token for `stark_cloner_bot` with `read_repository` scope (from your on-prem Git server user settings).

Once secrets are configured, you can run the pipeline:

3. Run the Pipeline

Once you have committed your changes to `app.yaml`, any push or pull request to the repository starts the workflow.

## Artifacts Created

After successful execution, the workflow produces several artifacts in the output:

- dataset
  - This is a folder containing two `.h5` datasets, called `train` and `val` (see the h5py sketch after this list).
- Checkpoints
  - Contains saved model checkpoints; each checkpoint captures the model's learned weights at a particular stage of training.
- ONNX File
  - The ONNX file contains the trained model in a standardized format that allows it to be run efficiently across different platforms and deployment environments.
- JSON Trace File (`*.json`)
  - Captures a full trace of model training and inference performance for profiling and debugging.
  - Useful for identifying performance bottlenecks, optimizing resource usage, and tracking metadata.
- ORT File (`*.ort`)
  - This is an ONNX Runtime (ORT) model file, optimized for fast inference on various platforms.
  - Why is it useful?
    - You can deploy this file on edge devices or servers, or integrate it into production systems for real-time signal classification (see the inference sketch after this list).
    - ORT files are cross-platform and allow easy inference acceleration using ONNX Runtime.
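
The following is a minimal sketch of inspecting the generated datasets with `h5py`. The folder and file names (`dataset/train.h5`) and the contents of the files are assumptions for illustration; list the keys as shown to see what the workflow actually writes.

```python
# Minimal sketch: inspect one of the generated HDF5 datasets.
# Assumes h5py is installed and that the dataset folder contains train.h5;
# the internal layout is not assumed -- the keys are simply listed.
import h5py

with h5py.File("dataset/train.h5", "r") as f:
    print("Top-level keys:", list(f.keys()))
    for name, obj in f.items():
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
```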
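
And a sketch of loading the exported model with ONNX Runtime for inference. The file name, input dtype, and input shape are placeholders; query the session's inputs (as shown) to find the model's real input name and shape.

```python
# Minimal sketch: run the exported model with ONNX Runtime.
# Works for both the .onnx and the optimized .ort file. The file name and
# the float32 dummy input are placeholders -- check sess.get_inputs() for
# the model's actual input name, shape, and type.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # or "model.ort"
inp = sess.get_inputs()[0]
print("Input:", inp.name, inp.shape, inp.type)

# Build a dummy input, replacing any dynamic dimensions with 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})
print("Predicted class index:", int(np.argmax(outputs[0])))
```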

## How to View the JSON Trace File

1. Open [Perfetto](https://ui.perfetto.dev/) in your browser.
2. Click **Open trace file** and select your JSON trace file.
3. Explore detailed visualizations of performance metrics, timelines, and resource usage to diagnose bottlenecks and optimize your workflow.

## Submitting Issues

Found a bug or have a feature request?

Please submit an issue via the GitHub Issues page.

When reporting bugs, include:

- Steps to reproduce
- Error logs and screenshots (if applicable)
- Your `app.yaml` configuration (if relevant)

## Developer Details

Coding Guidelines:

- Follow PEP 8 for Python code style.
- Include type annotations for all public functions and methods.
- Write clear docstrings for modules, classes, and functions.
- Use descriptive commit messages and reference issue numbers when relevant.

Contributing:

- All contributions must be reviewed via pull requests.
- Run all tests and ensure code passes lint checks before submission.