forked from qoherent/modrec-workflow
updated readme
Commit 4b48b98bae (parent e5fb0ebff6)
README.md
# ModrecWorkflow Demo
This project automates the process of generating data, training, and deploying a modulation recognition model for radio signal classification. The workflow supports experimentation, reproducibility, and deployment of machine learning models for wireless signal modulation classification, such as QPSK, 16-QAM, and BPSK.
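To make the classification task concrete, here is a minimal NumPy sketch of the kind of signal involved: QPSK symbols at baseband with additive noise. This is purely illustrative; the repository's actual recordings and preprocessing may differ.

```python
import numpy as np

# QPSK maps pairs of bits to 4 phase states (illustrative example only;
# the repository's real preprocessing pipeline may differ).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2048)

# Group bits into pairs and map each pair to a complex constellation point.
symbol_idx = bits[0::2] * 2 + bits[1::2]
constellation = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
iq = constellation[symbol_idx]  # complex baseband samples, shape (1024,)

# Add complex white Gaussian noise to simulate a noisy channel at ~10 dB SNR.
noise = (rng.normal(size=iq.shape) + 1j * rng.normal(size=iq.shape)) / np.sqrt(2)
snr_db = 10
iq_noisy = iq + noise * 10 ** (-snr_db / 20)

print(iq_noisy.shape, iq_noisy.dtype)
```

A classifier's job is to recover the modulation scheme (QPSK here) from samples like `iq_noisy`, without access to the clean symbols.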
## Getting Started
1. Clone the Repository
```commandline
git clone https://github.com/yourorg/modrec-workflow.git
cd modrec-workflow
```
2. Configure the Workflow
All workflow parameters (data paths, model architecture, training settings) are set in `conf/app.yaml`.
Example:
```yaml
dataset:
  input_dir: data/recordings
  num_slices: 8
  train_split: 0.8
  val_split: 0.2
```
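The `train_split` / `val_split` values describe how recordings are partitioned. A small sketch of what an 80/20 split means in practice (`split_dataset` is a hypothetical helper, not a function from this repository):

```python
# Illustrative only: how train_split from conf/app.yaml would partition
# a list of recordings. Not the repository's actual implementation.
def split_dataset(items, train_split=0.8):
    n_train = int(len(items) * train_split)
    return items[:n_train], items[n_train:]

recordings = [f"rec_{i}.npy" for i in range(100)]
train, val = split_dataset(recordings, train_split=0.8)
print(len(train), len(val))  # 80 20
```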
3. Run the Pipeline

Once you have committed your changes to `conf/app.yaml`, any push (or pull request) to your repository will trigger the workflow.
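That trigger typically corresponds to a Gitea Actions workflow file along these lines. The file name, trigger list, and step commands below are assumptions for illustration; check the repository's `.gitea/workflows/` directory for the real definition.

```yaml
# .gitea/workflows/pipeline.yaml (hypothetical name and contents)
on: [push, pull_request]
jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: bash scripts/build_dataset.sh   # script paths assumed for illustration
      - run: bash scripts/train_model.sh
```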
## Artifacts Created

After successful execution, the workflow produces several artifacts in the output:

- Dataset
  - A folder containing two `.h5` datasets, `train` and `val`
- Checkpoints
  - Contains saved model checkpoints; each checkpoint includes the model's learned weights at a given stage of training
- ONNX File
  - Contains the trained model in a standardized format that allows it to be run efficiently across different platforms and deployment environments
- JSON Trace File (`*.json`)
  - Captures a full trace of model training and inference performance for profiling and debugging
  - Useful for identifying performance bottlenecks, optimizing resource usage, and tracking metadata
- ORT File (`*.ort`)
  - An ONNX Runtime (ORT) model file, optimized for fast inference on various platforms
  - Why is it useful?
    - You can deploy this file on edge devices or servers, or integrate it into production systems for real-time signal classification
    - ORT files are cross-platform and allow easy inference acceleration using ONNX Runtime
## How to View the JSON Trace File

1. Open [Perfetto](https://ui.perfetto.dev/)
2. Click **Open trace file** and select your JSON trace file
3. Explore detailed visualizations of performance metrics, timelines, and resource usage to diagnose bottlenecks and optimize your workflow
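Outside Perfetto, a trace in Chrome trace-event format (the format ONNX Runtime's profiler emits) can also be inspected with a short script. The sketch below uses a synthetic trace; a real trace file from this workflow may wrap the events differently or carry extra fields.

```python
import json

# Synthetic trace in Chrome trace-event format: a list of event objects.
# Real traces from the workflow may contain additional fields.
trace = json.loads("""
[
  {"name": "Conv", "dur": 120, "ph": "X", "ts": 0},
  {"name": "Relu", "dur": 15,  "ph": "X", "ts": 120},
  {"name": "Conv", "dur": 110, "ph": "X", "ts": 135}
]
""")

# Sum the duration of each operator to find the most expensive ones.
totals = {}
for event in trace:
    totals[event["name"]] = totals.get(event["name"], 0) + event["dur"]

print(sorted(totals.items(), key=lambda kv: -kv[1]))  # [('Conv', 230), ('Relu', 15)]
```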
## Submitting Issues

Found a bug or have a feature request? Please submit an issue via the GitHub Issues page.

When reporting bugs, include:

- Steps to reproduce
- Error logs and screenshots (if applicable)
- Your `app.yaml` configuration (if relevant)
## Developer Details

Coding guidelines:

- Follow PEP 8 for Python code style.
- Include type annotations for all public functions and methods.
- Write clear docstrings for modules, classes, and functions.
- Use descriptive commit messages and reference issue numbers when relevant.

Contributing:

- All contributions must be reviewed via pull requests.
- Run all tests and ensure code passes lint checks before submission.
- See CONTRIBUTING.md for more details (if present).