
ModrecWorkflow Demo

This project automates generating data, training, and deploying a modulation recognition model for radio signal classification. The workflow is intended to support experimentation, reproducibility, and deployment of machine learning models for wireless signal modulation classification, such as BPSK, QPSK, and 16-QAM.

Getting Started

  1. Clone the Repository
git clone https://github.com/yourorg/modrec-workflow.git
cd modrec-workflow
  2. Configure the Workflow

All workflow parameters (data paths, model architecture, training settings) are set in 'conf/app.yaml'; a short Python loading sketch follows this list.

Example:

dataset:
  input_dir: data/recordings
  num_slices: 8
  train_split: 0.8
  val_split: 0.2
  3. Run the Pipeline

Once you have updated app.yaml, any push or pull request to your repository triggers the workflow.
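
As a rough sketch, the snippet below shows one way the configuration above might be read from Python using PyYAML. The function name and keys mirror the example; the workflow's actual loading code lives in the repository's scripts and may differ.

# Hypothetical sketch of reading conf/app.yaml with PyYAML; not the project's actual loader.
from pathlib import Path

import yaml


def load_app_config(path: str = "conf/app.yaml") -> dict:
    """Load the workflow configuration into a plain dictionary."""
    with Path(path).open("r", encoding="utf-8") as handle:
        return yaml.safe_load(handle)


if __name__ == "__main__":
    config = load_app_config()
    dataset_cfg = config["dataset"]  # keys taken from the example above
    print("Input directory:", dataset_cfg["input_dir"])
    print("Train/val split:", dataset_cfg["train_split"], "/", dataset_cfg["val_split"])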

Artifacts Created

After successful execution, the workflow produces several artifacts in the output:

  • dataset
    • This is a folder containing two .h5 datasets, train and val
  • Checkpoints
    • Contains saved model checkpoints; each checkpoint includes the model's learned weights at a given stage of training
  • ONNX File
    • The ONNX file contains the trained model in a standardized format that allows it to be run efficiently across different platforms and deployment environments.
  • JSON Trace File (*.json)
    • Captures a full trace of model training and inference performance for profiling and debugging
    • Useful for identifying performance bottlenecks, optimizing resource usage, and tracking metadata
  • ORT File (*.ort)
    • This is an ONNX Runtime (ORT) model file, optimized for fast inference on various platforms
    • Why is it useful?
      • You can deploy this file on edge devices or servers, or integrate it into production systems for real-time signal classification
      • ORT files are cross-platform and allow easy inference acceleration using ONNX Runtime (see the inspection sketch after this list)
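
The sketch below shows one way to inspect these artifacts locally with h5py and onnxruntime. The file names (dataset/train.h5, model.onnx) and the float32 input are placeholders rather than guaranteed outputs of this workflow, so adjust paths and dtypes to match your run.

# Hypothetical inspection of the workflow artifacts; paths and dtypes are placeholders.
import h5py                 # for the .h5 datasets
import numpy as np
import onnxruntime as ort   # loads both .onnx and .ort model files

# Peek at the training dataset; the exact key layout depends on the workflow.
with h5py.File("dataset/train.h5", "r") as f:
    print("Top-level keys:", list(f.keys()))

# Run the exported model through ONNX Runtime.
session = ort.InferenceSession("model.onnx")
input_meta = session.get_inputs()[0]
print("Model input:", input_meta.name, input_meta.shape)

# Feed a random tensor of the right shape just to confirm the model executes.
shape = [dim if isinstance(dim, int) else 1 for dim in input_meta.shape]
dummy = np.random.randn(*shape).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print("Output shape:", outputs[0].shape)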

How to View the JSON Trace File

  1. Access this link and click Open Trace File.
  2. Select your JSON trace file.
  3. Explore the detailed visualizations of performance metrics, timelines, and resource usage to diagnose bottlenecks and optimize your workflow.
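
For a quick command-line summary before opening the viewer, a small script such as the one below can help. It assumes the trace uses Chrome trace-event style entries with name and dur (microseconds) fields, which is what ONNX Runtime profiling typically emits; adjust the field names if your trace differs.

# Rough sketch: print the operations with the largest total duration in a JSON trace.
import json
import sys
from collections import defaultdict


def summarize(trace_path: str, top_n: int = 10) -> None:
    """Print the top_n event names by accumulated duration."""
    with open(trace_path, "r", encoding="utf-8") as handle:
        data = json.load(handle)
    events = data["traceEvents"] if isinstance(data, dict) else data
    totals = defaultdict(float)
    for event in events:
        totals[event.get("name", "unknown")] += event.get("dur", 0)
    for name, dur in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]:
        print(f"{dur / 1000:10.2f} ms  {name}")


if __name__ == "__main__":
    summarize(sys.argv[1])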

Submitting Issues

Found a bug or have a feature request? Please submit an issue via the GitHub Issues page. When reporting bugs, include:

  • Steps to reproduce
  • Error logs and screenshots (if applicable)
  • Your app.yaml configuration (if relevant)

Developer Details

Coding Guidelines

  • Follow PEP 8 for Python code style.
  • Include type annotations for all public functions and methods.
  • Write clear docstrings for modules, classes, and functions.
  • Use descriptive commit messages and reference issue numbers when relevant.

Contributing

  • All contributions must be reviewed via pull requests.
  • Run all tests and ensure code passes lint checks before submission.
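
As an illustration of the coding guidelines above, a contribution might include helpers in the style of the hypothetical function below (the function itself is not part of this repository):

# Hypothetical example of the expected style; not actual project code.
def normalize_iq(samples: list[complex]) -> list[complex]:
    """Scale I/Q samples to unit peak magnitude.

    Args:
        samples: Raw complex baseband samples.

    Returns:
        The samples divided by the largest magnitude, or unchanged if empty.
    """
    peak = max((abs(s) for s in samples), default=0.0)
    return [s / peak for s in samples] if peak else samples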