qmb-mvp (0.2.0)
Installation
pip install --index-url /api/packages/qoherent/pypi/simple/ --no-deps qmb-mvp
qmb-mvp is a command-line toolkit for RF model training, evaluation, export, hyperparameter search, and pruning. The repository now uses a single YAML-driven runtime rooted in src/qmb; the old Hydra/Prefect stack has been removed.
Install
uv venv --python 3.13
uv pip install torch torchvision
uv sync --inexact
For CPU-only installs:
UV_TORCH_BACKEND=cpu uv pip install torch torchvision
UV_TORCH_BACKEND=cpu uv sync --inexact
CLI
qmb --help
qmb train --config path/to/train.yaml
qmb eval path/to/checkpoints/best.ckpt --split test
qmb export path/to/checkpoints/best.ckpt
qmb hpo --config path/to/hpo.yaml
qmb prune path/to/checkpoints/best.ckpt --config path/to/prune.yaml
Training writes timestamped runs under outputs/ by default with logs, checkpoints, resolved config snapshots, metrics, evaluation artifacts, and optional ONNX exports.
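As an illustration only (the exact directory and file names inside a run are not specified by this README), a timestamped run directory could look roughly like:

```
outputs/
`-- <timestamped-run>/
    |-- logs                        (training logs)
    |-- checkpoints                 (including the best checkpoint passed to eval/export/prune)
    |-- resolved config snapshot
    |-- metrics and evaluation artifacts
    `-- optional ONNX export
```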
Config Shape
Training configs are plain YAML mappings with these top-level sections:
- task
- data
- model
- optimization
- runtime
- evaluation
- export
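As a rough sketch of that shape (only the seven top-level section names are guaranteed by this README; everything inside them is illustrative and should be checked against the example configs under documentation/v2/examples):

```yaml
# Hypothetical train.yaml skeleton; keys inside each section are placeholders.
task: {}            # task selection, e.g. the built-in classification task
data: {}            # dataset locations, splits, and transform pipelines
model: {}           # model family and its hyperparameters
optimization: {}    # optimizer, scheduler, early stopping, gradient clipping
runtime: {}         # runtime options such as component_modules
evaluation: {}      # evaluation splits and metrics
export: {}          # optional ONNX export settings
```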
Common features supported by the current runtime include:
- IQ HDF5 datasets for legacy, structured, Curator, DeepSig 2018, and auto-detection modes
- Split-specific transform pipelines
- Torch and timm optimizers and schedulers
- IQ model families including Tiny CNN, LeNet, VTCNN2, ResNet1D, MobileNetV3, EfficientNetV2, and timm-backed IQ classifiers
- Resume, early stopping, gradient clipping, torch.compile, ONNX export, Optuna HPO, and structured pruning
See the example configs under documentation/v2/examples for working YAML layouts.
Built-In Tasks
classification is the only built-in runnable task. regression is a stub and should be implemented as a custom extension task.
Extension Authoring
The primary supported customization workflow is a copyable extension folder whose Python files
import only from the public qmb.api* surface. Start from the reference folders under
documentation/v2/examples/extensions/:
- minimal_task_extension/ for a custom task that reuses built-in qmb datasets, models, and losses
- full_task_extension/ for a custom task plus a custom dataset, model, and dataset config resolver
Those folders can live on the same machine running qmb, or in a repository that the runner
clones before execution. The runtime then loads the extension through:
runtime:
  component_modules:
    - /path/to/my_qmb_extension
runtime.component_modules supports two forms:
- package directories containing __init__.py
- direct .py module paths
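For example (the paths below are placeholders, and mixing both forms in one list is an assumption, not something this README states):

```yaml
runtime:
  component_modules:
    - /opt/extensions/my_qmb_extension     # package directory containing __init__.py
    - /opt/extensions/standalone_task.py   # direct .py module path
```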
Relative filesystem inputs in qmb configs are resolved from the config file location. That lets
examples/train.yaml use component_modules: [".."] to load the parent extension directory
without depending on the caller's working directory.
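Concretely, the config-relative resolution described above would let an extension folder carry its own config (the directory name my_qmb_extension/ is illustrative; examples/train.yaml and the ".." entry come from this README):

```yaml
# my_qmb_extension/examples/train.yaml
# ".." resolves relative to this config file, i.e. to my_qmb_extension/ itself,
# regardless of the caller's working directory.
runtime:
  component_modules:
    - ".."
```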
Resolved input paths are persisted into run artifacts and checkpoints, so later eval, export,
and prune commands remain portable across working directories.
qmb.api.scaffold remains available for internal tooling that needs to materialize the packaged
template assets, but it is not required for the primary user workflow.
Layout
src/qmb/
|-- artifacts/
|-- api/
|-- cli/
|-- compression/
|-- core/
|-- data/
|-- datasets/
|-- engine/
|-- evaluation/
|-- export/
|-- losses/
|-- models/
|-- tasks/
`-- transforms/
custom_components/ is now a repo-local/internal example area. It is useful for developing and
testing project-specific tasks inside this repository, but it is not the primary user
workflow and is not shipped in package builds.
Testing
uv sync --group dev
uv run --group dev pytest