OpenCatalog, curated by FLOSSK

Browse & filter

Filter by platform, license, maturity, maintenance cadence, and editorial tags such as privacy-focused or self-hosted. Search matches names, summaries, tags, and use cases.

5 tools match your filters

Optimized fine-tuning library claiming 2× faster LoRA/QLoRA with less VRAM via custom kernels and Hugging Face compatibility.

llm, fine-tuning, lora, training, optimization

Cross-platform inference accelerator for ONNX models: CPU, GPU, and mobile execution providers with graph optimizations.

inference, onnx, deployment, optimization

Intel toolkit for optimizing and deploying deep learning models on Intel CPUs, GPUs, and NPUs, with model conversion tools and runtime APIs.

inference, intel, edge, optimization

Automatic hyperparameter optimization framework with pruning, distributed search, and lightweight integration hooks.

hyperparameter-tuning, automl, python, optimization

CTranslate2 reimplementation of Whisper offering faster CPU/GPU inference and lower memory use than the reference PyTorch implementation.

speech, asr, inference, optimization