OpenCatalog, curated by FLOSSK

Browse & filter

Filter by platform, license, maturity, maintenance cadence, and editorial tags such as privacy-focused or self-hosted. Search matches names, summaries, tags, and use cases.

3 tools match your filters

Cross-platform inference accelerator for ONNX models: CPU, GPU, and mobile execution providers with graph optimizations.

inference · onnx · deployment · optimization

Unified model serving and deployment toolkit: package models as APIs, ship to Kubernetes, and manage runtimes.

serving · deployment · mlops · api

Retargetable MLIR-based compiler and runtime to lower ML graphs to CPUs, GPUs, and accelerators from multiple frontends.

compiler · mlir · runtime · deployment · taaft-repositories