OpenCatalog, curated by FLOSSK
AI & Machine Learning

faster-whisper

A CTranslate2 reimplementation of OpenAI's Whisper for faster CPU/GPU inference and lower memory use than the reference PyTorch implementation.

Why it is included

A production default for many self-hosted transcription pipelines that run Whisper weights.

Best for

Serving or batch ASR where throughput and RAM matter more than research flexibility.

Strengths

  • Speed
  • Quantization
  • Drop-in style API
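
The drop-in style API closely mirrors openai-whisper. A minimal sketch of loading a quantized model and transcribing a file (the model size, audio path, and the `join_segments` helper are illustrative assumptions, not part of the library):

```python
try:
    from faster_whisper import WhisperModel  # pip install faster-whisper
except ImportError:
    WhisperModel = None  # helper below still works without the library installed


def join_segments(segments):
    """Hypothetical helper: concatenate decoded segment texts into one transcript."""
    return " ".join(seg.text.strip() for seg in segments)


if __name__ == "__main__" and WhisperModel is not None:
    # int8 quantization reduces memory use; "small" is an illustrative model size.
    model = WhisperModel("small", device="cpu", compute_type="int8")
    # transcribe() returns a lazy generator of segments plus language-detection info.
    segments, info = model.transcribe("audio.mp3")  # hypothetical file path
    print(info.language, join_segments(segments))
```

Quantization is selected at load time via `compute_type`, which is where most of the memory savings over the reference implementation come from.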

Limitations

  • Tracks upstream Whisper releases, so feature parity can lag or vary between versions

Good alternatives

Whisper · Whisper.cpp · mlx-whisper
