AI & Machine Learning

MLX LM

LLM inference and training on Apple silicon, built on Apple's MLX framework: efficient Metal-backed transformer implementations plus examples for running chat models locally.
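A minimal sketch of the Python API, assuming a recent mlx-lm release (pip install mlx-lm). The model ID is an illustrative 4-bit community port from the Hugging Face mlx-community hub, not a fixed recommendation:

    # Minimal local text generation with mlx-lm; runs on Apple silicon via Metal.
    from mlx_lm import load, generate

    # Fetches (on first use) and loads the weights and tokenizer.
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

    response = generate(
        model,
        tokenizer,
        prompt="Explain what MLX is in one sentence.",
        max_tokens=100,
    )
    print(response)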

Why it is included

Primary open path for serious on-device LLM work on macOS, where CUDA is not an option.

Best for

Mac Studio desktops and Apple silicon laptops running quantized open models locally.
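As a sketch of the local quantization workflow: mlx-lm ships a convert utility that downloads a Hugging Face checkpoint and writes a quantized MLX copy to disk. The parameter names below follow recent mlx-lm releases, and the source repo ID is illustrative:

    # Quantize a Hugging Face checkpoint to 4-bit MLX format, then run it locally.
    from mlx_lm import convert, load, generate

    convert(
        hf_path="mistralai/Mistral-7B-Instruct-v0.3",  # source checkpoint (illustrative)
        mlx_path="mlx_model",                          # local output directory
        quantize=True,
        q_bits=4,         # 4-bit weights
        q_group_size=64,  # quantization group size
    )

    model, tokenizer = load("mlx_model")
    print(generate(model, tokenizer, prompt="Hello!", max_tokens=50))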

Strengths

  • Metal performance
  • Simple Python API
  • Apple-maintained

Limitations

  • Apple silicon only; model coverage depends on community ports to the MLX format

Good alternatives

llama.cpp · Ollama
