MLX LM
LLM inference and fine-tuning on Apple silicon via Apple's MLX framework: efficient Metal-backed transformers, with examples for running local chat models.
Why it is included
The primary open-source path for serious on-device LLM work on macOS, with no CUDA dependency.
Best for
Mac Studio desktops and MacBook laptops running quantized open models locally.
Strengths
- Metal performance
- Simple Python API
- Apple-maintained
Limitations
- Apple silicon only
- Model coverage depends on community ports
Good alternatives
llama.cpp · Ollama
Related tools
AI & Machine Learning
llama.cpp
Plain C/C++ inference for LLaMA-class models with broad community backends.
Ollama
Local LLM runner and model library with simple CLI and API for workstation inference.
llamafile
Single-file distributable LLM weights + llama.cpp runtime: run large models from one executable with broad OS CPU/GPU support.
ExLlamaV2
Memory-efficient CUDA inference kernels for quantized Llama-class models, popular in consumer GPU chat UIs.
vLLM
High-throughput LLM serving with PagedAttention, continuous batching, and OpenAI-compatible APIs for GPU clusters.
SGLang
Structured generation language for fast serving: RadixAttention, constrained decoding, and multi-turn batching for frontier-class workloads.
