OPT (Hugging Face)
Meta’s Open Pretrained Transformer suite (125M–175B), released with detailed training logbooks for reproducibility—canonical Hub org `facebook`, model ids `facebook/opt-*`.
Why it is included
OPT-125M and its siblings still rank highly in Hub downloads, and the suite remains foundational for open LLM scaling research.
Best for
Research reproduction, scaling laws coursework, and legacy fine-tune baselines.
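For a quick baseline run, the smallest checkpoint can be loaded through the Hugging Face `transformers` pipeline API—a minimal sketch assuming `transformers` and a PyTorch backend are installed (the model id `facebook/opt-125m` is the Hub's smallest OPT checkpoint):

```python
from transformers import pipeline

# Download OPT-125M from the Hub and build a text-generation pipeline.
generator = pipeline("text-generation", model="facebook/opt-125m")

# Greedy decoding keeps the output deterministic for baseline comparisons.
out = generator("Scaling laws suggest that", max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
```

The same pattern works for any size on the ladder by swapping the model id (e.g. `facebook/opt-1.3b`), which is what makes the suite convenient for scaling-law coursework.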
Strengths
- Open logbooks
- Wide size ladder
- Strong citation footprint
Limitations
- Older architecture than Llama-class models
- Licensing varies by size
Good alternatives
GPT-NeoX · BLOOM · Llama
Related tools
AI & Machine Learning
GPT-NeoX
EleutherAI framework and 20B-class models for training large autoregressive LMs with 3D parallelism—Apache-2.0 training stack.
Meta Llama (open models)
Meta’s Llama family of open **weights** (subject to the Llama license), with reference code, tooling, and downloads via the `meta-llama` org on Hugging Face.
OpenAI gpt-oss (Hub)
OpenAI’s open-weight GPT-OSS checkpoints (e.g. 20B, 120B) hosted on Hugging Face for local inference and fine-tuning.
GPT-2 (Hugging Face)
Historic decoder-only LM family (124M–1.5B) under `openai-community` on the Hub—still a default tutorial and pipeline test target.
GLM-5 (Hugging Face)
Z.ai’s GLM-5-generation checkpoints (including FP8 builds), distributed on the Hub for text generation and agent-style use cases.
Pythia (Hugging Face)
EleutherAI’s public scaling suite: matched GPT-NeoX–architecture models from 70M to 12B, trained on public data for interpretability research.
