OpenCatalog, curated by FLOSSK
AI & Machine Learning

PEFT

Parameter-efficient fine-tuning methods (LoRA, adapters, prompt tuning) integrated with Transformers models.

Why it is included

De facto open-source layer for affordable LLM adaptation on consumer GPUs within the Hugging Face stack.

Best for

Teams fine-tuning with LoRA/QLoRA who don't want to rewrite low-level kernels.
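To illustrate why LoRA-style methods are so cheap, here is a minimal NumPy sketch of the core idea (not the PEFT API itself): the frozen weight matrix W is augmented with a trainable low-rank update scaled by alpha/r, so only the small factors A and B are trained. The dimensions and hyperparameters below are illustrative assumptions.

```python
import numpy as np

d_in, d_out, r = 768, 768, 8          # illustrative layer size and LoRA rank
alpha = 16                            # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection, small init
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus low-rank correction: W x + (alpha/r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the base layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

trainable = A.size + B.size   # parameters actually updated
full = W.size                 # parameters a full fine-tune would update
print(f"trainable: {trainable} vs full: {full} ({trainable / full:.1%})")
```

At rank 8 on a 768x768 layer, the trainable factors hold about 2% of the full weight's parameters, which is what makes adaptation feasible on consumer GPUs.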

Strengths

  • Method breadth
  • Transformers integration
  • Active maintenance

Limitations

  • Adapters remain subject to the base model's license terms

Good alternatives

Unsloth · Axolotl · LLaMA Factory
