🎉 Latest Updates
- 2025/06: Magistral with mistral-common tokenizer support has been added to Axolotl. See examples to start training your own Magistral models with Axolotl!
- 2025/05: Quantization Aware Training (QAT) support has been added to Axolotl. Explore the docs to learn more!
- 2025/04: Llama 4 support has been added to Axolotl. See examples to start training your own Llama 4 models with Axolotl's linearized version!
- 2025/03: Axolotl has implemented Sequence Parallelism (SP) support. Read the blog and docs to learn how to scale your context length when fine-tuning.
- 2025/03: (Beta) Fine-tuning Multimodal models is now supported in Axolotl. Check out the docs to fine-tune your own!
- 2025/02: Axolotl has added LoRA optimizations that reduce memory usage and improve training speed for LoRA and QLoRA, in both single-GPU and multi-GPU training (DDP and DeepSpeed). Jump into the docs to give it a try.
- 2025/02: Axolotl has added GRPO support. Dive into our blog and GRPO example and have some fun!
- 2025/01: Axolotl has added Reward Modelling / Process Reward Modelling fine-tuning support. See docs.
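Most of these features are switched on from the same YAML config used for training. As a loose illustration only (the key names below are assumptions drawn from the linked docs and may differ between releases):

# Illustrative config toggles; consult the linked docs for the exact key names
sequence_parallel_degree: 4   # assumed key: shard long sequences across 4 GPUs (Sequence Parallelism)
rl: grpo                      # assumed key: switch the trainer to GRPO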
✨ Overview
Axolotl is a tool designed to streamline post-training for various AI models.
Features:
- Multiple Model Support: Train various models like LLaMA, Mistral, Mixtral, Pythia, and more. We are compatible with HuggingFace transformers causal language models.
- Training Methods: Full fine-tuning, LoRA, QLoRA, GPTQ, QAT, Preference Tuning (DPO, IPO, KTO, ORPO), RL (GRPO), Multimodal, and Reward Modelling (RM) / Process Reward Modelling (PRM).
- Easy Configuration: Re-use a single YAML file across dataset preprocessing, training, evaluation, quantization, and inference (a minimal sketch follows this list).
- Performance Optimizations: Multipacking, Flash Attention, Xformers, Flex Attention, Liger Kernel, Cut Cross Entropy, Sequence Parallelism (SP), LoRA optimizations, Multi-GPU training (FSDP1, FSDP2, DeepSpeed), Multi-node training (Torchrun, Ray), and many more!
- Flexible Dataset Handling: Load from local, HuggingFace, and cloud (S3, Azure, GCP, OCI) datasets.
- Cloud Ready: We ship Docker images and PyPI packages for use on cloud platforms and local hardware.
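To make "Easy Configuration" concrete, here is a minimal sketch of a LoRA fine-tuning YAML. Field names and defaults evolve between releases, and the model and dataset shown are placeholders, so treat this as illustrative and consult the Configuration Guide for the authoritative options:

# Minimal illustrative LoRA config (values are placeholders; see the Configuration Guide)
base_model: NousResearch/Llama-3.2-1B     # any HuggingFace causal LM
datasets:
  - path: mhenrichsen/alpaca_2k_test      # local path, HF Hub ID, or cloud (S3/Azure/GCP/OCI) URI
    type: alpaca
adapter: lora                             # use qlora for quantized LoRA, or omit for full fine-tuning
lora_r: 16
lora_alpha: 32
sequence_len: 2048
micro_batch_size: 2
num_epochs: 1
learning_rate: 2e-4
output_dir: ./outputs/lora-out

The same file is then passed to the preprocess, train, evaluate, and inference commands, so there is only one place to keep in sync.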
🚀 Quick Start
Requirements:
- NVIDIA GPU (Ampere or newer for bf16 and Flash Attention) or AMD GPU
- Python 3.11
- PyTorch ≥2.5.1
Installation
pip3 install -U packaging==23.2 setuptools==75.8.0 wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
# Download example axolotl configs, deepspeed configs
axolotl fetch examples
axolotl fetch deepspeed_configs # OPTIONAL
Other installation approaches are described here.
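As an optional sanity check after installing, assuming a CUDA-capable environment, confirm that the CLI resolves and that PyTorch can see your GPU:

# Optional post-install checks
axolotl --help                                                # should list the available subcommands
python3 -c "import torch; print(torch.cuda.is_available())"   # should print True on a working GPU setup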
Your First Fine-tune
# Fetch axolotl examples
axolotl fetch examples
# Or, specify a custom path
axolotl fetch examples --dest path/to/folder
# Train a model using LoRA
axolotl train examples/llama-3/lora-1b.yml
That's it! Check out our Getting Started Guide for a more detailed walkthrough.
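Once training finishes, the same config can drive the rest of the workflow. The commands below are a sketch: the adapter path depends on the output_dir set in your config, and flags may vary by release, so check the CLI docs before relying on them.

# Optional: tokenize/preprocess the dataset ahead of training
axolotl preprocess examples/llama-3/lora-1b.yml
# Run inference with the trained adapter (path assumed to match the config's output_dir)
axolotl inference examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"
# Merge the LoRA weights into the base model
axolotl merge-lora examples/llama-3/lora-1b.yml --lora-model-dir="./outputs/lora-out"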
📚 Documentation
- Installation Options - Detailed setup instructions for different environments
- Configuration Guide - Full configuration options and examples
- Dataset Loading - Loading datasets from various sources
- Dataset Guide - Supported formats and how to use them
- Multi-GPU Training
- Multi-Node Training
- Multipacking
- API Reference - Auto-generated code documentation
- FAQ - Frequently asked questions
🤝 Getting Help
- Join our Discord community for support
- Check out our Examples directory
- Read our Debugging Guide
- Need dedicated support? Please contact ✉️ wing@axolotl.ai for options
🌟 Contributing
Contributions are welcome! Please see our Contributing Guide for details.
❤️ Sponsors
Thank you to our sponsors who help make Axolotl possible:
- Modal - Modal lets you run jobs in the cloud by writing just a few lines of Python. Customers use Modal to deploy Gen AI models at large scale, fine-tune large language models, run protein folding simulations, and much more.
Interested in sponsoring? Contact us at wing@axolotl.ai
📜 License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.