Installation
This guide covers all the ways you can install and set up Axolotl for your environment.
1 Requirements
- NVIDIA GPU (Ampere architecture or newer for bf16 and Flash Attention) or AMD GPU
- Python ≥3.10
- PyTorch ≥2.5.1
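To quickly confirm the Python and GPU requirements are met, a minimal check (nvidia-smi assumes an NVIDIA driver is installed):
# Check Python version and GPU visibility
python3 --version
nvidia-smi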
2 Installation Methods
Please make sure PyTorch is installed before installing Axolotl in your local environment.
Follow the instructions at: https://pytorch.org/get-started/locally/
For Blackwell GPUs, please use PyTorch 2.7.0 and CUDA 12.8.
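For example, a CUDA 12.8 build of PyTorch 2.7.0 can be installed from the official PyTorch wheel index (verify the exact command for your platform at the link above):
pip3 install torch==2.7.0 --index-url https://download.pytorch.org/whl/cu128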
2.1 PyPI Installation (Recommended)
pip3 install -U packaging setuptools wheel ninja
pip3 install --no-build-isolation axolotl[flash-attn,deepspeed]
We use --no-build-isolation so that the build detects any already-installed PyTorch rather than clobbering it, and so that dependencies specific to that PyTorch version (and other installed co-dependencies) are resolved correctly.
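A quick sanity check after installation, to confirm that Axolotl imports cleanly and your original PyTorch build was left intact:
python3 -c "import axolotl, torch; print(torch.__version__)"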
2.2 uv Installation
uv is a fast, reliable Python package installer and resolver built in Rust. It offers significant performance improvements over pip and provides better dependency resolution, making it an excellent choice for complex environments.
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
# Choose your CUDA version to use with PyTorch, e.g. cu124, cu126, cu128,
# then create the venv and activate
export UV_TORCH_BACKEND=cu126
uv venv --no-project --relocatable
source .venv/bin/activate
# Install PyTorch - PyTorch 2.6.0 recommended
uv pip install packaging setuptools wheel
uv pip install torch==2.6.0
uv pip install awscli pydantic
# Install Axolotl from PyPI
uv pip install --no-build-isolation axolotl[deepspeed,flash-attn]
# optionally install with vLLM if you're using torch==2.6.0 and want to train w/ GRPO
uv pip install --no-build-isolation axolotl[deepspeed,flash-attn,vllm]
2.3 Edge/Development Build
For the latest features between releases:
git clone https://github.com/axolotl-ai-cloud/axolotl.git
cd axolotl
pip3 install -U packaging setuptools wheel ninja
pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
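Since this is an editable install, picking up new upstream commits later only requires updating the checkout and re-running the install step:
git pull
pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'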
2.4 Docker
docker run --gpus '"all"' --rm -it axolotlai/axolotl:main-latest
For development with Docker:
docker compose up -d
docker run --privileged --gpus '"all"' --shm-size 10g --rm -it \
--name axolotl --ipc=host \
--ulimit memlock=-1 --ulimit stack=67108864 \
--mount type=bind,src="${PWD}",target=/workspace/axolotl \
-v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
axolotlai/axolotl:main-latest
For Blackwell GPUs, please use axolotlai/axolotl:main-py3.11-cu128-2.7.0 or the cloud variant axolotlai/axolotl-cloud:main-py3.11-cu128-2.7.0.
Please refer to the Docker documentation for more information on the different Docker images that are available.
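Once inside a container, training works the same as in a local install; for example (the YAML path here is a placeholder for your own config):
axolotl train your_config.yml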
3 Cloud Environments
3.1 Cloud GPU Providers
For providers supporting Docker:
- Use the axolotlai/axolotl-cloud:main-latest image
- Available on several popular cloud GPU marketplaces
3.2 Google Colab
Use our example notebook.
4 Platform-Specific Instructions
4.1 macOS
From a clone of the repository:
pip3 install --no-build-isolation -e '.'
See Section 6 for Mac-specific issues.
4.2 Windows
We recommend using WSL2 (Windows Subsystem for Linux) or Docker.
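On recent Windows builds, WSL2 with Ubuntu can be enabled from an elevated PowerShell prompt; afterwards, follow the Linux instructions above from inside the WSL shell:
wsl --install -d Ubuntu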
5 Environment Managers
5.1 Conda/Pip venv
Install Python ≥3.10
Install PyTorch: https://pytorch.org/get-started/locally/
Install Axolotl (from a clone of the repository):
pip3 install -U packaging setuptools wheel ninja
pip3 install --no-build-isolation -e '.[flash-attn,deepspeed]'
(Optional) Log in to Hugging Face:
huggingface-cli login
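For non-interactive environments, the same authentication can be provided through the HF_TOKEN environment variable, which the Hugging Face libraries also honor:
export HF_TOKEN=<your-token>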
6 Troubleshooting
If you encounter installation issues, see our FAQ and Debugging Guide.
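As a first diagnostic, it is often worth confirming that PyTorch can actually see your GPU and which CUDA build it was compiled against:
python3 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"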