

There are three ways to install Miles. Docker is recommended because Miles pins patched versions of SGLang, Megatron-LM, and a few CUDA kernels, all of which ship preinstalled in the image.

Method 1: Docker (recommended)

Pull the image and start a container:
docker pull radixark/miles:latest

docker run --rm \
  --gpus all --ipc=host --shm-size=32g \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  --network=host \
  -it radixark/miles:latest /bin/bash
The image ships with:
  • PyTorch (matching the container’s CUDA / ROCm version)
  • Megatron-LM, SGLang, FlashAttention-3, DeepGEMM, Apex
  • Ray, uv, and Miles installed in editable mode at /root/miles
See Platforms for platform-specific notes.
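To sanity-check which of these components are importable in a given environment, here is a minimal Python sketch. The module names passed in are assumptions about how the packages are importable; adjust them to your setup.

```python
import importlib.util

def check_components(names):
    """Return a dict mapping module name -> whether it is importable."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

# Module names below are assumptions about the Miles image contents.
status = check_components(["torch", "sglang", "ray"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

This avoids actually importing heavy packages (find_spec only locates them), so it runs quickly even when the modules are large.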

Method 2: From source

Clone the repository and install it into an existing Python environment.
git clone https://github.com/radixark/miles.git
cd miles
pip install -r requirements.txt
pip install -e . --no-deps
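Since installing the pinned dependencies yourself is error-prone, it can help to record exactly which versions ended up installed. A small sketch using the standard-library importlib.metadata; the distribution names queried are assumptions, so adjust them to whatever `pip list` shows in your environment:

```python
from importlib.metadata import version, PackageNotFoundError

def report_versions(pkgs):
    """Map each distribution name to its installed version, or None if absent."""
    out = {}
    for p in pkgs:
        try:
            out[p] = version(p)
        except PackageNotFoundError:
            out[p] = None
    return out

# Distribution names here are assumptions; adjust to your environment.
print(report_versions(["miles", "sglang", "megatron-core"]))
```

Including this output in a bug report makes version mismatches much easier to diagnose.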
Patched dependencies. Miles pins patched versions of SGLang and Megatron-LM. Installing them yourself at the wrong commit is the most common source of bug reports — use Docker if you can.

Method 3: Update an existing container

If you are already running a Miles container and want the latest code:
cd /root/miles
git pull --rebase
pip install -e . --no-deps
ray stop && ray start --head --port=6379
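After restarting Ray, you can confirm the head node is accepting connections on the port given above. A minimal sketch using only the standard library (the host and port are assumptions matching the `ray start` command):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumes the Ray head was started locally with --port=6379.
print(port_open("127.0.0.1", 6379))
```

This only checks that something is listening on the port; for a fuller health check, `ray status` on the head node reports cluster state.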

Verify

Confirm Miles imports and the GPUs are visible:
python -c "import miles; print('Miles import OK')"
nvidia-smi
If either command fails, see Debugging or the FAQ.
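Both checks can also be combined into one script that degrades gracefully on machines without an NVIDIA driver, which is convenient inside CI or on login nodes. A sketch, not part of Miles itself:

```python
import shutil
import subprocess

def gpu_summary():
    """Return one line per visible GPU, or a note if nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not on PATH (no NVIDIA driver visible)"
    proc = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return proc.stdout.strip() or proc.stderr.strip()

print(gpu_summary())
```

`nvidia-smi -L` prints one line per GPU with its model and UUID, so an empty or error result here points at a driver or container-runtime problem rather than a Miles issue.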

Hardware requirements

Hardware                            Status
NVIDIA H100 / H200                  Production (CI guarded)
NVIDIA B100 / B200                  Production
NVIDIA A100                         Supported (FP8 features disabled)
AMD MI300X, MI325, MI350X, MI355X   Supported via ROCm
For multi-node training you also need a high-bandwidth interconnect (InfiniBand, RoCEv2, or Slingshot) delivering at least 200 GB/s per node. Single-node jobs run fine over NVLink alone.

Next steps