| Type | MoE LLM |
| --- | --- |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more |
| Release Date | April 28, 2026 |
| License | MIT |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy it to get started with inference.
```console
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:dev-cu13-mimo-v2.5-pro \
  python3 -m sglang.launch_server \
    --model-path XiaomiMiMo/MiMo-V2.5-Pro \
    --host 0.0.0.0 --port 8000 \
    --moe-runner-backend flashinfer_trtllm \
    --attention-backend fa4 \
    --tool-call-parser mimo \
    --reasoning-parser mimo \
    --mm-attention-backend triton_attn \
    --max-running-requests 1024 \
    --tp 8 --ep 8 \
    --mem-fraction-static 0.90 \
    --trust-remote-code
```
Note
Use lmsysorg/sglang:dev-cu13-mimo-v2.5-pro for CUDA 13 or lmsysorg/sglang:dev-mimo-v2.5-pro for CUDA 12.9; any subsequent SGLang release also includes MiMo V2.5 Pro support.
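To verify the deployment, send a test request. A minimal sketch, assuming the server is reachable at localhost:8000: SGLang serves an OpenAI-compatible API, so the standard `openai` Python client works against it.

```python
# Minimal smoke test against the SGLang server launched above.
# Assumes localhost:8000; SGLang's OpenAI-compatible endpoint does not
# check the API key locally, so a placeholder value is passed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-V2.5-Pro",
    messages=[{"role": "user", "content": "In one sentence, what is a mixture-of-experts model?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```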
Model Benchmarks
Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
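The measurement loop can be sketched as follows. This is an illustrative outline of the methodology, not the harness used for these charts: the endpoint, prompt size, output length, and concurrency levels are assumptions, and chunk arrival times from the stream only approximate per-token timing.

```python
# Hedged sketch: sweep concurrency with a fixed prompt size and a fixed
# per-request output length, measuring TTFT (time to first token) and
# ITL (inter-token latency) from streamed responses.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request(prompt: str, max_tokens: int):
    start = time.perf_counter()
    ttft, last, itls = None, None, []
    stream = await client.chat.completions.create(
        model="XiaomiMiMo/MiMo-V2.5-Pro",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
        stream=True,
    )
    async for _chunk in stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start       # first chunk -> TTFT
        else:
            itls.append(now - last)  # gap between chunks -> ITL sample
        last = now
    return ttft, (sum(itls) / len(itls)) if itls else 0.0

async def sweep():
    prompt = "word " * 512  # fixed input size (illustrative)
    for concurrency in (1, 8, 32, 128):
        results = await asyncio.gather(
            *(one_request(prompt, max_tokens=128) for _ in range(concurrency))
        )
        avg_ttft = sum(r[0] for r in results) / concurrency
        avg_itl = sum(r[1] for r in results) / concurrency
        print(f"concurrency={concurrency:<4} avg TTFT={avg_ttft:.3f}s avg ITL={avg_itl:.4f}s")

asyncio.run(sweep())
```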
[Charts: ITL (inter-token latency) vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Avg TTFT]
Deploy NVIDIA HGX B200 on Vultr Cloud GPU
