
MiniMax M2.5

MiniMax M2.5 is a large-scale Mixture-of-Experts (MoE) language model developed by MiniMax for coding, agentic workflows, and enterprise automation. It has 229B total parameters with ~10B active per token, built on a 62-layer transformer with 48 attention heads and a hidden size of 3,072; its MoE layers hold 256 experts, of which 8 are activated per token. The model integrates Multi-Token Prediction (MTP) for faster generation and supports a 200K-token context window, enabling efficient reasoning, tool interaction, and complex task decomposition in large-scale agentic systems.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: 12 February 2026
License: Modified MIT
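
To make the routing described above concrete, here is a minimal NumPy sketch of top-8-of-256 expert selection with the stated hidden size. The gating weights are random stand-ins, not MiniMax's trained gate, and a real implementation fuses this selection with expert dispatch.

PYTHON
import numpy as np

HIDDEN, EXPERTS, TOP_K = 3072, 256, 8  # sizes from the description above

def route(tokens, gate_w):
    """Pick the top-8 experts per token and softmax their gate logits."""
    logits = tokens @ gate_w                             # (n_tokens, 256)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]        # 8 expert ids per token
    sel = np.take_along_axis(logits, top, axis=-1)
    mix = np.exp(sel - sel.max(axis=-1, keepdims=True))  # softmax over the 8
    mix /= mix.sum(axis=-1, keepdims=True)
    return top, mix

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, HIDDEN))    # 4 token embeddings (random stand-ins)
gate_w = rng.standard_normal((HIDDEN, EXPERTS))
ids, weights = route(tokens, gate_w)
print(ids.shape, weights.shape)              # (4, 8) (4, 8)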

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e SGLANG_ENABLE_DEEPGEMM_UE8M0=false \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path MiniMaxAI/MiniMax-M2.5 \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --attention-backend flashinfer \
  --max-running-requests 1024 \
  --reasoning-parser minimax-append-think \
  --tool-call-parser minimax-m2 \
  --tp 8 \
  --ep 8 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
Note

Expert parallelism (--ep 8) should be enabled for TP=8 deployments, as in the command above.
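
Once the server is up, SGLang serves an OpenAI-compatible API on the port mapped above, so any OpenAI client can be pointed at it. A minimal sketch follows; the prompt is illustrative, and api_key is a dummy value since the command above sets no --api-key.

PYTHON
from openai import OpenAI

# The server above listens on localhost:8000 with no API key configured.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",  # matches --model-path above
    messages=[{"role": "user",
               "content": "Write a Python function that checks whether a number is prime."}],
    max_tokens=512,
)
print(response.choices[0].message.content)

With the reasoning and tool-call parsers enabled above, thinking traces and tool calls are typically returned as separate fields on the message rather than as inline text; inspect the response object to see how they are split.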

Model Benchmarks

Each model was tested with a fixed input size and total token volume while concurrency was increased, measuring serving performance under load.

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
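
For reference, a minimal sketch of how such a sweep can be reproduced against the server above, using the OpenAI Python client. This is an illustrative harness, not the tool used to produce the charts: the prompt, concurrency levels, and token limits are placeholders, and streamed chunk counts only approximate generated token counts.

PYTHON
import time
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
PROMPT = "Summarize the benefits of expert parallelism in one paragraph."

def one_request(_):
    """Stream one completion; return (TTFT in seconds, chunks received)."""
    start = time.perf_counter()
    ttft, chunks = None, 0
    stream = client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M2.5",
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if ttft is None:
                ttft = time.perf_counter() - start  # time to first token
            chunks += 1
    return ttft, chunks

for concurrency in (1, 8, 32, 128):
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(concurrency)))
    elapsed = time.perf_counter() - t0
    ttfts = [t for t, _ in results if t is not None]
    total_chunks = sum(c for _, c in results)
    print(f"concurrency={concurrency:4d}  "
          f"avg TTFT={sum(ttfts) / len(ttfts):.3f}s  "
          f"~{total_chunks / elapsed:.1f} chunks/s")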
