
MiniMax M2

MiniMax-M2 is a large-scale Mixture-of-Experts (MoE) language model developed by MiniMaxAI and optimized for coding and agentic workflows. It has 229B total parameters with ~10B active per token, routing each token to 8 of its 256 experts. The model is built on a 62-layer transformer with a hidden size of 3072 and 48 attention heads (8 KV heads), and it supports a 200K-token context window. It is designed for efficient, low-latency deployment and excels at multi-step coding tasks, tool use, and long-horizon agent execution.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
License: Modified MIT
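
As a rough illustration of how this sparse routing keeps the active parameter count near 10B, the sketch below implements top-k expert gating in PyTorch. It is a minimal, illustrative sketch, not MiniMax-M2's actual routing code; the layer sizes, function name, and toy configuration are assumptions.

PYTHON
import torch

def topk_moe(x, gate_w, experts, k=8):
    # Score every expert for every token, then keep only the top-k
    # (MiniMax-M2 routes each token to 8 of its 256 experts).
    logits = x @ gate_w                                    # (tokens, num_experts)
    weights, idx = torch.topk(logits.softmax(dim=-1), k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen k

    # Only the selected experts run, so compute scales with k, not num_experts.
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                       # tokens whose slot-th pick is expert e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

# Toy usage: 4 experts, top-2 routing, hidden size 16 (all illustrative).
hidden, num_experts = 16, 4
experts = [torch.nn.Linear(hidden, hidden) for _ in range(num_experts)]
gate_w = torch.randn(hidden, num_experts)
y = topk_moe(torch.randn(10, hidden), gate_w, experts, k=2)
print(y.shape)  # torch.Size([10, 16])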

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e SGLANG_ENABLE_DEEPGEMM_UE8M0=false \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path MiniMaxAI/MiniMax-M2 \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --attention-backend flashinfer \
  --max-running-requests 1024 \
  --reasoning-parser minimax-append-think \
  --tool-call-parser minimax-m2 \
  --tp 8 \
  --ep 8 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
Note

Expert parallelism (--ep 8) must be enabled when deploying with tensor parallelism of 8 (--tp 8), as in the command above.
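
Once the container is up, the server exposes an OpenAI-compatible API on port 8000. The snippet below is a minimal smoke test using the openai Python client; the prompt is illustrative, and the placeholder api_key is ignored by a local server.

PYTHON
from openai import OpenAI

# Point the client at the local SGLang server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)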

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
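
To run a comparable measurement against your own deployment, the sketch below issues concurrent streaming requests and reports average TTFT and inter-token latency (ITL). It is an illustrative sketch, not the benchmark harness used for the charts above: it approximates one token per streamed chunk, and the model name, prompt, and concurrency level are assumptions.

PYTHON
import asyncio
import time
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request():
    start = time.perf_counter()
    first = last = None
    chunks = 0
    stream = await client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M2",
        messages=[{"role": "user", "content": "Explain Mixture-of-Experts in three sentences."}],
        max_tokens=256,
        stream=True,
    )
    async for _ in stream:                       # each chunk ~= one token (approximation)
        now = time.perf_counter()
        if first is None:
            first = now
        last = now
        chunks += 1
    ttft = first - start                         # time to first token
    itl = (last - first) / max(chunks - 1, 1)    # mean inter-token latency
    return ttft, itl

async def main(concurrency: int = 8):
    results = await asyncio.gather(*(one_request() for _ in range(concurrency)))
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    print(f"concurrency={concurrency}  avg TTFT={avg_ttft:.3f}s  avg ITL={avg_itl * 1000:.1f}ms")

asyncio.run(main())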

Vultr Cloud GPU

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.