
MiniMax M2.1

MiniMax M2.1 is a large-scale Mixture-of-Experts (MoE) language model developed by MiniMax with a focus on real-world agent reliability. It has 229B total parameters with ~10B active per token, built on a 62-layer transformer with 48 attention heads and a hidden size of 3,072, using 256 experts of which 8 are activated per token. It supports a 200K-token context window, making it well suited for long-horizon workflows, composite instructions, and stable performance across agent frameworks and production environments.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: 23 December 2025
License: Modified MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -e VLLM_MOE_USE_DEEP_GEMM=0 \
  -e VLLM_USE_FLASHINFER_MOE_FP8=0 \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  MiniMaxAI/MiniMax-M2.1 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --tool-call-parser minimax_m2 \
  --reasoning-parser minimax_m2_append_think \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
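
Once the container reports that the server is ready, the deployment exposes an OpenAI-compatible API on port 8000. The snippet below is a minimal smoke test using the openai Python client; the base URL, placeholder API key, prompt, and sampling settings are assumptions to adjust for your environment.

PYTHON
# Minimal smoke test against the vLLM server started above.
# Assumes the container is reachable at http://localhost:8000 and that the
# served model name matches the Hugging Face ID passed on the command line.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize expert parallelism in two sentences."},
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)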
Note

Enable expert parallelism for TP=8 deployments; set VLLM_USE_FLASHINFER_MOE_FP8=0 to bypass B200 FP8 MoE errors.
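
Because the server is launched with --enable-auto-tool-choice and --tool-call-parser minimax_m2, it accepts OpenAI-style tool definitions and returns parsed tool calls. The sketch below illustrates this; the get_weather schema is purely hypothetical and not part of the model or vLLM.

PYTHON
# Illustrative tool-calling request. The get_weather tool is a hypothetical
# example schema; any OpenAI-style function definition works the same way.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.1",
    messages=[{"role": "user", "content": "What's the weather in Amsterdam right now?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model chooses to call the tool, the parser surfaces it here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)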

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Benchmark charts: ITL vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Avg TTFT
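
The charts above come from a concurrency sweep against the serving endpoint. A rough sketch of that kind of sweep is shown below; the prompt, concurrency levels, and endpoint are assumptions, and a dedicated load-testing tool will produce more rigorous numbers than this simplified TTFT measurement.

PYTHON
# Rough sketch of the load sweep described above: fixed prompt size,
# increasing concurrency, measuring time-to-first-token (TTFT) per request.
# Endpoint, model name, prompt, and concurrency levels are assumptions.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
PROMPT = "Explain mixture-of-experts routing in detail. " * 50  # fixed input size

async def one_request() -> float:
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M2.1",
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=256,
        stream=True,
    )
    async for _ in stream:  # first streamed chunk approximates TTFT
        return time.perf_counter() - start
    return float("nan")

async def sweep() -> None:
    for concurrency in (1, 8, 32, 128):
        ttfts = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        avg = sum(ttfts) / len(ttfts)
        print(f"concurrency={concurrency:4d}  avg TTFT={avg:.3f}s")

asyncio.run(sweep())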

Deploy NVIDIA HGX B200 on Vultr Cloud GPU