
MiniMax M2.7

MiniMax M2.7 is a large-scale Mixture-of-Experts (MoE) language model developed by MiniMax for coding, agentic workflows, and enterprise automation tasks. The model has 229B total parameters with roughly 10B active per token, built on a 62-layer transformer with 48 attention heads, a hidden size of 3,072, and 256 experts of which 8 are routed per token. It supports a context window of up to 200K tokens and is optimized for autonomous agent teams, iterative self-improvement, and complex real-world task execution.
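
The reported architecture can be cross-checked against the model's published configuration, as in the minimal sketch below. The repo id MiniMaxAI/MiniMax-M2.7 is taken from the deploy command; the expert-related config field names are assumptions and may differ for this release.

PYTHON
# Minimal sketch for inspecting the published architecture. Assumes the model is
# hosted on Hugging Face as "MiniMaxAI/MiniMax-M2.7" and uses common config field
# names; the expert-related names in particular are assumptions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-M2.7", trust_remote_code=True)

print("layers:", config.num_hidden_layers)              # expected: 62
print("attention heads:", config.num_attention_heads)   # expected: 48
print("hidden size:", config.hidden_size)                # expected: 3072
# Expert-count fields vary by model; fall back to None if absent.
print("experts:", getattr(config, "num_local_experts", None))
print("experts per token:", getattr(config, "num_experts_per_tok", None))
print("max positions:", getattr(config, "max_position_embeddings", None))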
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: 18 March 2026
License: Modified MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.18.1 \
  MiniMaxAI/MiniMax-M2.7 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --tool-call-parser minimax_m2 \
  --reasoning-parser minimax_m2_append_think \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --trust-remote-code
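
Once the container is up, the server exposes an OpenAI-compatible API on port 8000. The sketch below sends a simple chat request with the official openai Python client; the endpoint URL and model id mirror the command above, and the placeholder API key assumes a locally hosted, unauthenticated server.

PYTHON
# Minimal client sketch, assuming the container above is serving on localhost:8000
# and no API key is enforced (the "EMPTY" key is a placeholder).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.7",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=1024,
)

message = response.choices[0].message
# With a reasoning parser enabled, vLLM may return the model's thinking as a
# separate field; fall back to None if the server does not populate it.
print(getattr(message, "reasoning_content", None))
print(message.content)

Because the server is started with --enable-auto-tool-choice and the minimax_m2 tool-call parser, the same endpoint also accepts function definitions in the standard OpenAI tools format.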

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.
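
As a rough illustration of how latency metrics like these can be collected, the sketch below streams a batch of concurrent requests against the local endpoint and records time to first token (TTFT) and average inter-token latency (ITL) per request. The concurrency level, prompt, and token budget are illustrative assumptions, not the settings used for the published benchmarks.

PYTHON
# Illustrative TTFT/ITL measurement against the local vLLM endpoint; the
# concurrency, prompt, and token budget are arbitrary, not the benchmark settings.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request():
    start = time.perf_counter()
    first, previous, gaps = None, None, []
    stream = await client.chat.completions.create(
        model="MiniMaxAI/MiniMax-M2.7",
        messages=[{"role": "user", "content": "Summarize the benefits of MoE models."}],
        max_tokens=256,
        stream=True,
    )
    async for chunk in stream:
        now = time.perf_counter()
        if first is None:
            first = now - start           # time to first token
        elif previous is not None:
            gaps.append(now - previous)   # inter-token latency samples
        previous = now
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return first, itl

async def main(concurrency: int = 8):
    results = await asyncio.gather(*(one_request() for _ in range(concurrency)))
    for i, (ttft, itl) in enumerate(results):
        print(f"request {i}: TTFT={ttft:.3f}s, avg ITL={itl * 1000:.1f}ms")

asyncio.run(main())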

Deploy NVIDIA HGX B200 on Vultr Cloud GPU