| Type | MoE LLM |
| --- | --- |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning (+5 more) |
| License | Modified MIT |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs using the command below.
CONSOLE
docker run -it --rm --runtime=nvidia --gpus all \
  --ipc=host --shm-size=128g \
  -p 8000:8000 \
  -e VLLM_MOE_USE_DEEP_GEMM=0 \
  -e VLLM_USE_FLASHINFER_MOE_FP8=0 \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  MiniMaxAI/MiniMax-M2.1 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --tool-call-parser minimax_m2 \
  --reasoning-parser minimax_m2_append_think \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
Note
Enable expert parallelism for TP=8 deployments, and set VLLM_USE_FLASHINFER_MOE_FP8=0 to work around FP8 MoE errors on B200 GPUs; both are already included in the command above.
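Once the container reports readiness, the server exposes an OpenAI-compatible API on port 8000. A minimal smoke test against the chat completions endpoint (the prompt and max_tokens value are illustrative):
CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2.1",
    "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    "max_tokens": 256
  }'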
Model Benchmarks
Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
[Charts: inter-token latency (ITL) vs. concurrency · time to first token (TTFT) · throughput scaling · total tokens/sec vs. average TTFT]
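A comparable concurrency sweep can be run against the deployed server with vLLM's bundled benchmark client. This is a sketch, assuming a recent vLLM release that ships the `vllm bench serve` subcommand; the input/output lengths, request counts, and concurrency levels are illustrative, not the exact settings behind the charts above.
CONSOLE
# Illustrative sweep: fixed random input/output lengths, rising concurrency.
for c in 1 8 32 128; do
  vllm bench serve \
    --model MiniMaxAI/MiniMax-M2.1 \
    --host localhost --port 8000 \
    --dataset-name random \
    --random-input-len 1024 --random-output-len 1024 \
    --num-prompts $((c * 8)) \
    --max-concurrency $c
done
Each run reports TTFT, ITL, and token throughput, which correspond to the metrics charted above.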
Deploy NVIDIA HGX B200 on Vultr Cloud GPU
