| Attribute | Details |
|---|---|
| Type | MoE LLM |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +5 more |
| Group Release Date | April 6, 2026 |
| License | MIT |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs using the command below.
```console
docker run -it --rm --runtime=nvidia --gpus all \
  --ipc=host --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:glm51-cu130 \
  zai-org/GLM-5.1-FP8 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --gpu-memory-utilization 0.90 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --trust-remote-code
```
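Once the container is up, it exposes an OpenAI-compatible API on port 8000. A minimal smoke test with curl; the prompt and sampling parameters below are placeholders, not recommended settings:

```console
# List served models to confirm the server is ready.
curl http://localhost:8000/v1/models

# Send a chat completion request to the OpenAI-compatible endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "zai-org/GLM-5.1-FP8",
        "messages": [{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}],
        "max_tokens": 256,
        "temperature": 0.6
      }'
```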
Note
To serve this model, use the vllm/vllm-openai:glm51-cu130 Docker image (built for CUDA 13.0+) or any image that ships vLLM v0.19.0 or later.
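If you prefer to run vLLM without Docker, the same flags should carry over to the `vllm serve` CLI. A sketch, assuming vLLM v0.19.0+ is installed on a host with eight B200 GPUs:

```console
# Install a vLLM version that meets the minimum noted above.
pip install "vllm>=0.19.0"

# Serve the model directly; the flags mirror the Docker command.
vllm serve zai-org/GLM-5.1-FP8 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --gpu-memory-utilization 0.90 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --trust-remote-code
```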
Model Benchmarks
Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
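A sweep like this can be reproduced with vLLM's bundled serving benchmark against the server started above. A sketch using the random dataset; the input/output lengths, prompt count, and concurrency levels here are illustrative, not the settings behind the charts below, and flag names may vary across vLLM versions:

```console
# Hold input/output shape and total prompt count fixed; sweep concurrency.
for c in 1 8 32 128 512; do
  vllm bench serve \
    --base-url http://localhost:8000 \
    --model zai-org/GLM-5.1-FP8 \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 256 \
    --num-prompts 1024 \
    --max-concurrency "$c"
done
```

Each run reports time to first token, inter-token latency, and token throughput, which are the quantities plotted below.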
[Charts: Inter-Token Latency (ITL) vs Concurrency; Time to First Token (TTFT); Throughput Scaling; Total Tokens/sec vs Avg TTFT]
Deploy NVIDIA B200 on Vultr Cloud GPU
