
GLM 5.1 FP8

GLM 5.1 is a large-scale Mixture-of-Experts (MoE) language model and a successor to GLM-5 with improved capabilities and performance, designed for long-context reasoning, coding, and agentic workflows. The model has 744B total parameters with 40B activated, using a 78-layer transformer with 6,144 hidden size and 64 attention heads. It routes tokens across 256 experts with top-8 selection for efficient sparse computation. The model supports up to ~202K context length with RoPE scaling and uses FP8 (e4m3) quantization to reduce memory and improve throughput while maintaining strong multi-step reasoning performance.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: April 6, 2026
License: MIT
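
The top-8 routing described above can be sketched in a few lines. The snippet below is a generic top-k gating routine using the dimensions from this card (256 routed experts, top-8 selection, 6,144 hidden size); it illustrates the mechanism and is not GLM 5.1's actual implementation.

PYTHON
# Generic top-k expert routing sketch (not GLM 5.1's real code).
import numpy as np

NUM_EXPERTS = 256   # routed experts per MoE layer
TOP_K = 8           # experts selected per token
HIDDEN = 6144       # model hidden size

def route_tokens(hidden, router_w):
    """Select TOP_K experts per token and renormalize their gate weights."""
    logits = hidden @ router_w                          # (tokens, experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)               # softmax over experts
    topk_idx = np.argsort(probs, axis=-1)[:, -TOP_K:]   # 8 best experts/token
    topk_p = np.take_along_axis(probs, topk_idx, axis=-1)
    topk_p /= topk_p.sum(-1, keepdims=True)             # renormalized gates
    return topk_idx, topk_p

tokens = np.random.randn(4, HIDDEN).astype(np.float32)
router = (np.random.randn(HIDDEN, NUM_EXPERTS) * 0.02).astype(np.float32)
idx, gates = route_tokens(tokens, router)
print(idx.shape, gates.shape)   # (4, 8) (4, 8)

Only the selected experts' feed-forward networks run for each token, which is how a 744B-parameter model activates roughly 40B parameters per forward pass.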

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:glm51-cu130 \
  zai-org/GLM-5.1-FP8 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --gpu-memory-utilization 0.90 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --trust-remote-code
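
Once the container is up, vLLM serves an OpenAI-compatible API on port 8000. A minimal client call with the openai Python package is sketched below; the placeholder API key follows vLLM's convention of accepting any value unless the server is launched with an explicit key.

PYTHON
# Minimal client sketch against the server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-5.1-FP8",
    messages=[{"role": "user", "content": "Explain FP8 e4m3 in one sentence."}],
    max_tokens=256,
)
print(response.choices[0].message.content)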
Note

For serving this model, it is recommended to use the vllm/vllm-openai:glm51-cu130 Docker image (CUDA 13.0+) or any image with vLLM v0.19.0 or later.

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
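
A simplified version of this methodology is sketched below: an illustrative load generator (an assumed harness, not the one used for these results) that issues concurrent streaming requests and estimates TTFT and mean inter-token latency (ITL) from chunk arrival times, treating each streamed chunk as roughly one token.

PYTHON
# Illustrative concurrency sweep against the vLLM server started above.
import asyncio
import time

from openai import AsyncOpenAI

MODEL = "zai-org/GLM-5.1-FP8"

async def one_request(client, prompt: str):
    """Time one streaming request: TTFT and mean inter-chunk gap (~ITL)."""
    start = time.perf_counter()
    first = last = None
    n_chunks = 0
    stream = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
        stream=True,
    )
    async for _chunk in stream:
        now = time.perf_counter()
        if first is None:
            first = now                      # first streamed chunk -> TTFT
        last = now
        n_chunks += 1
    ttft = first - start
    itl = (last - first) / max(n_chunks - 1, 1)
    return ttft, itl

async def sweep(concurrency: int):
    async with AsyncOpenAI(
        base_url="http://localhost:8000/v1", api_key="EMPTY"
    ) as client:
        results = await asyncio.gather(
            *(one_request(client, "Explain MoE routing briefly.")
              for _ in range(concurrency))
        )
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    print(f"concurrency={concurrency:4d}  avg TTFT={avg_ttft:.3f}s  "
          f"avg ITL={avg_itl * 1000:.1f}ms")

for c in (1, 8, 32, 128):
    asyncio.run(sweep(c))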

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA B200 on Vultr Cloud GPU