
GLM 4.7 Flash

GLM 4.7 Flash is a lightweight Mixture-of-Experts (MoE) transformer language model, designed as an efficient variant of GLM-4.7 that provides strong coding, reasoning, and agentic capabilities for lightweight deployment. The model has 30B total parameters with ~3B active parameters, built on a 47-layer transformer with 20 attention heads and a hidden size of 2,048. It employs 64 routed experts with 4 experts activated per token, keeping per-token compute low while preserving capacity for complex tasks. The model supports a ~202K token context window, enabling long-horizon agent interactions and multi-step reasoning.
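The routing scheme described above (4 of 64 experts activated per token) can be sketched as a top-k softmax gate. This is a minimal illustration of the general MoE routing idea under that configuration, not the model's actual router implementation:

```python
import math
import random

def topk_gate(logits, k=4):
    """Select the top-k experts for one token and softmax-normalize
    their router scores so the combination weights sum to 1.

    `logits`: one router score per expert for a single token.
    Returns a list of (expert_index, weight) pairs.
    """
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)                     # subtract max for stability
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(64)]        # 64 routed experts
routes = topk_gate(logits, k=4)                         # 4 active per token
assert len(routes) == 4
assert abs(sum(w for _, w in routes) - 1.0) < 1e-9
```

Only the selected experts run their feed-forward pass for that token, which is why the active parameter count (~3B) is far below the total (30B).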
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning +5 more
Group Release Date: December 21, 2025
License: MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:glm5 \
  zai-org/GLM-4.7-Flash \
  --tensor-parallel-size 4 \
  --max-model-len auto \
  --gpu-memory-utilization 0.95 \
  --max-num-batched-tokens 65536 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
Note

The model architecture is supported in the vllm/vllm-openai:glm5 image and later versions. Because the model has 20 attention heads and 20 KV heads, the maximum supported tensor-parallel size is 4.
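Once the container is running, vLLM serves an OpenAI-compatible API on port 8000. The sketch below assembles a chat-completions request against that endpoint; the `base_url` assumes the server is reachable on localhost, and the model name matches the one served by the command above:

```python
import json
import urllib.request

def build_chat_request(prompt, model="zai-org/GLM-4.7-Flash",
                       base_url="http://localhost:8000"):
    """Assemble an OpenAI-style chat-completions request for the vLLM server."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Write a haiku about MoE routing.")
# To send once the server is up:
#   resp = json.load(urllib.request.urlopen(req))
#   print(resp["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (e.g. the `openai` Python SDK pointed at `http://localhost:8000/v1`) works the same way.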

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

[Charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT]
