
GLM 4.7

GLM 4.7 is a Mixture-of-Experts (MoE) large language model developed by Zhipu AI, designed for advanced coding, reasoning, and agent-driven development workflows. The model has 358B total parameters on a 92-layer transformer architecture with 96 attention heads and a hidden size of 5,120, using 160 routed experts with 8 activated per token to scale performance efficiently on complex tasks. It supports a context window of roughly 202K tokens and introduces Interleaved Thinking, Preserved Thinking, and Turn-level Thinking for more stable and controllable execution of complex tasks.
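
The expert routing described above follows the standard top-k MoE pattern: a learned gate scores all 160 experts for each token and dispatches the token to the 8 highest-scoring ones. The sketch below illustrates that pattern with the dimensions quoted above; it is a generic illustration, not GLM 4.7's actual routing code, and all names in it are hypothetical.

PYTHON
import torch

# Generic top-k MoE gating sketch using the dimensions quoted above:
# 160 routed experts, 8 active per token, hidden size 5,120.
NUM_EXPERTS, TOP_K, HIDDEN = 160, 8, 5120

router = torch.nn.Linear(HIDDEN, NUM_EXPERTS, bias=False)  # learned gate

def route(tokens: torch.Tensor):
    """tokens: (batch, HIDDEN) -> expert ids and mixing weights per token."""
    logits = router(tokens)                           # (batch, NUM_EXPERTS)
    weights, expert_ids = logits.topk(TOP_K, dim=-1)  # keep the best 8 of 160
    weights = torch.softmax(weights, dim=-1)          # normalize over the chosen 8
    # Each token's layer output is the weighted sum of its 8 experts' outputs.
    return expert_ids, weights

ids, w = route(torch.randn(4, HIDDEN))
print(ids.shape, w.shape)  # torch.Size([4, 8]) torch.Size([4, 8])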
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: 22 December 2025
License: MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  zai-org/GLM-4.7 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --gpu-memory-utilization 0.95 \
  --max-num-batched-tokens 65536 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
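
Once the container is up, it serves an OpenAI-compatible API on port 8000, so any standard client can query it. Below is a minimal sketch using the openai Python package; the endpoint URL follows from the -p 8000:8000 mapping above, and the prompt is illustrative.

PYTHON
from openai import OpenAI

# The vllm/vllm-openai image exposes an OpenAI-compatible server on the mapped port.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="zai-org/GLM-4.7",  # must match the model the server was launched with
    messages=[{"role": "user", "content": "Reverse a string in Python."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
# With --reasoning-parser enabled, vLLM also returns the separated
# thinking trace as resp.choices[0].message.reasoning_content.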

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
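
As a rough sketch of that methodology, the loop below sends a fixed-size prompt at increasing concurrency levels and records time to first token (TTFT) from the streamed responses. It assumes the vLLM endpoint launched above; the concurrency levels, prompt, and token counts are placeholders, not the harness behind the published numbers.

PYTHON
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request(prompt: str) -> float:
    """Return time to first token, in seconds, for one streamed request."""
    start = time.perf_counter()
    ttft = float("nan")
    stream = await client.chat.completions.create(
        model="zai-org/GLM-4.7",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
        stream=True,
    )
    async for _ in stream:  # the first streamed chunk marks TTFT
        ttft = time.perf_counter() - start
        break
    await stream.close()
    return ttft

async def sweep() -> None:
    prompt = "x " * 512  # fixed input size, per the methodology above
    for concurrency in (1, 4, 16, 64):
        ttfts = await asyncio.gather(*(one_request(prompt) for _ in range(concurrency)))
        print(f"concurrency={concurrency:3d}  avg TTFT={sum(ttfts) / len(ttfts):.3f}s")

asyncio.run(sweep())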

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA B200 on Vultr Cloud GPU