
GLM 4.6

GLM 4.6 is a Mixture-of-Experts (MoE) large language model developed by Z.ai for advanced coding, reasoning, and agentic workflows. The model features a 357B-parameter architecture built on a 92-layer transformer with 96 attention heads and a hidden size of 5,120, using 160 routed experts with 8 experts activated per token for efficient scaling. It supports a ~200K-token context window, enabling long-horizon reasoning and complex multi-step agent interactions.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: 30 September 2025
License: MIT
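The routed-expert design described above (160 routed experts, 8 active per token) can be illustrated with a minimal top-k gating sketch. This is not the model's actual router implementation; the shapes and names below are chosen only to mirror the figures quoted in the description.

```python
import numpy as np

def route_tokens(hidden, gate_weights, top_k=8):
    """Illustrative sketch of top-k MoE routing: score all experts per
    token, keep the top_k, and renormalize their scores into weights."""
    # Router logits: one score per expert for each token.
    logits = hidden @ gate_weights                      # (tokens, num_experts)
    # Indices of the top_k highest-scoring experts per token.
    top_idx = np.argsort(logits, axis=-1)[:, -top_k:]   # (tokens, top_k)
    # Softmax over only the selected experts, so weights sum to 1 per token.
    top_logits = np.take_along_axis(logits, top_idx, axis=-1)
    exp = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    weights = exp / exp.sum(axis=-1, keepdims=True)
    return top_idx, weights

rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 5120))     # 4 tokens, hidden size 5,120
gate = rng.standard_normal((5120, 160))     # hypothetical router for 160 experts
idx, w = route_tokens(hidden, gate)
print(idx.shape, w.shape)   # (4, 8) (4, 8)
```

Each token's output is then a weighted sum of its 8 selected experts' outputs, which is what lets a 357B-parameter model activate only a fraction of its weights per token.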

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  zai-org/GLM-4.6 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --gpu-memory-utilization 0.90 \
  --max-num-batched-tokens 65536 \
  --tool-call-parser glm45 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
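Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. A minimal sketch of a chat request against that endpoint, assuming the server above is reachable on localhost (the prompt and sampling parameters are illustrative):

```python
import json
from urllib import request

def build_chat_request(prompt, model="zai-org/GLM-4.6"):
    """Build an OpenAI-style chat completion payload for the vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.6,
    }

def query(payload, url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to the vLLM OpenAI-compatible endpoint."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request("Write a Python function that reverses a string.")
# query(payload) would return the model's reply once the container is serving.
print(payload["model"])   # zai-org/GLM-4.6
```

The same endpoint also works with the official `openai` Python client by pointing its `base_url` at `http://localhost:8000/v1`.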

Model Benchmarks

Each model was tested with a fixed input size and total token volume while concurrency was increased, measuring serving performance under load.
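The metrics reported in these benchmarks derive from per-token arrival times. A minimal sketch (not the benchmark harness used here) of how time to first token (TTFT) and inter-token latency (ITL) fall out of recorded timestamps:

```python
def latency_metrics(request_start, token_times):
    """Compute TTFT and mean ITL (seconds) from token arrival timestamps.
    TTFT is the delay to the first token; ITL is the average gap between
    consecutive tokens in the streamed response."""
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, itl

# Hypothetical stream: first token at 0.35 s, then a token every ~20 ms.
ttft, itl = latency_metrics(0.0, [0.35, 0.37, 0.39, 0.41])
print(round(ttft, 2), round(itl, 3))   # 0.35 0.02
```

Under rising concurrency, throughput (total tokens/sec across all requests) typically climbs until the batch is compute- or memory-bound, while TTFT and ITL grow as requests queue, which is the trade-off the charts below capture.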

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU