
Qwen3 32B

Qwen3 32B is the largest dense causal language model in the Qwen3 family, built on a 64-layer architecture with 31.2B non-embedding parameters to support high-complexity reasoning and high-fidelity generation at scale. It provides a native 32,768-token context window that can be extended to 131,072 tokens with YaRN, enabling robust handling of long-form documents, extended analytical chains, and deeply nested multi-turn interactions. With Qwen3's hybrid thinking design, the model balances deliberate multi-step reasoning with efficient general-purpose generation.
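
Note that the 131,072-token window is not active by default. Qwen's published guidance enables YaRN by passing a rope_scaling override to the serving engine at launch; the line below is a minimal sketch of that override for SGLang's --json-model-override-args flag, where the scaling factor of 4.0 follows Qwen's generic example rather than a value stated on this page. Append it to the launch command in the Inference Instructions section if the extended window is needed.

CONSOLE
  --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'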
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: 29 April 2025
License: Apache 2.0

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 64g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-32B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --max-running-requests 1024 \
  --enable-piecewise-cuda-graph \
  --tp 8 \
  --mem-fraction-static 0.95 \
  --attention-backend flashinfer \
  --trust-remote-code
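
Once the container reports that the server is ready, sglang.launch_server exposes an OpenAI-compatible HTTP API on the published port. The request below is a minimal sketch for verifying the deployment; the prompt and max_tokens value are illustrative placeholders.

CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3-32B", "messages": [{"role": "user", "content": "Summarize the benefits of a 32K-token context window in two sentences."}], "max_tokens": 128}'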

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
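
A comparable load profile can be generated against the running server with SGLang's bundled serving benchmark. The command below is a sketch only: the random input/output lengths, prompt count, and concurrency level are illustrative assumptions, not the exact settings behind the charts that follow.

CONSOLE
python3 -m sglang.bench_serving \
  --backend sglang \
  --host 127.0.0.1 \
  --port 8000 \
  --dataset-name random \
  --random-input-len 1024 \
  --random-output-len 1024 \
  --num-prompts 512 \
  --max-concurrency 64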

Reported charts: Inter-Token Latency (ITL) vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Average TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.