Qwen3 235B A22B Thinking 2507

Qwen3-235B-A22B-Thinking-2507 is the flagship reasoning-optimized Mixture-of-Experts (MoE) causal language model in the Qwen3 family, built with 235B total parameters and 22B activated per forward pass, and designed for advanced analytical and research-grade workloads. The checkpoint operates exclusively in thinking mode: it is trained to generate explicit reasoning traces during inference, enabling deeper multi-step problem solving across mathematics, science, coding, and academic benchmarks. It supports a native 262,144-token context window, allowing sustained reasoning over long documents, complex datasets, and extended multi-source inputs without context degradation.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: July 25, 2025
License: Apache 2.0

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -e VLLM_USE_DEEP_GEMM=1 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e VLLM_USE_FLASHINFER_MOE_FP16=1 \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  Qwen/Qwen3-235B-A22B-Thinking-2507 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
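
Once the container is up, it exposes vLLM's OpenAI-compatible API on port 8000. Below is a minimal client sketch, assuming the server launched above is reachable at localhost:8000; the prompt and sampling settings are illustrative, not prescribed. Because this checkpoint operates exclusively in thinking mode, the completion carries an explicit reasoning trace, which the snippet separates from the final answer.

PYTHON
# Minimal client sketch for the vLLM server launched above.
# Assumes the container is serving on localhost:8000 and that no
# reasoning parser was configured server-side.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Thinking-2507",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
    max_tokens=4096,
    temperature=0.6,
)

content = resp.choices[0].message.content
# Thinking-mode output terminates its reasoning trace with </think>;
# split it off to keep only the final answer. If vLLM was started with
# a reasoning parser, the trace arrives in a separate reasoning_content
# field instead.
reasoning, sep, answer = content.partition("</think>")
print(answer.strip() if sep else content.strip())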

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
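
A minimal version of that methodology can be reproduced against the OpenAI-compatible endpoint: issue identical streaming requests at increasing concurrency levels and record the time to first token for each. The sketch below is illustrative only; the endpoint, prompt, token counts, and concurrency steps are assumptions, not the harness used to produce the charts.

PYTHON
# Illustrative TTFT-under-concurrency probe; not the benchmark harness
# behind the charts. Assumes the vLLM server above on localhost:8000.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "Qwen/Qwen3-235B-A22B-Thinking-2507"

async def one_request() -> float:
    """Return seconds from request submission to the first streamed chunk."""
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "State the Pythagorean theorem."}],
        max_tokens=256,
        stream=True,
    )
    async for _ in stream:  # the first chunk marks time to first token
        break
    ttft = time.perf_counter() - start
    await stream.close()  # drop the connection; we only need the first chunk
    return ttft

async def main() -> None:
    for concurrency in (1, 4, 16, 64):
        ttfts = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        print(f"concurrency={concurrency:>3}  avg TTFT={sum(ttfts)/len(ttfts):.3f}s")

asyncio.run(main())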

Charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.

Vultr Cloud GPU

Deploy this model on NVIDIA HGX B200 GPUs with Vultr Cloud GPU.