
Qwen3 30B A3B Thinking 2507

Qwen3-30B-A3B-Thinking-2507 is a reasoning-focused Mixture-of-Experts (MoE) causal language model designed for high-complexity analytical workloads that require explicit, step-by-step deliberation. The model operates exclusively in thinking mode and is trained to expose its internal reasoning process during generation, making it well suited for advanced tasks in mathematics, science, coding, and academic problem solving. It supports a native 262,144-token context window, enabling deep reasoning over long documents, large codebases, and multi-source inputs without losing coherence.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: July 30, 2025
License: Apache 2.0

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below, which starts vLLM's OpenAI-compatible server on port 8000. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -e VLLM_USE_DEEP_GEMM=1 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e VLLM_USE_FLASHINFER_MOE_FP16=1 \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  Qwen/Qwen3-30B-A3B-Thinking-2507 \
  --tensor-parallel-size 8 \
  --enable-expert-parallel \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
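
Once the container is running, the server exposes an OpenAI-compatible API on port 8000. The request below is a minimal sketch: the prompt and sampling parameters are illustrative, not prescribed settings. Because the model operates exclusively in thinking mode, the completion includes the step-by-step reasoning (typically closed by a </think> tag) ahead of the final answer.

CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Qwen/Qwen3-30B-A3B-Thinking-2507",
    "messages": [
      {"role": "user", "content": "How many positive divisors does 360 have?"}
    ],
    "max_tokens": 4096,
    "temperature": 0.6,
    "top_p": 0.95
  }'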

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
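
A sweep of this shape can be reproduced with vLLM's built-in serving benchmark against the server started above. The command below is a sketch: the random dataset, token lengths, prompt count, and concurrency value are illustrative assumptions, not the exact harness used for these charts.

CONSOLE
vllm bench serve \
  --base-url http://localhost:8000 \
  --model Qwen/Qwen3-30B-A3B-Thinking-2507 \
  --dataset-name random \
  --random-input-len 1024 \
  --random-output-len 1024 \
  --num-prompts 512 \
  --max-concurrency 64

Rerunning the benchmark with increasing --max-concurrency values while holding the input and output token counts fixed produces the latency and throughput curves summarized below.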

[Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT]

Deploy NVIDIA B200 on Vultr Cloud GPU