Qwen3 4B

Qwen3-4B is a mid-scale causal language model built on a 36-layer dense architecture with 3.6B non-embedding parameters, positioned as a balanced option for workloads that need stronger reasoning depth without sacrificing serving efficiency. It supports a native 32,768-token context window, extendable to 131,072 tokens with YaRN, enabling reliable performance on long-form documents, multi-step analysis, and extended conversational workflows. With Qwen3's hybrid thinking design, the model switches smoothly between deliberate step-by-step reasoning and fast, general-purpose generation.
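
For runs beyond the native 32,768-token window, the Qwen3 model card recommends YaRN rope scaling with a factor of 4.0. The launch sketch below shows one way to pass that to SGLang; the --context-length and --json-model-override-args flags and the rope_scaling values mirror the Qwen3 documentation, but treat them as assumptions to verify against your installed SGLang version. Note that static YaRN scaling applies regardless of input length, so the Qwen team advises enabling it only when long contexts are actually needed.

CONSOLE
# Sketch: serve Qwen/Qwen3-4B with a 131,072-token context via YaRN rope scaling.
# Flag names and rope_scaling values are assumptions taken from the Qwen3 docs.
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-4B \
  --context-length 131072 \
  --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'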
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: April 28, 2025
License: Apache 2.0

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below.

CONSOLE
docker run --gpus all \
  --shm-size 64g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-4B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 32768 \
  --max-running-requests 1024 \
  --enable-piecewise-cuda-graph \
  --tp 8 \
  --mem-fraction-static 0.95 \
  --attention-backend flashinfer \
  --trust-remote-code
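
Once the container is running, the server exposes SGLang's OpenAI-compatible API on port 8000. A quick sanity check with curl (the prompt and token limit here are illustrative):

CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-4B",
    "messages": [{"role": "user", "content": "Summarize the benefits of a 32K context window."}],
    "max_tokens": 256
  }'

Qwen3's hybrid thinking mode can typically be toggled per request, for example via a chat_template_kwargs field with enable_thinking set to false, though support for that field depends on the serving stack and version.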

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
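
A load test of this shape can be reproduced with SGLang's bundled serving benchmark. The sketch below is illustrative only; the input/output lengths, prompt count, and concurrency are assumptions, not the exact settings behind the charts.

CONSOLE
# Sketch: random-prompt load test against the running server.
# Lengths, prompt count, and concurrency below are assumed values.
python3 -m sglang.bench_serving \
  --backend sglang \
  --host 127.0.0.1 \
  --port 8000 \
  --dataset-name random \
  --random-input-len 1024 \
  --random-output-len 512 \
  --num-prompts 1000 \
  --max-concurrency 64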

Benchmark charts (interactive on the original page): ITL (inter-token latency) vs. concurrency; time to first token (TTFT); throughput scaling; total tokens/sec vs. average TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU