
Qwen3 14B

Qwen3-14B is a large-scale causal language model built on a 40-layer architecture with 13.2B non-embedding parameters, designed for workloads that require deeper reasoning capacity and higher generation fidelity on complex tasks. It supports a native 32,768-token context window, extensible to 131,072 tokens with YaRN, enabling effective processing of long-form documents, extended analytical prompts, and multi-stage conversational workflows. Leveraging Qwen3’s hybrid thinking design, the model switches smoothly between deliberate, multi-step reasoning and efficient general-purpose generation.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning (+4 more)
Release Date: April 28, 2025
License: Apache 2.0
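To use the extended 131,072-token context mentioned above, the upstream Qwen3 documentation describes enabling YaRN by adding a `rope_scaling` block to the model's `config.json` (or an equivalent rope-scaling override in the serving engine). A minimal sketch, with the factor of 4.0 corresponding to scaling the 32,768-token native window to 131,072 tokens; verify the exact field names against the Qwen3 model card for your vLLM version:

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Note that static YaRN scaling applies even to short inputs, so it is generally recommended only when long-context processing is actually needed.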

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=64g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  Qwen/Qwen3-14B \
    --tensor-parallel-size 8 \
    --max-model-len auto \
    --max-num-batched-tokens 65536 \
    --gpu-memory-utilization 0.95 \
    --max-num-seqs 1024 \
    --trust-remote-code
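Once the container is serving, requests go to the standard OpenAI-compatible chat completions endpoint on port 8000. A minimal client sketch using only the standard library (the payload shape follows the OpenAI-compatible API; host, port, and sampling values here are illustrative assumptions, not part of the deployment command above):

```python
import json
import urllib.request

# Build a chat completions payload for the vLLM OpenAI-compatible server
# started by the docker command above (assumed reachable on localhost:8000).
payload = {
    "model": "Qwen/Qwen3-14B",
    "messages": [
        {"role": "user", "content": "Summarize YaRN context extension in two sentences."}
    ],
    "max_tokens": 256,
    "temperature": 0.6,
    # Qwen3's hybrid thinking can typically be toggled via the chat template,
    # e.g.: "chat_template_kwargs": {"enable_thinking": False},
}
body = json.dumps(payload).encode("utf-8")
print(body.decode("utf-8"))

# Uncomment to send the request to a running server:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```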

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
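For reference, the latency and throughput metrics in these charts are conventionally derived from per-token arrival timestamps of streamed responses. A minimal sketch of the standard definitions (the timestamp values are illustrative, not measured results from this benchmark):

```python
# Derive TTFT, ITL, and throughput from per-token arrival timestamps
# for a single streamed request. Times are illustrative, in seconds.
request_start = 0.00
token_times = [0.35, 0.39, 0.43, 0.48, 0.52]  # arrival time of each output token

# Time to First Token: delay until the first output token arrives.
ttft = token_times[0] - request_start

# Inter-Token Latency: mean gap between consecutive output tokens.
gaps = [b - a for a, b in zip(token_times, token_times[1:])]
itl = sum(gaps) / len(gaps)

# Throughput: output tokens per second over the request's duration.
throughput = len(token_times) / (token_times[-1] - request_start)

print(f"TTFT: {ttft:.3f}s  ITL: {itl:.4f}s  Throughput: {throughput:.2f} tok/s")
```

Aggregate serving benchmarks compute these per request, then report averages or percentiles across all concurrent requests.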

Benchmark charts (interactive figures on the original page): ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.