
Qwen3.5 9B

Qwen3.5-9B is a multimodal causal language model with a 9B-parameter architecture. It uses a 32-layer transformer with 16 attention heads, 4 KV heads, a hidden size of 4,096, and an intermediate dimension of 12,288. The model supports a native 262K-token context window, extendable to ~1M tokens, and includes a 27-layer vision encoder with a hidden size of 1,152. It combines Gated DeltaNet and attention layers, delivering efficient long-context reasoning, strong multimodal performance, and broad multilingual coverage across 200+ languages.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 6 more
License: Apache 2.0
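
The architecture numbers above can be sanity-checked against the model's Hugging Face config. A minimal sketch, assuming the Qwen/Qwen3.5-9B repo publishes a standard config.json with the usual field names (the attribute names are assumptions, not confirmed by this page):

from transformers import AutoConfig

# Load the published config; trust_remote_code matches the serving flag below.
cfg = AutoConfig.from_pretrained("Qwen/Qwen3.5-9B", trust_remote_code=True)

print(cfg.num_hidden_layers)        # expected: 32
print(cfg.num_attention_heads)      # expected: 16
print(cfg.num_key_value_heads)      # expected: 4
print(cfg.hidden_size)              # expected: 4096
print(cfg.intermediate_size)        # expected: 12288
print(cfg.max_position_embeddings)  # expected: 262144 (native context)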

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.17.1 \
  Qwen/Qwen3.5-9B \
  --tensor-parallel-size 8 \
  --mm-encoder-tp-mode data \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --trust-remote-code
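
Once the container is up, the server exposes an OpenAI-compatible API on port 8000. A minimal client sketch, assuming the openai Python package and the default localhost port mapping from the command above:

from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Summarize the benefits of long-context models."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)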
Note

The model is served in multimodal mode by default; to restrict the engine to text-only processing, pass the --language-model-only flag.
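
Because multimodal mode is the default, a request can mix image and text content. A hedged sketch using the OpenAI-style image_url message format that vLLM's server accepts (the image URL is a placeholder, not an example from this page):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[{
        "role": "user",
        "content": [
            # Placeholder image URL; replace with a reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
    max_tokens=128,
)
print(resp.choices[0].message.content)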

Model Benchmarks

Each model was benchmarked with a fixed input size and total token volume while concurrency was increased, measuring serving performance under load.
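
For illustration only, that pattern looks roughly like the asyncio sketch below. The endpoint, prompt size, and concurrency levels are assumptions for the sketch, not the harness used to produce the charts that follow:

import asyncio, time
from openai import AsyncOpenAI

PROMPT = "word " * 1024  # fixed-size input, per the methodology above

async def one_request(client):
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model="Qwen/Qwen3.5-9B",
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
        stream=True,
    )
    ttft = None
    async for _chunk in stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first streamed chunk
    return ttft

async def run(concurrency: int):
    client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    ttfts = await asyncio.gather(*(one_request(client) for _ in range(concurrency)))
    ttfts = [t for t in ttfts if t is not None]
    print(f"concurrency={concurrency:4d}  avg TTFT={sum(ttfts)/len(ttfts):.3f}s")

for c in (1, 8, 32, 128):
    asyncio.run(run(c))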

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
