
Qwen3.5 4B

Qwen3.5-4B is a multimodal causal language model built on a 4B-parameter architecture. It uses a 32-layer transformer with 16 attention heads, 4 KV heads, a hidden size of 2,560, and an intermediate size of 9,216. The model supports a native 262K-token context window, extendable beyond 1M tokens, and integrates a 24-layer vision encoder with a hidden size of 1,024. It combines Gated DeltaNet and attention layers, enabling efficient long-context reasoning, strong multimodal understanding, and broad multilingual capability across 200+ languages.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 6 more
Group Release Date: February 15, 2026
License: Apache 2.0
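
The architecture described above can be checked programmatically once the checkpoint is available. The snippet below is a minimal sketch, assuming the model is published on Hugging Face under the same Qwen/Qwen3.5-4B identifier used in the serving command below; the configuration field names follow the usual transformers conventions and may differ for this architecture.

PYTHON
from transformers import AutoConfig

# Sketch: inspect the published configuration to confirm the figures above.
# Assumes the checkpoint is available under "Qwen/Qwen3.5-4B"; field names
# are the standard transformers conventions and may differ for this model.
config = AutoConfig.from_pretrained("Qwen/Qwen3.5-4B", trust_remote_code=True)

print(config.num_hidden_layers)        # expected: 32 transformer layers
print(config.num_attention_heads)      # expected: 16 attention heads
print(config.num_key_value_heads)      # expected: 4 KV heads (grouped-query attention)
print(config.hidden_size)              # expected: 2,560
print(config.intermediate_size)        # expected: 9,216
print(config.max_position_embeddings)  # native context window (~262K tokens)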

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.9 \
  python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3.5-4B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --max-running-requests 1024 \
  --tp 8 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
Note

The model is served in multimodal mode by default; to restrict the engine to text-only processing, use the --language-only flag.
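
Once the container is running, the server exposes an OpenAI-compatible API on port 8000. The snippet below is a minimal sketch of a multimodal chat request against that endpoint; the image URL and prompt are placeholders, and the model name should match the --model-path passed above. If the server was started with --language-only, drop the image part and send plain text instead.

PYTHON
from openai import OpenAI

# Sketch: query the SGLang server started above through its OpenAI-compatible
# API. The base URL assumes the container is reachable on localhost:8000;
# the image URL and prompt are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-4B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sample.jpg"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)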

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
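
As a rough illustration of that methodology, the sketch below streams a batch of concurrent requests against the server started above and records time to first token (TTFT) and inter-token latency (ITL). The prompt, output length, and concurrency level are illustrative assumptions, not the settings behind the charts.

PYTHON
import asyncio
import time

from openai import AsyncOpenAI

# Sketch: measure average TTFT and ITL at a fixed concurrency level by
# streaming completions from the server above. Prompt, output length, and
# concurrency are illustrative, not the benchmark's actual settings.
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request():
    start = time.perf_counter()
    ttft, token_times = None, []
    stream = await client.chat.completions.create(
        model="Qwen/Qwen3.5-4B",
        messages=[{"role": "user", "content": "Summarize the history of GPUs."}],
        max_tokens=128,
        stream=True,
    )
    async for chunk in stream:
        if not chunk.choices or chunk.choices[0].delta.content is None:
            continue  # skip role-only or empty chunks
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # time to first generated token
        token_times.append(now)
    itl = ((token_times[-1] - token_times[0]) / (len(token_times) - 1)
           if len(token_times) > 1 else 0.0)
    return ttft or 0.0, itl

async def main(concurrency: int = 32):
    results = await asyncio.gather(*(one_request() for _ in range(concurrency)))
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    print(f"concurrency={concurrency}  avg TTFT={avg_ttft:.3f}s  avg ITL={avg_itl:.4f}s")

asyncio.run(main())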

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
