
Qwen3.5 0.8B

Qwen3.5 0.8B is a compact multimodal language model that combines a causal transformer with a vision encoder for efficient cross-modal reasoning. Its 0.8B-parameter architecture has 24 layers, a hidden size of 1024, and 8 attention heads (2 KV heads), and it interleaves Gated DeltaNet linear attention with periodic full-attention layers. The model supports a 262K-token context window and uses multi-token prediction (MTP). It is designed for efficient multimodal understanding, strong reasoning, and broad multilingual support across 200+ languages with low-latency inference.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 6 more
Release Date: 02 March 2026
License: Apache 2.0
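
The hybrid attention stack described above interleaves Gated DeltaNet linear-attention layers with occasional full-attention layers. Below is a purely illustrative sketch of such a layer schedule; the actual interleaving ratio is not specified on this card, and every fourth layer being full attention is an assumption for illustration only.

PYTHON
# Illustrative only: one way a 24-layer hybrid stack could interleave
# Gated DeltaNet linear-attention blocks with periodic full-attention blocks.
# The real Qwen3.5 interleaving ratio is not documented here; every fourth
# layer being full attention is an assumption.
NUM_LAYERS = 24
FULL_ATTENTION_PERIOD = 4  # assumed period, for illustration

layer_schedule = [
    "full_attention" if (i + 1) % FULL_ATTENTION_PERIOD == 0 else "gated_deltanet"
    for i in range(NUM_LAYERS)
]
print(layer_schedule)
# ['gated_deltanet', 'gated_deltanet', 'gated_deltanet', 'full_attention', ...]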

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.9 \
  python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3.5-0.8B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --max-running-requests 1024 \
  --tp 4 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
Note

Multimodal serving is limited to a maximum of tp=4. For higher parallelism (e.g., tp=8) or text-only inference, enable the --language-only flag.
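
Once the container is running, sglang exposes an OpenAI-compatible API on port 8000. The following is a minimal client sketch, assuming the server is reachable at http://localhost:8000 and that the served model name matches the --model-path used above; adjust these if you changed the deployment.

PYTHON
# Minimal sketch: query the sglang server launched above via its
# OpenAI-compatible endpoint. Host, port, and model name mirror the
# launch command above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-0.8B",
    messages=[{"role": "user", "content": "Summarize the benefits of linear attention in two sentences."}],
    max_tokens=256,
)
print(response.choices[0].message.content)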

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
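
A rough sketch of this measurement approach is shown below, assuming the sglang deployment above and using time to first token (TTFT) as the latency metric. The endpoint, prompt, output length, and concurrency levels are illustrative assumptions, not the exact harness used to produce the charts.

PYTHON
# Illustrative benchmarking sketch: fixed prompt and output length, increasing
# concurrency, average TTFT measured per concurrency level.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
PROMPT = "Explain the difference between latency and throughput."

async def one_request() -> float | None:
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model="Qwen/Qwen3.5-0.8B",
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
        stream=True,
    )
    ttft = None
    async for chunk in stream:
        if ttft is None and chunk.choices and chunk.choices[0].delta.content:
            ttft = time.perf_counter() - start  # time to first generated token
    return ttft

async def main():
    for concurrency in (1, 8, 32, 128):
        ttfts = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        valid = [t for t in ttfts if t is not None]
        avg = sum(valid) / max(len(valid), 1)
        print(f"concurrency={concurrency:4d}  avg TTFT={avg:.3f}s")

asyncio.run(main())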

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
