
Qwen3.5 397B A17B

Qwen3.5-397B-A17B is a multimodal Mixture-of-Experts (MoE) large language model optimized for long-context reasoning, vision-language understanding, and scalable agentic workflows. It contains 397B total parameters with 17B active parameters, activating 10 routed experts plus 1 shared expert from 512 experts per token. The model employs a 60-layer hybrid architecture combining Gated DeltaNet and full attention layers, with 32 attention heads and a 4,096 hidden size. Supporting a 262K token context window, extendable to ~1M, it integrates a 27-layer vision encoder for unified multimodal processing.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 6 more
Group Release Date: February 15, 2026
License: Apache 2.0
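
As a rough sketch of the expert routing described in the overview above, the toy PyTorch layer below routes each token to its 10 highest-scoring experts out of 512 and always adds 1 shared expert. The class name, toy hidden size, and naive per-token loop are illustrative assumptions, not the actual Qwen3.5 implementation.

PYTHON
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy hidden size so the sketch runs instantly; the real model uses a 4,096 hidden
# size, 512 routed experts, top-10 routing, and 1 shared expert per token.
HIDDEN_SIZE = 64
NUM_ROUTED_EXPERTS = 512
TOP_K = 10

class ToyMoELayer(nn.Module):
    """Illustrative top-k MoE layer: 10 routed experts plus 1 shared expert per token."""

    def __init__(self):
        super().__init__()
        self.router = nn.Linear(HIDDEN_SIZE, NUM_ROUTED_EXPERTS, bias=False)
        self.experts = nn.ModuleList(
            nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE) for _ in range(NUM_ROUTED_EXPERTS)
        )
        self.shared_expert = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)

    def forward(self, hidden):                          # hidden: [num_tokens, HIDDEN_SIZE]
        scores = self.router(hidden)                    # [num_tokens, NUM_ROUTED_EXPERTS]
        weights, indices = scores.topk(TOP_K, dim=-1)   # pick the 10 best experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen experts
        outputs = []
        for t in range(hidden.size(0)):                 # naive per-token loop, for clarity only
            routed = sum(
                w * self.experts[i](hidden[t])
                for w, i in zip(weights[t], indices[t].tolist())
            )
            outputs.append(self.shared_expert(hidden[t]) + routed)
        return torch.stack(outputs)

layer = ToyMoELayer()
tokens = torch.randn(4, HIDDEN_SIZE)                    # 4 dummy token embeddings
print(layer(tokens).shape)                              # torch.Size([4, 64])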

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.9 \
  python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3.5-397B-A17B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-running-requests 1024 \
  --max-prefill-tokens 65536 \
  --tp 8 \
  --ep 8 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
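
Once the container is running, SGLang exposes an OpenAI-compatible API on port 8000. The sketch below sends a single chat request with the standard OpenAI Python client; the prompt, sampling settings, and placeholder API key are illustrative assumptions, not required values.

PYTHON
from openai import OpenAI

# Point the standard OpenAI client at the local SGLang server started above.
# SGLang does not require an API key by default, but the client needs a value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-397B-A17B",
    messages=[
        {"role": "user", "content": "Summarize the benefits of a Mixture-of-Experts architecture."}
    ],
    max_tokens=256,          # placeholder generation budget
    temperature=0.6,         # placeholder sampling temperature
)

print(response.choices[0].message.content)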

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

[Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT]
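
As a rough illustration of this methodology, the sketch below sweeps a few concurrency levels against the server's OpenAI-compatible completions endpoint and reports average time to first token. The prompt, request counts, and concurrency levels are placeholder assumptions, not the harness used to produce the charts above.

PYTHON
import asyncio
import time

import httpx

URL = "http://localhost:8000/v1/completions"           # SGLang OpenAI-compatible endpoint
PAYLOAD = {
    "model": "Qwen/Qwen3.5-397B-A17B",
    "prompt": "Explain speculative decoding in two sentences.",  # placeholder prompt
    "max_tokens": 128,
    "stream": True,                                     # stream so TTFT can be observed
}

async def time_to_first_token(client: httpx.AsyncClient) -> float:
    """Send one streaming request and return seconds until the first streamed chunk."""
    start = time.perf_counter()
    async with client.stream("POST", URL, json=PAYLOAD, timeout=300) as resp:
        async for line in resp.aiter_lines():
            if line.strip():                            # first non-empty SSE line ~ first token
                return time.perf_counter() - start
    return float("nan")

async def sweep(concurrency: int, total_requests: int = 32) -> float:
    """Run total_requests requests with at most `concurrency` in flight; return mean TTFT."""
    sem = asyncio.Semaphore(concurrency)
    async with httpx.AsyncClient() as client:
        async def bounded() -> float:
            async with sem:
                return await time_to_first_token(client)
        ttfts = await asyncio.gather(*(bounded() for _ in range(total_requests)))
    return sum(ttfts) / len(ttfts)

async def main() -> None:
    for concurrency in (1, 8, 32, 128):                 # illustrative load levels
        avg_ttft = await sweep(concurrency)
        print(f"concurrency={concurrency:<4} avg TTFT={avg_ttft:.3f}s")

if __name__ == "__main__":
    asyncio.run(main())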

Deploy NVIDIA HGX B200 on Vultr Cloud GPU

How to Deploy Qwen3.5 397B A17B on NVIDIA GPUs (Vultr Docs)