
DeepSeek V3.2

DeepSeek-V3.2 is a large Mixture-of-Experts (MoE) language model that balances high computational efficiency with strong reasoning and agentic capabilities. The 685B-class architecture activates only ~37B parameters per token: a 61-layer transformer with 128 attention heads and a hidden size of 7,168, routing each token to 8 of 256 experts. It integrates DeepSeek Sparse Attention (DSA) to sharply reduce attention cost while preserving long-context performance, supporting a context window of up to ~160K tokens.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning (+5 more)
License: MIT
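
To make the expert routing concrete, here is a minimal, illustrative sketch of top-k gating using the shape parameters quoted above (256 routed experts, 8 active per token, hidden size 7,168). This is not DeepSeek's actual implementation: the random router weight stands in for the learned gate, and plain softmax normalization over the selected experts is an assumption.

PYTHON
import numpy as np

HIDDEN_SIZE = 7168   # hidden size quoted in the model description
NUM_EXPERTS = 256    # routed experts
TOP_K = 8            # experts activated per token

rng = np.random.default_rng(0)
# Hypothetical router weight; in the real model this is learned.
router_w = rng.standard_normal((HIDDEN_SIZE, NUM_EXPERTS)).astype(np.float32)

def route(token_hidden: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Pick the 8 highest-scoring experts for one token and return
    their indices plus normalized gate weights."""
    logits = token_hidden @ router_w                     # (NUM_EXPERTS,)
    top_idx = np.argpartition(logits, -TOP_K)[-TOP_K:]  # unordered top-8
    gate = np.exp(logits[top_idx] - logits[top_idx].max())
    gate /= gate.sum()                                   # softmax over the chosen 8
    return top_idx, gate

token = rng.standard_normal(HIDDEN_SIZE).astype(np.float32)
experts, gates = route(token)
print(experts, gates.round(3))
# Only ~37B of the 685B-class parameter count is touched per token,
# because just 8 of the 256 expert FFNs run for each token.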

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3.2 \
  --host 0.0.0.0 \
  --port 8000 \
  --max-running-requests 1024 \
  --attention-backend nsa \
  --nsa-prefill-backend flashmla_auto \
  --nsa-decode-backend flashmla_kv \
  --kv-cache-dtype fp8_e4m3 \
  --chat-template ./examples/chat_template/tool_chat_template_deepseekv32.jinja \
  --tp 8 \
  --tool-call-parser deepseekv32 \
  --reasoning-parser deepseek-v3 \
  --mem-fraction-static 0.90 \
  --trust-remote-code
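
Once the container reports ready, the server exposes an OpenAI-compatible API on port 8000. A minimal smoke test, assuming the server is reachable at localhost and the model name matches the --model-path above:

PYTHON
import requests

# Query the OpenAI-compatible endpoint served by the container above.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "deepseek-ai/DeepSeek-V3.2",
        "messages": [
            {"role": "user", "content": "Briefly explain mixture-of-experts routing."}
        ],
        "max_tokens": 256,
        "temperature": 0.6,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])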

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

[Charts: ITL vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Avg TTFT]
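
For readers who want to reproduce this kind of measurement, the sketch below shows one way to collect TTFT and ITL while sweeping concurrency against the server launched above. It treats each streamed SSE chunk as one token, which is approximate, and the prompt, token budget, and concurrency levels are illustrative choices, not the exact configuration behind the charts.

PYTHON
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/v1/chat/completions"  # server from the command above

def one_request(prompt: str) -> tuple[float, float]:
    """Stream one completion and return (TTFT, mean ITL) in seconds."""
    start = time.perf_counter()
    stamps = []
    with requests.post(
        URL,
        json={
            "model": "deepseek-ai/DeepSeek-V3.2",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,
            "stream": True,
        },
        stream=True,
        timeout=300,
    ) as resp:
        for line in resp.iter_lines():
            # Each SSE data chunk is counted as one token (approximate).
            if line.startswith(b"data: ") and line != b"data: [DONE]":
                stamps.append(time.perf_counter())
    if not stamps:
        return float("nan"), float("nan")
    ttft = stamps[0] - start
    itls = [b - a for a, b in zip(stamps, stamps[1:])]
    return ttft, sum(itls) / max(len(itls), 1)

# Sweep concurrency levels with a fixed prompt, as in the methodology above.
for concurrency in (1, 8, 32, 128):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, ["Summarize MoE routing."] * concurrency))
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    print(f"c={concurrency:4d}  TTFT={avg_ttft:.3f}s  ITL={avg_itl*1000:.1f}ms")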

Deploy NVIDIA HGX B200 on Vultr Cloud GPU