DeepSeek V3.2 Exp

DeepSeek V3.2 Exp is an experimental Mixture-of-Experts (MoE) large language model developed by DeepSeek AI as an intermediate research step toward its next-generation architecture. It builds on the V3.1-Terminus design while introducing DeepSeek Sparse Attention (DSA). The model has 685B total parameters with ~37B activated per token, built on a 61-layer transformer with 128 attention heads and a hidden size of 7,168, using 256 routed experts with 8 experts activated per token. It supports a ~160K token context window via YaRN-based rotary scaling.
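To illustrate what "256 routed experts with 8 activated per token" means, here is a minimal top-k routing sketch in NumPy. It is illustrative only: the real router in the model involves details (bias terms, grouping, load balancing) not shown here, and all names below are hypothetical.

```python
import numpy as np

def route_tokens(router_logits: np.ndarray, top_k: int = 8):
    """Pick the top-k experts per token and normalize their gate weights.

    router_logits: (num_tokens, num_experts) scores from the router.
    Returns (expert_ids, gate_weights), each of shape (num_tokens, top_k).
    """
    # Indices of the k highest-scoring experts for each token.
    expert_ids = np.argsort(router_logits, axis=-1)[:, -top_k:]
    top_logits = np.take_along_axis(router_logits, expert_ids, axis=-1)
    # Softmax over only the selected experts' logits.
    exp = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    gate_weights = exp / exp.sum(axis=-1, keepdims=True)
    return expert_ids, gate_weights

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 256))   # 4 tokens, 256 routed experts
ids, gates = route_tokens(logits)    # 8 experts chosen per token
```

Each token's hidden state would then be dispatched only to its 8 selected experts, and their outputs combined using `gates` as mixing weights, which is why only ~37B of the 685B parameters are active for any given token.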
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning
Release Date: 29 September 2025
License: MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3.2-Exp \
  --host 0.0.0.0 \
  --port 8000 \
  --max-running-requests 1024 \
  --attention-backend nsa \
  --nsa-prefill-backend flashmla_auto \
  --nsa-decode-backend flashmla_kv \
  --kv-cache-dtype fp8_e4m3 \
  --chat-template ./examples/chat_template/tool_chat_template_deepseekv32.jinja \
  --tp 8 \
  --tool-call-parser deepseekv31 \
  --reasoning-parser deepseek-v3 \
  --mem-fraction-static 0.90 \
  --trust-remote-code
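Once the server is up, SGLang exposes an OpenAI-compatible HTTP API. A minimal sketch of building a chat-completion request body (the sampling values and prompt are illustrative, not recommendations):

```python
import json

def build_chat_request(prompt: str,
                       model: str = "deepseek-ai/DeepSeek-V3.2-Exp",
                       max_tokens: int = 512,
                       temperature: float = 0.6) -> str:
    """Build the JSON body for a POST to /v1/chat/completions."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(body)

payload = build_chat_request("Explain sparse attention in one sentence.")
# Send with any HTTP client against the port mapped above, e.g.:
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d "$PAYLOAD"
```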

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

[Charts: ITL vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Avg TTFT]
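The latency metrics charted here can be computed per request from token arrival timestamps. A minimal sketch (the function and field names are illustrative, not from any benchmark harness):

```python
from statistics import mean

def latency_metrics(request_start: float, token_times: list) -> dict:
    """Compute per-request latency metrics from token arrival timestamps.

    TTFT: delay from request submission to the first generated token.
    ITL:  mean gap between consecutive generated tokens.
    """
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    return {"ttft_s": ttft, "itl_s": mean(gaps) if gaps else 0.0}

m = latency_metrics(0.0, [0.35, 0.40, 0.45, 0.50])
# ttft_s = 0.35, itl_s ≈ 0.05
```

Aggregating these per-request values across concurrency levels, together with total tokens per second, yields the curves shown above.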
