
Step 3.5 Flash

StepFun
Step 3.5 Flash is a sparse Mixture-of-Experts (MoE) language model built for fast reasoning, coding, and agentic workflows. It has 196B total parameters, with roughly 11B activated per token, in a 45-layer transformer with a hidden size of 4,096 and 64 attention heads. Each layer has 288 experts with top-8 routing plus a shared expert, and attention follows a hybrid pattern that interleaves sliding-window and full attention in a 3:1 ratio. The model supports a 256K context window and integrates Multi-Token Prediction for high-throughput generation.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: 2 February 2026
License: Apache 2.0
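
To make the expert-routing scheme from the description concrete, here is a minimal sketch of top-k routing with a shared expert in PyTorch. It is illustrative only, not the actual Step 3.5 Flash implementation: dimensions are scaled down so it runs anywhere (the real model uses a 4,096 hidden size, 288 experts, and top-8 routing), and the expert MLP width and class names are invented for the example.

PYTHON
import torch
import torch.nn as nn
import torch.nn.functional as F

# Scaled-down dims so the sketch runs anywhere; Step 3.5 Flash itself uses
# a hidden size of 4096, 288 routed experts, and top-8 routing per token.
HIDDEN, N_EXPERTS, TOP_K = 64, 16, 8

class ExpertMLP(nn.Module):
    """A small feed-forward expert. The inner width is invented for the demo."""
    def __init__(self, hidden=HIDDEN, inner=128):
        super().__init__()
        self.up = nn.Linear(hidden, inner)
        self.down = nn.Linear(inner, hidden)

    def forward(self, x):
        return self.down(F.silu(self.up(x)))

class SparseMoELayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.router = nn.Linear(HIDDEN, N_EXPERTS, bias=False)
        self.experts = nn.ModuleList(ExpertMLP() for _ in range(N_EXPERTS))
        self.shared = ExpertMLP()  # the shared expert processes every token

    def forward(self, x):  # x: [num_tokens, HIDDEN]
        gates = F.softmax(self.router(x), dim=-1)        # [tokens, N_EXPERTS]
        top_w, top_i = gates.topk(TOP_K, dim=-1)         # pick TOP_K experts per token
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)  # renormalize the selected gates
        out = torch.zeros_like(x)
        for slot in range(TOP_K):
            for e in top_i[:, slot].unique():            # group tokens by chosen expert
                mask = top_i[:, slot] == e
                out[mask] += top_w[mask, slot].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out + self.shared(x)                      # routed sum + shared path

tokens = torch.randn(10, HIDDEN)
print(SparseMoELayer()(tokens).shape)  # torch.Size([10, 64])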

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below; it assumes an 8-GPU system, matching the --tensor-parallel-size 8 setting. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.18.0 \
  stepfun-ai/Step-3.5-Flash \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --disable-cascade-attn \
  --reasoning-parser step3p5 \
  --tool-call-parser step3p5 \
  --enable-auto-tool-choice \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --trust-remote-code
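
Once the container is serving, the model is reachable through vLLM's OpenAI-compatible API on port 8000. Below is a minimal request sketch; it assumes the openai Python package and that the served model name matches the Hugging Face ID, since the command above passes no --served-model-name override. The prompt is illustrative.

PYTHON
# Minimal client sketch for the OpenAI-compatible endpoint exposed on port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="stepfun-ai/Step-3.5-Flash",
    messages=[{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)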

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
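
A measurement of this kind can be approximated with a small async client that streams completions at increasing concurrency and records time-to-first-token (TTFT) and inter-token latency (ITL). The sketch below is not the harness behind the charts; the endpoint, model name, prompt size, and concurrency levels are assumptions, and chunk arrival time is used as a rough proxy for token timing.

PYTHON
# Hedged sketch of the methodology above: stream completions at increasing
# concurrency, recording TTFT and ITL per request.
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
PROMPT = "word " * 512  # fixed input size across all runs (assumed)

async def one_request():
    start, first, prev, itls = time.perf_counter(), None, None, []
    stream = await client.chat.completions.create(
        model="stepfun-ai/Step-3.5-Flash",
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
        stream=True,
    )
    async for _chunk in stream:
        now = time.perf_counter()
        if first is None:
            first = now - start       # TTFT: request start -> first chunk
        elif prev is not None:
            itls.append(now - prev)   # ITL: gap between successive chunks
        prev = now
    return first, itls

async def sweep():
    for concurrency in (1, 8, 32, 128):  # assumed load levels
        results = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        ttfts = [r[0] for r in results if r[0] is not None]
        itls = [t for r in results for t in r[1]]
        avg_ttft = sum(ttfts) / len(ttfts) if ttfts else float("nan")
        avg_itl = 1000 * sum(itls) / len(itls) if itls else float("nan")
        print(f"concurrency={concurrency:4d}  avg TTFT={avg_ttft:.3f}s  avg ITL={avg_itl:.1f}ms")

asyncio.run(sweep())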

[Benchmark charts: ITL vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Avg TTFT]

Deploy NVIDIA HGX B200 on Vultr Cloud GPU