Nemotron 3 Nano 30B A3B Base BF16

NVIDIA
Nemotron 3 Nano 30B A3B Base BF16 is a hybrid Mixture-of-Experts (MoE) large language model trained from scratch for adaptable large-scale language modeling. The model contains 30B total parameters with 3.5B active parameters, activating 6 experts per token from 128 routed experts and 1 shared expert. Its 52-layer architecture combines MoE and Mamba-2 state-space layers with grouped-query attention, 32 attention heads, and a hidden size of 2,688. Supporting a context window of up to 1M tokens, this base checkpoint is designed for fine-tuning, domain adaptation, and building specialized AI systems.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: December 15, 2025
License: nvidia-open-model-license
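
Because this is a base checkpoint intended for fine-tuning and domain adaptation rather than a chat model, a quick way to sanity-check it is to load it with Hugging Face Transformers and run a plain completion. The sketch below is illustrative only: it assumes the checkpoint is published on Hugging Face under the same model ID used in the vLLM command below, and that your installed Transformers version supports the hybrid MoE/Mamba-2 architecture (hence trust_remote_code=True).

PYTHON
# Minimal smoke test of the base checkpoint; not an official NVIDIA example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # checkpoint is stored in BF16
    device_map="auto",            # spread the 30B parameters across available GPUs
    trust_remote_code=True,
)

# Base model: plain next-token completion, no chat template.
inputs = tokenizer(
    "The key idea behind mixture-of-experts models is", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))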

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs with vLLM using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e VLLM_FLOAT32_MATMUL_PRECISION=high \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --reasoning-parser deepseek_r1 \
  --kv-cache-dtype auto \
  --trust-remote-code
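
Once the container is running, vLLM exposes an OpenAI-compatible API on port 8000. The sketch below sends a plain completion request with the openai Python client; the base URL, placeholder API key, and prompt are illustrative assumptions, not part of the official instructions.

PYTHON
from openai import OpenAI

# vLLM serves an OpenAI-compatible API; the api_key value is a placeholder
# because the server above is started without authentication.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16",
    prompt="Mixture-of-experts models scale efficiently because",
    max_tokens=128,
    temperature=0.7,
)
print(response.choices[0].text)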

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
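
The charts below were produced with a dedicated benchmarking harness. As a rough illustration of the methodology only, the sketch below issues a batch of concurrent streaming requests against the server started above and reports average time to first token (TTFT), average inter-token latency (ITL), and aggregate output throughput; the endpoint, prompt, request sizes, and the assumption that each streamed chunk carries roughly one token are all simplifications.

PYTHON
# Rough illustration of concurrency-based serving benchmarks; not the harness
# used to generate the charts below.
import asyncio
import time

from openai import AsyncOpenAI

MODEL = "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-Base-BF16"
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


async def one_request(prompt: str, max_tokens: int = 128):
    start = time.perf_counter()
    ttft = None
    chunk_times = []
    stream = await client.completions.create(
        model=MODEL, prompt=prompt, max_tokens=max_tokens, stream=True
    )
    async for chunk in stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # time to first token
        chunk_times.append(now)
    # Mean inter-token latency across the streamed chunks.
    itl = (
        (chunk_times[-1] - chunk_times[0]) / (len(chunk_times) - 1)
        if len(chunk_times) > 1
        else 0.0
    )
    return ttft, itl, len(chunk_times)


async def run(concurrency: int = 8):
    start = time.perf_counter()
    results = await asyncio.gather(
        *[one_request("Large language models are") for _ in range(concurrency)]
    )
    elapsed = time.perf_counter() - start
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    tokens_per_sec = sum(r[2] for r in results) / elapsed
    print(
        f"concurrency={concurrency}  avg TTFT={avg_ttft:.3f}s  "
        f"avg ITL={avg_itl:.4f}s  output tokens/sec={tokens_per_sec:.1f}"
    )


asyncio.run(run())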

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy this model on NVIDIA HGX B200 via Vultr Cloud GPU.