| Type | Omni Model |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 7 more |
| Release Date | 28 April 2026 |
| License | NVIDIA Open Model Agreement |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs using the command below to get started with inference.
```console
docker run -it --rm --runtime=nvidia --gpus all \
  --ipc=host --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v $(pwd)/super_v3_reasoning_parser.py:/plugins/super_v3_reasoning_parser.py \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.20.0 \
  nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.96 \
  --max-num-seqs 1024 \
  --max-cudagraph-capture-size 512 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --reasoning-parser-plugin=/plugins/super_v3_reasoning_parser.py \
  --reasoning-parser super_v3 \
  --trust-remote-code
```
Note
Download the reasoning parser into your working directory before serving, so the volume mount above can pick it up:

```console
wget "https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8/resolve/main/super_v3_reasoning_parser.py"
```

The serve command above already passes `--reasoning-parser-plugin /plugins/super_v3_reasoning_parser.py` and `--reasoning-parser super_v3`.
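Once the container reports the server is ready, you can smoke-test the OpenAI-compatible endpoint it exposes on port 8000. A minimal sketch, assuming the standard /v1/chat/completions route and the model name from the command above (the prompt is illustrative):

```console
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
        "messages": [{"role": "user", "content": "What is 17 * 24?"}],
        "max_tokens": 512
      }'
```

With the reasoning parser active, vLLM returns the model's chain-of-thought in each choice's separate `reasoning_content` field rather than interleaved with `content`.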
Model Benchmarks
Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
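A sweep of this kind can be approximated with vLLM's bundled serving benchmark. A sketch, assuming the `vllm bench serve` subcommand and its random-dataset flags (flag names vary across vLLM versions, so check `vllm bench serve --help` first):

```console
# Fixed input/output lengths and prompt count; only concurrency varies.
for c in 1 8 32 128 512; do
  vllm bench serve \
    --base-url http://localhost:8000 \
    --model nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 1024 \
    --num-prompts 1024 \
    --max-concurrency "$c"
done
```

Each run reports TTFT, ITL, and token throughput, matching the metrics charted below.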
Charts: ITL vs. Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs. Avg TTFT
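To read these charts: at steady state, a request's end-to-end time is roughly TTFT + (output_tokens − 1) × ITL, so total tokens/sec grows with concurrency until batching saturates and ITL starts to climb. For example (hypothetical numbers), with TTFT = 200 ms, ITL = 20 ms, and 1,024 output tokens, one request takes ≈ 0.2 + 1,023 × 0.02 ≈ 20.7 s, about 50 tokens/sec per request.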
