
GPT OSS 20B

gpt-oss-20b is an open-weight mixture-of-experts (MoE) large language model designed for efficient reasoning, low-latency inference, and developer-focused AI applications. It has 21B total parameters, of which 3.6B are active per token, using 32 experts (4 routed per token) across a 24-layer transformer with 64 attention heads. The model supports a context window of up to 131K tokens via YaRN-based RoPE scaling and alternating sliding-window/full attention layers for scalable long-context processing.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Released: August 4, 2025
License: Apache 2.0
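As a back-of-the-envelope check on the parameter figures above, the total/active split can be decomposed into shared (attention, embeddings) and expert weights, under the simplifying assumption that active = shared + (4/32) × expert. This is a sketch of the arithmetic, not the model's exact parameter accounting:

```python
# Rough shared-vs-expert parameter split for gpt-oss-20b, in billions.
# Assumes (simplification): active = shared + expert * (top-k / num_experts).
total_b = 21.0   # total parameters, from the model card
active_b = 3.6   # active parameters per token, from the model card
experts, topk = 32, 4

frac = topk / experts                          # 0.125 of expert weights used per token
expert_b = (total_b - active_b) / (1 - frac)   # solve the two linear equations
shared_b = total_b - expert_b

print(f"expert params ≈ {expert_b:.1f}B, shared params ≈ {shared_b:.1f}B")
```

Under this simplification, roughly 19.9B parameters sit in the experts and about 1.1B are shared, which is consistent with only 3.6B being touched per token.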

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8=1 \
  -e VLLM_USE_DEEP_GEMM=1 \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  openai/gpt-oss-20b \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --dtype auto \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
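Once the container is up, vLLM serves an OpenAI-compatible API on port 8000. A minimal request sketch, assuming the server from the command above is reachable at localhost:8000 (the final urlopen call is left commented so the snippet runs without a live server):

```python
import json
from urllib.request import Request

url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "openai/gpt-oss-20b",
    "messages": [
        {"role": "user", "content": "Explain mixture-of-experts in one sentence."}
    ],
    "max_tokens": 128,
}
req = Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# With the server running, send the request and print the reply:
# from urllib.request import urlopen
# print(json.load(urlopen(req))["choices"][0]["message"]["content"])
print(req.full_url)
```

Any OpenAI-compatible client (e.g. the official `openai` Python SDK pointed at `base_url="http://localhost:8000/v1"`) works the same way.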

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
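The two latency metrics charted here, time to first token (TTFT) and inter-token latency (ITL), can be computed from per-token arrival timestamps. A minimal sketch, assuming timestamps in seconds measured from a common clock:

```python
def latency_metrics(request_start: float, token_times: list[float]) -> tuple[float, float]:
    """Return (TTFT, mean ITL) in seconds.

    TTFT is the delay from request start to the first output token;
    ITL is the gap between consecutive output tokens.
    """
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_itl

# Example: request at t=0, first token at 0.25 s, then one every 20 ms.
ttft, itl = latency_metrics(0.0, [0.25, 0.27, 0.29, 0.31])
print(round(ttft, 3), round(itl, 3))  # TTFT 0.25 s, mean ITL ≈ 0.02 s
```

Under load testing, these are aggregated per concurrency level to produce the curves below.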

Charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.

How to Deploy GPT OSS 20B on NVIDIA GPUs | Vultr Docs