
Gemma 4 31B IT

NVIDIA
Gemma 4 31B IT is a multimodal dense transformer model designed for high-performance reasoning, coding, and agentic workflows. It features 30.7B parameters with 60 layers, 5,376 hidden size, and 32 attention heads. The model uses hybrid attention with a 1024 token sliding window and global layers, supporting up to 256K context with proportional RoPE scaling. It processes text, image, and video inputs, and is optimized for strong multilingual, long-context, and multimodal understanding.
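Hybrid attention of this kind interleaves layers that attend over the full (causal) context with layers restricted to a sliding window of recent tokens. As a rough illustration only (a minimal NumPy sketch of the masking idea, not Gemma's actual implementation):

```python
import numpy as np

def attention_mask(seq_len: int, window: int, layer_is_global: bool) -> np.ndarray:
    """Boolean mask where entry (i, j) is True if query token i may attend to key token j."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i                      # causal: attend only to past and self
    if layer_is_global:
        return causal                    # global layer: full causal context
    return causal & (i - j < window)     # local layer: only the last `window` tokens

# Global layers see the whole prefix; local layers (window=1024 in this model)
# see only a recent slice, which keeps the KV cache cost of long contexts down.
local = attention_mask(8, window=2, layer_is_global=False)
```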
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +6 more
Release Date: 02 April 2026
License: Apache 2.0

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:cu13-gemma4 \
  python3 -m sglang.launch_server \
  --model-path google/gemma-4-31B-it \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --tool-call-parser gemma4 \
  --reasoning-parser gemma4 \
  --max-running-requests 1024 \
  --tp 8 \
  --mem-fraction-static 0.9 \
  --trust-remote-code
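Once the server is up, SGLang exposes an OpenAI-compatible endpoint at `/v1/chat/completions`. A minimal sketch of a request payload for the server launched above (the helper name is illustrative; host, port, and model path are taken from the command):

```python
import json

def build_chat_request(prompt: str,
                       model: str = "google/gemma-4-31B-it",
                       max_tokens: int = 256,
                       temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat completion payload (illustrative helper)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request("Summarize the benefits of sliding-window attention.")
print(json.dumps(payload, indent=2))

# POST the payload to the running server, e.g.:
# curl http://localhost:8000/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$(python3 -c 'import json; ...')"
```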
Note

The command above uses the CUDA 13 image (lmsysorg/sglang:cu13-gemma4); for CUDA 12.9, use the lmsysorg/sglang:gemma4 image (or later) instead. In either case, set --mem-fraction-static to 0.9 or lower to prevent out-of-memory (OOM) errors.

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
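The metrics below, TTFT (time to first token) and ITL (inter-token latency), can both be derived from per-token arrival timestamps. A minimal sketch of that computation (function name and interface are illustrative, not part of any benchmark harness):

```python
def latency_metrics(request_start: float, token_times: list[float]) -> tuple[float, float]:
    """Compute TTFT and mean ITL from token arrival timestamps (seconds).

    TTFT is the gap from request submission to the first generated token;
    ITL is the average gap between consecutive generated tokens.
    """
    ttft = token_times[0] - request_start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_itl

# Example: first token 0.5 s after submission, then one token every 0.1 s.
ttft, itl = latency_metrics(0.0, [0.5, 0.6, 0.7])
```

Under rising concurrency, TTFT typically grows first (requests queue for prefill), while ITL degrades once decode batches saturate the GPU.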

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
