
Gemma 4 26B A4B IT

NVIDIA
Gemma 4 26B A4B IT is a multimodal Mixture-of-Experts transformer model optimized for efficient reasoning, coding, and agentic workloads. It has 25.2B total parameters with 3.8B active, using 30 layers, 2,816 hidden size, and 16 attention heads. The architecture uses top-8 routing across 128 experts with a shared expert for efficiency. It supports a 256K context window with hybrid attention combining a 1024 token sliding window and global layers, and handles text, image, and video inputs.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +6 more
Release Date: 02 April, 2026
License: Apache 2.0
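As a rough illustration of the routing scheme described above (top-8 selection over 128 experts plus a shared expert applied to every token), here is a minimal NumPy sketch. The hidden size and expert counts match the card, but the "experts" are toy elementwise scalings standing in for real MLPs; this is not Gemma's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class ToyMoELayer:
    """Toy Mixture-of-Experts layer: top-k routing over n_experts,
    plus a shared expert applied to every token."""
    def __init__(self, hidden=2816, n_experts=128, top_k=8, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((hidden, n_experts)) * 0.02
        # Each "expert" here is just an elementwise scale -- a stand-in for an MLP.
        self.experts = rng.standard_normal((n_experts, hidden)) * 0.02
        self.shared = rng.standard_normal(hidden) * 0.02
        self.top_k = top_k

    def __call__(self, x):  # x: (tokens, hidden)
        logits = x @ self.router                       # (tokens, n_experts)
        idx = np.argsort(logits, -1)[:, -self.top_k:]  # top-8 expert ids per token
        gates = softmax(np.take_along_axis(logits, idx, -1))
        out = x * self.shared                          # shared expert sees every token
        for k in range(self.top_k):
            e = idx[:, k]                              # k-th selected expert per token
            out += gates[:, k:k+1] * (x * self.experts[e])
        return out

layer = ToyMoELayer()
y = layer(np.ones((4, 2816)))
print(y.shape)  # (4, 2816)
```

The point of the top-k structure is that only 8 of 128 expert MLPs run per token, which is how a model with 25.2B total parameters keeps its active parameter count near 3.8B.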

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:gemma4-cu130 \
  google/gemma-4-26B-A4B-it \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --enable-auto-tool-choice \
  --reasoning-parser gemma4 \
  --tool-call-parser gemma4 \
  --trust-remote-code
Note

For CUDA 12.9 compatibility, use the vllm/vllm-openai:gemma4 image (or later) instead of the cu130 tag shown in the command above.
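Once the container is running, vLLM serves an OpenAI-compatible API on port 8000. A minimal stdlib-only Python sketch of a chat completion request follows; the model name matches the command above, and the prompt is just a placeholder.

```python
import json
import urllib.request

def build_chat_request(prompt, model="google/gemma-4-26B-A4B-it", max_tokens=256):
    # Standard OpenAI-style chat completion payload.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt, base_url="http://localhost:8000/v1"):
    # POST the payload to the vLLM server and return the reply text.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the server from the command above to be running):
# print(chat("Explain Mixture-of-Experts routing in one paragraph."))
```

The official `openai` Python client works the same way if you point its `base_url` at the server.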

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
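For context on the metrics plotted below: TTFT (time to first token) is the delay from request start until the first token arrives, and ITL (inter-token latency) is the average gap between subsequent tokens. A small sketch of how both are computed from per-token arrival timestamps; the numbers are synthetic, not taken from these benchmarks.

```python
def ttft_and_itl(start, token_times):
    # TTFT: time from request start to first token.
    # ITL: mean inter-token latency over the remaining tokens.
    ttft = token_times[0] - start
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, itl

# Synthetic example: first token after 120 ms, then one token every 15 ms.
times = [0.120 + 0.015 * i for i in range(5)]
ttft, itl = ttft_and_itl(0.0, times)
print(round(ttft, 3), round(itl, 3))  # 0.12 0.015
```

Under rising concurrency, TTFT typically grows first (requests queue for prefill) while ITL degrades more gradually until the decode batch saturates.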

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.