| | |
| --- | --- |
| Type | Omni Model |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, plus 7 more |
| Release Date | April 2, 2026 |
| License | Apache 2.0 |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs using the command below; it pulls the SGLang container and launches an inference server listening on port 8000.
```console
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:cu13-gemma4 \
  python3 -m sglang.launch_server \
    --model-path google/gemma-4-E2B-it \
    --host 0.0.0.0 \
    --port 8000 \
    --max-prefill-tokens 65536 \
    --tool-call-parser gemma4 \
    --reasoning-parser gemma4 \
    --max-running-requests 1024 \
    --tp 4 \
    --mem-fraction-static 0.95 \
    --trust-remote-code
```
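Once the server is up, you can query it through SGLang's OpenAI-compatible chat endpoint. The request below is a minimal sketch: the prompt, `max_tokens`, and `temperature` values are illustrative, not recommended settings.

```console
# Send a chat completion request to the locally running server.
# The prompt and sampling parameters here are placeholder examples.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-4-E2B-it",
    "messages": [
      {"role": "user", "content": "Explain tensor parallelism in one paragraph."}
    ],
    "max_tokens": 256,
    "temperature": 0.7
  }'
```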
Note
The command above uses the CUDA 13 image (lmsysorg/sglang:cu13-gemma4). If your host supports only CUDA 12.9, use the lmsysorg/sglang:gemma4 image (or later) instead.
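To check which image tag fits your host before launching, inspect the nvidia-smi banner; the "CUDA Version" it reports is the highest CUDA version the installed driver supports.

```console
# The banner line reports the maximum CUDA version the driver supports,
# e.g. "CUDA Version: 13.0". Pick the matching SGLang image tag accordingly.
nvidia-smi
```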
Model Benchmarks
Each model was benchmarked with a fixed input length and total token volume while concurrency was increased, measuring serving performance under load.
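A run of this shape can be reproduced with SGLang's built-in serving benchmark. The sketch below is illustrative rather than the exact harness used for these charts: the input/output lengths and concurrency sweep are assumed values, and flag names may vary across SGLang versions.

```console
# Hypothetical reproduction sketch: sweep concurrency against the running server.
# Input/output lengths and concurrency levels are assumptions, not the settings
# behind the published charts.
for c in 1 8 32 128 512; do
  python3 -m sglang.bench_serving \
    --backend sglang \
    --host 127.0.0.1 --port 8000 \
    --model google/gemma-4-E2B-it \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 512 \
    --num-prompts $((c * 4)) \
    --max-concurrency "$c"
done
```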
[Benchmark charts: ITL vs. Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs. Avg TTFT]
Deploy NVIDIA HGX B200 on Vultr Cloud GPU
