| Type | Dense LLM |
| Capabilities | Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 4 more |
| License | Gemma |
Inference Instructions
Deploy and run this model on NVIDIA B200 GPUs with vLLM using the command below.
```console
docker run -it --rm --runtime=nvidia --gpus all --ipc=host --shm-size=64g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  google/gemma-3-270m-it \
  --tensor-parallel-size 4 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --block-size 32 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
```
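Once the container reports that the server is ready, you can send a request to the OpenAI-compatible endpoint. A minimal sanity check, assuming the server is reachable at localhost:8000 (the port mapped above) and the served model name defaults to the Hugging Face ID:

```console
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "google/gemma-3-270m-it",
        "messages": [{"role": "user", "content": "What is 17 * 24?"}],
        "max_tokens": 64
      }'
```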
Note
Set --tensor-parallel-size to 1, 2, or 4 so the model's attention heads divide evenly across GPUs, and keep --block-size 32 to work around a known FlashInfer issue with the model's 256 head dimension.
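You can confirm the divisibility constraint from the model's Hugging Face config. A quick check, assuming a recent transformers install and that the config exposes num_attention_heads at the top level (Gemma 3 270M is a text-only model):

```console
python - <<'EOF'
# Print the attention head count; --tensor-parallel-size must divide it evenly.
from transformers import AutoConfig
cfg = AutoConfig.from_pretrained("google/gemma-3-270m-it")
print("attention heads:", cfg.num_attention_heads)  # expect 4 -> tp in {1, 2, 4}
EOF
```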
Model Benchmarks
Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
Charts: ITL (inter-token latency) vs Concurrency · Time to First Token · Throughput Scaling · Total Tokens/sec vs Average TTFT
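To reproduce this style of sweep against the running server, vLLM ships a serving benchmark. A sketch using the benchmark_serving.py script from the vLLM repository; flag names vary across vLLM versions, so treat these as illustrative rather than exact:

```console
# Sweep concurrency while holding input length and total prompt volume fixed.
# Flags are from vLLM's benchmarks/benchmark_serving.py; check your version.
for c in 1 8 32 128 512; do
  python benchmark_serving.py \
    --backend vllm \
    --model google/gemma-3-270m-it \
    --host localhost --port 8000 \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 128 \
    --num-prompts 1024 \
    --max-concurrency "$c"
done
```

Each iteration reports TTFT, ITL, and tokens/sec, which is the data behind the charts above.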
