
Sarvam 30b

Sarvam-30b is a Mixture-of-Experts (MoE) language model designed for efficient deployment and multilingual use. It uses a 19-layer transformer with a hidden size of 4,096, 64 attention heads, and grouped-query attention with 4 KV heads. The model includes 128 experts with top-6 routing and a shared expert, along with a dense FFN size of 8,192 and an MoE intermediate size of 1,024. It supports up to 128K of context and uses a high RoPE theta for long-context stability. Built for Indian languages, it delivers strong reasoning, coding, and conversational performance in resource-constrained environments.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +5 more
Group Release Date: March 5, 2026
License: Apache 2.0
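
A rough sense of why this architecture serves efficiently: with top-6 routing plus a shared expert, only a small fraction of the expert weights quoted in the description is touched per token. The back-of-the-envelope sketch below assumes the 128 experts sit in each MoE layer and that each expert is a SwiGLU-style FFN with three bias-free projection matrices; neither detail is stated above, so treat the numbers as illustrative.

PYTHON
# Back-of-the-envelope count of routed-expert parameters active per token
# in one MoE layer, using the sizes quoted in the description.
# Assumptions (not stated in the card): 128 experts per MoE layer and a
# SwiGLU-style expert FFN with three bias-free projection matrices.
hidden_size = 4096
moe_intermediate = 1024
num_experts = 128
top_k = 6            # routed experts per token
shared_experts = 1   # always-active shared expert

params_per_expert = 3 * hidden_size * moe_intermediate       # gate, up, down projections
total_expert_params = num_experts * params_per_expert        # all experts in the layer
active_expert_params = (top_k + shared_experts) * params_per_expert

print(f"per expert:          {params_per_expert / 1e6:.1f}M")
print(f"all experts (layer): {total_expert_params / 1e9:.2f}B")
print(f"active per token:    {active_expert_params / 1e6:.1f}M "
      f"({active_expert_params / total_expert_params:.1%} of expert weights)")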

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e TORCH_FLOAT32_MATMUL_PRECISION=high \
  vllm/vllm-openai:v0.18.0 \
  sarvamai/sarvam-30b \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --trust-remote-code
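
Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. The minimal client sketch below assumes the server is reachable at localhost:8000 and serves the model under its Hugging Face name; the API key is only a placeholder required by the client.

PYTHON
# Minimal client sketch for the vLLM OpenAI-compatible server started above.
# Assumes http://localhost:8000 and the model name "sarvamai/sarvam-30b";
# vLLM ignores the API key, but the client library requires a value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="sarvamai/sarvam-30b",
    messages=[
        {"role": "user", "content": "Summarise the benefits of MoE models in two sentences."}
    ],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)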

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
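
The exact benchmarking harness is not shown here; the sketch below only illustrates the approach: send a batch of streaming requests at a fixed concurrency and record time to first token (TTFT) and inter-token latency (ITL) per request. The endpoint, model name, prompt, token budget, and concurrency levels are assumptions for illustration.

PYTHON
# Illustrative load-test sketch (not the harness behind the charts below):
# at each concurrency level, fire that many streaming requests at once and
# record TTFT and ITL per request.
import asyncio
import time

from openai import AsyncOpenAI

async def one_request(client: AsyncOpenAI, prompt: str, max_tokens: int):
    start = time.perf_counter()
    ttft = None
    token_times = []
    stream = await client.chat.completions.create(
        model="sarvamai/sarvam-30b",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
        stream=True,
    )
    async for chunk in stream:
        if not chunk.choices or not (chunk.choices[0].delta.content or ""):
            continue  # skip role-only or empty chunks
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start            # time to first token
        token_times.append(now)
    itl = [b - a for a, b in zip(token_times, token_times[1:])]  # inter-token latencies
    return ttft, itl

async def main():
    client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed endpoint
    prompt = "Explain grouped-query attention in three sentences."
    for concurrency in (1, 8, 32, 128):
        results = await asyncio.gather(
            *[one_request(client, prompt, 256) for _ in range(concurrency)]
        )
        ttfts = [r[0] for r in results if r[0] is not None]
        itls = [x for _, itl in results for x in itl]
        print(
            f"concurrency={concurrency:4d}  "
            f"avg TTFT={sum(ttfts) / len(ttfts):.3f}s  "
            f"avg ITL={1000 * sum(itls) / max(len(itls), 1):.1f}ms"
        )

asyncio.run(main())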

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.


How to Deploy Sarvam 30b on NVIDIA GPUs | Vultr Docs