
Llama 4 Maverick 17B 128E

NVIDIA
Llama-4-Maverick-17B-128E is a natively multimodal mixture-of-experts (MoE) large language model engineered for high-performance text and image understanding. It activates 17B of its 400B total parameters per token, routed across 128 experts, and is built on a 48-layer transformer architecture with 40 attention heads and a hidden size of 5,120. The model accepts multilingual text and image inputs and produces multilingual text and code outputs. With a context window of up to 1M tokens, it is optimized for scalable multimodal AI systems, large-document reasoning, and research-grade long-context workflows.
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
License: Llama 4
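To sanity-check the architecture figures above against the published checkpoint, you can pull the model's config.json directly from Hugging Face. This is a sketch, not part of the official instructions: the repository is gated, so the request needs a token with Llama 4 access, and in the multimodal Llama 4 configs the text-model fields (such as num_hidden_layers and hidden_size) may sit under a nested text_config block.

CONSOLE
curl -sL \
  -H "Authorization: Bearer $HF_TOKEN" \
  https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E/resolve/main/config.json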

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
  --model-path meta-llama/Llama-4-Maverick-17B-128E \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --max-running-requests 1024 \
  --tp 8 \
  --mem-fraction-static 0.95 \
  --trust-remote-code
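Once the server reports ready, you can exercise it through SGLang's OpenAI-compatible API. The prompt, sampling parameters, and model string below are illustrative; adjust them for your workload.

CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-4-Maverick-17B-128E",
    "messages": [
      {"role": "user", "content": "Summarize the benefits of mixture-of-experts models in two sentences."}
    ],
    "max_tokens": 256,
    "temperature": 0.6
  }'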

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
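One way to reproduce this kind of concurrency sweep is SGLang's bundled serving benchmark, run against the server launched above. The flags and values below are a sketch; exact options vary by SGLang version, so confirm them with python3 -m sglang.bench_serving --help before relying on the numbers.

CONSOLE
python3 -m sglang.bench_serving \
  --backend sglang \
  --host 127.0.0.1 \
  --port 8000 \
  --dataset-name random \
  --random-input-len 2048 \
  --random-output-len 256 \
  --num-prompts 1024 \
  --max-concurrency 64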

Benchmark charts (shown on the original page): ITL (inter-token latency) vs. concurrency; time to first token (TTFT); throughput scaling; total tokens/sec vs. average TTFT.

Vultr Cloud GPU

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.