
Llama 3.1 405B

NVIDIA
Llama 3.1 405B is a large-scale multilingual dense transformer language model designed for advanced reasoning, large-context processing, and enterprise AI workloads. The model has a 405B-parameter architecture with 126 transformer layers, 128 attention heads, and a hidden size of 16,384, and uses Grouped Query Attention (GQA) for efficient large-scale inference. It supports a context window of up to 128K tokens with RoPE scaling for extended-context understanding. Optimized for multilingual text and code generation, it is designed for research, large-scale AI systems, and high-performance LLM deployments.
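To see why GQA matters at this scale, a back-of-envelope sketch of KV-cache size per token helps. The 8 KV heads below is the published GQA group count for Llama 3.1 405B, and the 1-byte element size assumes the fp8 KV cache used in the deployment command on this page; treat the exact numbers as illustrative.

```python
# Rough KV-cache footprint per token for Llama 3.1 405B (illustrative).
layers = 126
hidden = 16384
n_heads = 128
head_dim = hidden // n_heads   # 128
kv_heads = 8                   # GQA: 128 query heads share 8 KV heads (assumed count)
dtype_bytes = 1                # fp8 KV cache (e.g. fp8_e4m3)

def kv_bytes_per_token(num_kv_heads: int) -> int:
    # Two tensors (K and V) per layer, each num_kv_heads * head_dim elements wide.
    return 2 * layers * num_kv_heads * head_dim * dtype_bytes

gqa = kv_bytes_per_token(kv_heads)   # 258,048 B/token (~0.25 MiB)
mha = kv_bytes_per_token(n_heads)    # 4,128,768 B/token (~3.94 MiB)
print(f"GQA: {gqa} B/token, full MHA: {mha} B/token, {mha // gqa}x smaller")
```

With 8 KV heads instead of 128, the cache is 16x smaller per token, which is what makes 128K-token contexts and high request concurrency feasible on a single 8-GPU node.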
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning (+5 more)
Release Date: 23 July, 2024
License: Llama 3.1

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
    --model-path meta-llama/Llama-3.1-405B \
    --host 0.0.0.0 \
    --port 8000 \
    --max-prefill-tokens 65536 \
    --max-running-requests 1024 \
    --quantization fp8 \
    --tp 8 \
    --kv-cache-dtype fp8_e4m3 \
    --mem-fraction-static 0.95 \
    --trust-remote-code
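Once the server is up, it exposes an OpenAI-compatible HTTP API on port 8000. A minimal client sketch using only the Python standard library is shown below; the endpoint path and payload shape follow the OpenAI completions format that SGLang serves, and the host/port assume the container is running locally as launched above.

```python
import json
import urllib.request

def build_completion_request(prompt: str, max_tokens: int = 128) -> urllib.request.Request:
    # Build an OpenAI-style completions request against the local SGLang server.
    payload = {
        "model": "meta-llama/Llama-3.1-405B",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }
    return urllib.request.Request(
        "http://localhost:8000/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def complete(prompt: str) -> str:
    # Send the request and return the generated text (requires a running server).
    req = build_completion_request(prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Because meta-llama/Llama-3.1-405B is a base (non-instruct) model, the plain `/v1/completions` endpoint is used here rather than the chat endpoint.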

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
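The charts below report inter-token latency (ITL) and time to first token (TTFT). As a hedged sketch of how such metrics are typically derived from per-token timestamps in a streaming benchmark (names are illustrative, not the exact harness used for these charts):

```python
def ttft_and_mean_itl(request_start: float, token_times: list[float]) -> tuple[float, float]:
    # TTFT: delay from request submission to the first generated token.
    ttft = token_times[0] - request_start
    # ITL: average gap between consecutive tokens after the first.
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    mean_itl = sum(gaps) / len(gaps) if gaps else 0.0
    return ttft, mean_itl

# Example: request sent at t=0.0 s, tokens arrive at 0.5, 0.6, 0.7, 0.9 s
ttft, itl = ttft_and_mean_itl(0.0, [0.5, 0.6, 0.7, 0.9])
print(ttft, itl)  # TTFT 0.5 s, mean ITL ~0.13 s
```

Aggregating these per-request numbers across rising concurrency levels yields the latency-vs-load curves shown below.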

Charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU.