
Llama 3.1 405B Instruct

NVIDIA
Llama-3.1-405B-Instruct is an instruction-tuned dense transformer large language model designed for advanced reasoning, multilingual dialogue, and large-scale AI applications. Built on a 405B-parameter architecture, it features 126 transformer layers, 128 attention heads, and a hidden size of 16,384, and uses Grouped Query Attention (GQA) for efficient large-scale inference. The model supports a context window of up to 128K tokens with RoPE scaling for long-context understanding. It was fine-tuned with supervised fine-tuning and reinforcement learning from human feedback (RLHF) to align with human preferences for helpfulness and safety.
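The architecture figures above allow a back-of-envelope estimate of why GQA matters for serving this model. This is an illustrative sketch, not official sizing guidance: the per-attention-head dimension follows from the stated numbers (16,384 / 128 = 128), while the 8 KV heads and 1-byte fp8 cache entries are assumptions (the KV-head count is not stated on this page).

```python
# Rough KV-cache sizing for the architecture described above.
# ASSUMPTIONS (not stated on this page): 8 KV heads per layer (GQA)
# and fp8 (1-byte) KV-cache entries.

LAYERS = 126
Q_HEADS = 128
HIDDEN = 16384
HEAD_DIM = HIDDEN // Q_HEADS   # 128, derived from the stated config
KV_HEADS = 8                   # assumed GQA KV-head count
BYTES_PER_ELEM = 1             # fp8 KV cache

# Per token: one K and one V vector for every layer and KV head.
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_ELEM
print(kv_bytes_per_token)                     # 258048 bytes (~252 KiB)

# A full 128K-token sequence:
context = 131072
print(kv_bytes_per_token * context / 2**30)   # 31.5 GiB

# Without GQA (one KV head per query head), the cache would be 16x larger.
print(Q_HEADS // KV_HEADS)                    # 16
```

Under these assumptions a single full-context sequence already consumes tens of GiB of cache, which is why GQA plus an fp8 KV cache is central to fitting high-concurrency serving on an 8-GPU node.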
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +5 more
Group Release Date: July 22, 2024
License: Llama 3.1

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  meta-llama/Llama-3.1-405B-Instruct \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --quantization fp8 \
  --kv-cache-dtype fp8 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
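Once the container is up, vLLM serves an OpenAI-compatible API on port 8000. The sketch below, using only the Python standard library, shows one way to query the `/v1/chat/completions` route; the prompt, sampling parameters, and `localhost` host are illustrative assumptions, and the server from the command above must be running for the request to succeed.

```python
import json
import urllib.error
import urllib.request

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Payload for vLLM's OpenAI-compatible /v1/chat/completions route."""
    return {
        "model": "meta-llama/Llama-3.1-405B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,  # illustrative sampling choice
    }

def chat(prompt: str, host: str = "http://localhost:8000") -> str:
    """POST a chat request to the vLLM server and return the reply text."""
    req = urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("Summarize grouped query attention in one sentence."))
    except urllib.error.URLError:
        # Server not running yet; start the container above first.
        print("Server not reachable on localhost:8000.")
```

The same endpoint also works with the official OpenAI client libraries by pointing their base URL at `http://localhost:8000/v1`.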

Model Benchmarks

Each model was tested with a fixed input size and total token volume while concurrency was increased, measuring serving performance under load. The key metrics are time to first token (TTFT), inter-token latency (ITL), and aggregate throughput in tokens per second.
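The metrics charted below are typically derived from per-token arrival timestamps. The sketch below is an illustrative definition of each metric, not the benchmark harness actually used here; the example timings are made up.

```python
def ttft(token_times: list[float], start: float) -> float:
    """Time to first token: delay from request start to the first output token."""
    return token_times[0] - start

def itl(token_times: list[float]) -> float:
    """Inter-token latency: mean gap between consecutive output tokens."""
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    return sum(gaps) / len(gaps)

def throughput(total_tokens: int, wall_time_s: float) -> float:
    """Aggregate tokens/sec across all concurrent requests."""
    return total_tokens / wall_time_s

# Hypothetical request: starts at t=0, first token after 300 ms,
# then one token every 25 ms.
times = [0.300 + 0.025 * i for i in range(5)]
print(ttft(times, 0.0))          # 0.3
print(round(itl(times), 6))      # 0.025
print(throughput(1000, 4.0))     # 250.0
```

Under load, rising concurrency tends to increase TTFT and ITL while raising aggregate throughput, which is the trade-off the charts below visualize.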

Benchmark charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.

Vultr Cloud GPU

NVIDIA HGX B200

Deploy NVIDIA B200 on Vultr Cloud GPU