
Llama 3.1 70B Instruct

NVIDIA
Llama 3.1 70B Instruct is an instruction-tuned dense transformer large language model optimized for multilingual dialogue, reasoning, and assistant-style AI applications. Built on a 70B-parameter architecture, it features 80 transformer layers, 64 attention heads, and a hidden size of 8,192, and it uses Grouped Query Attention (GQA) for efficient large-scale inference. The model supports a context window of up to 128K tokens with RoPE scaling for long-context understanding. It was fine-tuned with supervised fine-tuning and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning (+5 more)
Release Date: 23 July 2024
License: Llama 3.1
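The memory benefit of GQA mentioned above is easy to quantify: query heads share a much smaller set of key/value heads, shrinking the per-sequence KV cache. A rough sketch at this model's published dimensions (the 8 KV heads are Meta's published config, not stated on this page; FP16 assumed):

```python
# Sketch: per-sequence KV-cache size, full multi-head attention vs. GQA,
# at Llama 3.1 70B scale (80 layers, 64 query heads, hidden size 8192).
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Two cached tensors (K and V) per layer, per sequence."""
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

layers, q_heads, kv_heads = 80, 64, 8   # 8 KV heads: assumed from Meta's config
head_dim = 8192 // q_heads              # 128

mha = kv_cache_bytes(layers, q_heads, head_dim, seq_len=128 * 1024)
gqa = kv_cache_bytes(layers, kv_heads, head_dim, seq_len=128 * 1024)
print(f"MHA: {mha / 2**30:.1f} GiB, GQA: {gqa / 2**30:.1f} GiB ({mha // gqa}x smaller)")
```

At the full 128K context, the cache drops from roughly 320 GiB to 40 GiB per sequence, which is what makes high-concurrency serving of long contexts practical.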

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
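Once the container is serving, it exposes vLLM's OpenAI-compatible API on port 8000. A minimal sketch of a chat request using only the Python standard library; the prompt and sampling parameters are illustrative, not prescribed by this page:

```python
import json
import urllib.request

def build_chat_request(prompt, max_tokens=256, temperature=0.7):
    """Payload for the OpenAI-compatible /v1/chat/completions route."""
    return {
        "model": "meta-llama/Llama-3.1-70B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_chat_request("Summarize Grouped Query Attention in one sentence.")
print(json.dumps(payload, indent=2))

# Uncomment to send against a running server:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Any OpenAI-compatible client (for example the official `openai` Python package pointed at `http://localhost:8000/v1`) works the same way.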

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
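The two latency metrics charted here, Time to First Token (TTFT) and Inter-Token Latency (ITL), can be computed from per-token arrival timestamps of a streamed response. A minimal sketch with illustrative (not measured) timestamps, in seconds since the request was sent:

```python
def ttft(token_times):
    """Time to First Token: arrival time of the first output token."""
    return token_times[0]

def mean_itl(token_times):
    """Inter-Token Latency: average gap between consecutive tokens."""
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    return sum(gaps) / len(gaps)

# Arrival times for one streamed response (illustrative values).
times = [0.42, 0.47, 0.52, 0.58, 0.63]
print(f"TTFT: {ttft(times):.2f} s, mean ITL: {mean_itl(times) * 1000:.1f} ms")
```

Under load, TTFT reflects queueing and prefill cost, while ITL reflects decode throughput per sequence; both typically rise as concurrency grows.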

Benchmark charts: ITL vs Concurrency; Time to First Token; Throughput Scaling; Total Tokens/sec vs Avg TTFT.


How to Deploy Llama 3.1 70B Instruct on NVIDIA GPUs | Vultr Docs