
Llama 3.1 8B Instruct

Llama 3.1 8B Instruct is an instruction-tuned dense transformer large language model optimized for multilingual conversational AI and assistant-style applications. Built on an 8B-parameter architecture, it has 32 transformer layers, 32 attention heads, and a hidden size of 4,096, and uses Grouped Query Attention (GQA) for efficient inference. The model supports a context window of up to 128K tokens with RoPE scaling for long-context processing. It was fine-tuned with supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: 23 July, 2024
License: Llama 3.1 Community License
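
The architecture figures above can be checked directly against the model's config.json on Hugging Face. A minimal sketch, assuming an HF_TOKEN with access to the gated meta-llama repository; the fields to look for are num_hidden_layers (32), num_attention_heads (32), num_key_value_heads (8, the GQA key/value group count), hidden_size (4096), and max_position_embeddings (131072, i.e. 128K).

CONSOLE
# Fetch and pretty-print the model config (gated repo: token required)
curl -s -H "Authorization: Bearer $HF_TOKEN" \
  https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/resolve/main/config.json \
  | python3 -m json.tool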

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  meta-llama/Llama-3.1-8B-Instruct \
  --tensor-parallel-size 8 \
  --max-model-len 131072 \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
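
Once the container reports the server is ready, it exposes an OpenAI-compatible API on port 8000. A quick smoke test with curl (the prompt and max_tokens below are illustrative):

CONSOLE
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Explain grouped query attention in one sentence."}],
        "max_tokens": 128
      }'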

Model Benchmarks

Each model was tested with a fixed input size and total token volume while request concurrency was increased, measuring time to first token (TTFT), inter-token latency (ITL), and throughput under load.

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
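
A comparable sweep can be run with vLLM's bundled serving benchmark. This is a hedged sketch: it assumes a vLLM build that provides the `vllm bench serve` subcommand with the flag names shown (verify against your installed version), and the input/output lengths and concurrency ladder are illustrative, not the values used for the charts above.

CONSOLE
# Fixed input/output lengths and total prompt count; only concurrency varies.
# Each run reports TTFT, ITL (inter-token latency), and token throughput.
for c in 1 8 32 128 512; do
  vllm bench serve \
    --base-url http://localhost:8000 \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 128 \
    --num-prompts 1024 \
    --max-concurrency "$c"
done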

Vultr Cloud GPU

Deploy this model on NVIDIA HGX B200 instances with Vultr Cloud GPU.