
Kimi K2 Instruct 0905

Kimi K2 Instruct 0905 is an instruction-tuned Mixture-of-Experts (MoE) large language model developed by Moonshot AI and designed for advanced coding, agentic workflows, and long-horizon reasoning tasks. The model uses a 1T-parameter MoE architecture with 32B activated parameters per token, 61 transformer layers, 64 attention heads, and 384 experts (8 experts selected per token). It supports a 256K-token context window and uses Multi-head Latent Attention (MLA) for efficient long-context processing.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
Release Date: 05 September 2025
License: Modified MIT
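
To make the architecture description above concrete ("384 experts, 8 selected per token"), the sketch below shows generic top-k expert routing as used in MoE layers. It is an illustrative toy with made-up dimensions and hypothetical names; it does not reproduce Kimi K2's actual router, hidden sizes, or MLA attention.

PYTHON
import numpy as np

def moe_route(token_hidden, router_weights, experts, top_k=8):
    """Route one token's hidden state to its top-k experts (illustrative only).

    token_hidden:   (d_model,) hidden state for a single token
    router_weights: (d_model, n_experts) learned routing matrix
    experts:        list of n_experts callables, each (d_model,) -> (d_model,)
    """
    logits = token_hidden @ router_weights            # (n_experts,) routing scores
    top_idx = np.argsort(logits)[-top_k:]             # indices of the k best-scoring experts
    gates = np.exp(logits[top_idx] - logits[top_idx].max())
    gates /= gates.sum()                              # softmax over the selected experts only
    # Only the selected experts run, which is why a small fraction of the
    # total parameters is active for any given token.
    return sum(g * experts[i](token_hidden) for g, i in zip(gates, top_idx))

# Toy usage: 384 random "experts", 8 chosen per token.
d_model, n_experts = 16, 384
rng = np.random.default_rng(0)
experts = [
    (lambda W: (lambda x: x @ W))(rng.standard_normal((d_model, d_model)) * 0.01)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.01
out = moe_route(rng.standard_normal(d_model), router, experts, top_k=8)
print(out.shape)  # (16,)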

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the SGLang command below; the --tp 8 flag shards the model across eight GPUs with tensor parallelism. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.8-cu130 \
  python3 -m sglang.launch_server \
    --model-path moonshotai/Kimi-K2-Instruct-0905 \
    --host 0.0.0.0 \
    --port 8000 \
    --max-running-requests 1024 \
    --tp 8 \
    --tool-call-parser kimi_k2 \
    --mem-fraction-static 0.90 \
    --trust-remote-code
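
Once the server is running, it exposes an OpenAI-compatible API on port 8000. The following is a minimal client sketch, assuming the openai Python package is installed and the server is reachable on localhost; the model name should match the served --model-path.

PYTHON
from openai import OpenAI

# Point the OpenAI client at the local SGLang server; no real API key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct-0905",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    temperature=0.6,
    max_tokens=512,
)
print(response.choices[0].message.content)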

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
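
As a rough illustration of this methodology, the sketch below streams concurrent requests against the server started above and reports average time to first token (TTFT) and inter-token latency (ITL) per concurrency level. It is a hypothetical harness under simplifying assumptions (each streamed chunk counted as one token), not the tool used to produce the charts.

PYTHON
import asyncio
import time
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request(prompt: str, max_tokens: int = 128):
    """Stream one completion and return (TTFT, average ITL) in seconds."""
    start = time.perf_counter()
    first, last, n_chunks = None, None, 0
    stream = await client.chat.completions.create(
        model="moonshotai/Kimi-K2-Instruct-0905",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
        stream=True,
    )
    async for _chunk in stream:
        now = time.perf_counter()
        if first is None:
            first = now                       # first streamed chunk -> TTFT
        last, n_chunks = now, n_chunks + 1
    if first is None:                         # no chunks received
        return float("nan"), float("nan")
    ttft = first - start
    itl = (last - first) / max(n_chunks - 1, 1)   # average gap between chunks
    return ttft, itl

async def run(concurrency: int):
    results = await asyncio.gather(
        *[one_request("Summarize the benefits of MoE models.") for _ in range(concurrency)]
    )
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    print(f"concurrency={concurrency:4d}  avg TTFT={avg_ttft:.3f}s  avg ITL={avg_itl*1000:.1f}ms")

if __name__ == "__main__":
    for c in (1, 8, 32, 128):                 # increase load to observe scaling
        asyncio.run(run(c))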

Vultr Cloud GPU

Deploy NVIDIA HGX B200 on Vultr Cloud GPU. For a step-by-step guide, see "How to Deploy Kimi K2 Instruct 0905 on NVIDIA GPUs" in the Vultr Docs.