
Kimi K2.5

Kimi-K2.5 is a native multimodal Mixture-of-Experts (MoE) large language model designed for advanced coding, vision reasoning, and autonomous agentic workflows. The model features a 1T-parameter architecture with 32B activated parameters, 61 layers, 64 attention heads, and 384 experts (8 activated per token). It supports a context window of up to 256K tokens and integrates a 400M-parameter MoonViT vision encoder for cross-modal understanding. The model employs native INT4 quantization with Quantization-Aware Training (QAT) to reduce inference latency and memory usage while maintaining strong performance.
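To make the routing concrete, here is a minimal sketch of top-k expert gating in plain NumPy with toy dimensions. The 384-expert and 8-per-token figures come from the description above; the hidden size, gating weights, and token count are illustrative stand-ins, not the model's real internals.

PYTHON
import numpy as np

# Toy MoE router: a gating layer scores every expert for each token,
# and only the top-k highest-scoring experts run for that token.
# Kimi-K2.5 uses 384 experts with 8 active per token; HIDDEN is a
# toy dimension, not the model's real hidden size.
NUM_EXPERTS, TOP_K, HIDDEN = 384, 8, 16

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, HIDDEN))            # 4 toy token embeddings
gate_w = rng.standard_normal((HIDDEN, NUM_EXPERTS))  # hypothetical gating weights

logits = tokens @ gate_w                              # (4, 384) expert scores
top_k_idx = np.argsort(logits, axis=-1)[:, -TOP_K:]   # 8 experts per token

# Softmax over only the selected experts yields the mixing weights.
top_k_logits = np.take_along_axis(logits, top_k_idx, axis=-1)
weights = np.exp(top_k_logits - top_k_logits.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

print(top_k_idx[0], weights[0])  # experts chosen for token 0 and their weights

Because only the selected experts' weights enter each token's forward pass, the activated parameter count (32B) stays far below the total (1T).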
Type: Vision-Language Model
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: January 26, 2026
License: Modified MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below; copy it to start serving inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  -e LD_LIBRARY_PATH='/usr/local/nvidia/lib64:/usr/local/nvidia/lib:/usr/lib/x86_64-linux-gnu' \
  vllm/vllm-openai:v0.15.0-cu130 \
  moonshotai/Kimi-K2.5 \
  --tensor-parallel-size 8 \
  --mm-encoder-tp-mode data \
  --max-model-len auto \
  --gpu-memory-utilization 0.90 \
  --tool-call-parser kimi_k2 \
  --reasoning-parser kimi_k2 \
  --enable-auto-tool-choice \
  --max-num-seqs 1024 \
  --disable-log-requests \
  --trust-remote-code
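
Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. Below is a minimal sketch of a chat request against it, including an image input to exercise the vision encoder; the image URL and prompt are placeholders, and the local server is assumed to be running as configured above.

PYTHON
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server;
# the API key is unused for a local deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sample.jpg"}},  # placeholder URL
            ],
        }
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)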

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
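As a rough sketch of how such a sweep can be driven against the endpoint above: stream each response so the first chunk's arrival approximates time to first token (TTFT) and the gaps between later chunks approximate inter-token latency (ITL). The concurrency levels, prompt, and token counts below are illustrative, not the settings behind the published charts.

PYTHON
import asyncio
import time

from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

async def one_request():
    """Time one streamed completion: first chunk ~ TTFT, later gaps ~ ITL."""
    start = time.perf_counter()
    stream = await client.chat.completions.create(
        model="moonshotai/Kimi-K2.5",
        messages=[{"role": "user", "content": "Count from 1 to 50."}],
        max_tokens=128,
        stream=True,
    )
    ttft, last, itls = None, start, []
    async for _chunk in stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start       # time to first token
        else:
            itls.append(now - last)  # inter-token latency
        last = now
    return ttft, itls

async def sweep():
    for concurrency in (1, 8, 32):   # illustrative load levels
        results = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        avg_ttft = sum(r[0] for r in results) / len(results)
        all_itls = [itl for r in results for itl in r[1]]
        avg_itl = sum(all_itls) / max(len(all_itls), 1)
        print(f"c={concurrency:3d}  avg TTFT={avg_ttft:.3f}s  "
              f"avg ITL={avg_itl * 1000:.1f}ms")

asyncio.run(sweep())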

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA HGX B200 on Vultr Cloud GPU