
LongCat Flash Chat

LongCat Flash Chat is a 562B-parameter Mixture-of-Experts (MoE) language model optimized for agentic tasks and high-throughput inference. It dynamically activates 18.6B–31.3B parameters per token using zero-computation experts and a Shortcut-connected MoE (ScMoE) design. Built on a 28-layer transformer with 64 attention heads and a hidden size of 6,144, it supports context lengths of up to 128K tokens. The model employs a multi-stage training pipeline with reasoning-focused pretraining, agentic post-training, and multi-agent task synthesis, enabling advanced reasoning, coding, and iterative interaction capabilities.
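The zero-computation expert idea can be illustrated with a minimal sketch: alongside the routed FFN experts, the router can also select parameter-free identity "experts", so tokens sent there skip the FFN compute entirely and the number of activated parameters varies per token. The PyTorch code below is an illustrative sketch, not the LongCat implementation; the class name, expert counts, and top-k value are assumptions.

PYTHON
import torch
import torch.nn as nn


class ZeroComputationMoE(nn.Module):
    """Toy MoE layer mixing FFN experts with parameter-free identity experts."""

    def __init__(self, hidden, n_ffn_experts=4, n_zero_experts=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_ffn = n_ffn_experts
        self.n_zero = n_zero_experts
        # Real experts are small FFNs; zero-computation experts carry no parameters.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden, 4 * hidden),
                nn.GELU(),
                nn.Linear(4 * hidden, hidden),
            )
            for _ in range(n_ffn_experts)
        ])
        self.router = nn.Linear(hidden, n_ffn_experts + n_zero_experts)

    def forward(self, x):  # x: (num_tokens, hidden)
        gates = self.router(x).softmax(dim=-1)
        weights, idx = gates.topk(self.top_k, dim=-1)  # (num_tokens, top_k)
        out = torch.zeros_like(x)
        # Sketch only: every expert sees all tokens and is masked afterwards,
        # which is simple but not how a production MoE dispatches tokens.
        for e in range(self.n_ffn + self.n_zero):
            mask = (idx == e)  # which top-k slots routed to expert e
            if not mask.any():
                continue
            w = (weights * mask).sum(dim=-1, keepdim=True)  # per-token gate weight
            if e < self.n_ffn:
                out = out + w * self.experts[e](x)  # compute is spent here
            else:
                out = out + w * x  # zero-computation expert: identity pass-through
        return out

With top_k = 2, a token can land on two FFN experts, one FFN expert plus one identity expert, or two identity experts, which is what makes the activated parameter count per token variable.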
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Release Date: 29 August 2025
License: MIT

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.9 \
  python3 -m sglang.launch_server \
  --model-path meituan-longcat/LongCat-Flash-Chat \
  --host 0.0.0.0 \
  --port 8000 \
  --max-prefill-tokens 65536 \
  --max-running-requests 1024 \
  --attention-backend flashinfer \
  --tp 8 \
  --ep 8 \
  --mem-fraction-static 0.90 \
  --trust-remote-code
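
Once the container is up, sglang.launch_server exposes an OpenAI-compatible API on the mapped port, so a standard OpenAI client can be pointed at it. The snippet below is a minimal example; the prompt content is illustrative, and the api_key value is a placeholder since the launch command above does not configure one.

PYTHON
from openai import OpenAI

# Point the client at the local SGLang server started by the docker command above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meituan-longcat/LongCat-Flash-Chat",  # must match --model-path
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the ScMoE design in two sentences."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)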

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.
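
A concurrency sweep of this kind can be approximated with a small asynchronous client that streams completions and records time-to-first-token (TTFT) and inter-token latency (ITL) per request. The sketch below is hypothetical and not the harness used for these charts; the endpoint, payload, and concurrency levels are assumptions based on the launch command above.

PYTHON
import asyncio
import statistics
import time

import httpx

URL = "http://localhost:8000/v1/chat/completions"
PAYLOAD = {
    "model": "meituan-longcat/LongCat-Flash-Chat",
    "messages": [{"role": "user", "content": "Write a short poem about GPUs."}],
    "max_tokens": 128,
    "stream": True,
}


async def one_request(client):
    """Stream one completion; return (TTFT seconds, mean ITL seconds)."""
    start = time.perf_counter()
    ttft, token_times = None, []
    async with client.stream("POST", URL, json=PAYLOAD, timeout=300) as resp:
        async for line in resp.aiter_lines():
            if not line.startswith("data:") or line.endswith("[DONE]"):
                continue  # skip non-SSE lines and the terminator
            now = time.perf_counter()
            if ttft is None:
                ttft = now - start
            token_times.append(now)
    itl = (
        statistics.mean(b - a for a, b in zip(token_times, token_times[1:]))
        if len(token_times) > 1
        else 0.0
    )
    return ttft, itl


async def sweep(concurrency_levels=(1, 8, 32, 128)):
    async with httpx.AsyncClient() as client:
        for c in concurrency_levels:
            results = await asyncio.gather(*(one_request(client) for _ in range(c)))
            ttfts = [r[0] for r in results]
            itls = [r[1] for r in results]
            print(
                f"concurrency={c:4d}  avg TTFT={statistics.mean(ttfts):.3f}s  "
                f"avg ITL={statistics.mean(itls) * 1000:.1f}ms"
            )


asyncio.run(sweep())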

Deploy NVIDIA HGX B200 on Vultr Cloud GPU