
LongCat Flash Thinking

LongCat-Flash-Thinking is a 560B-parameter MoE reasoning model with 512 experts, activating 18.6B to 31.3B parameters per token. It uses a 28-layer transformer with a hidden size of 6,144, 64 attention heads, and a 131K-token context length. The design includes zero-computation experts and MLA attention. Trained via a two-phase pipeline (long-CoT cold start followed by large-scale RL on the DORA system), it emphasizes formal reasoning, theorem proving, and agentic tool use, with domain-parallel RL improving stability and cross-domain performance.
Type: MoE LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and 5 more
License: MIT
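
The zero-computation experts noted above are what make the activated parameter count vary per token: tokens routed to them skip the expert FFN entirely. The sketch below is a minimal, illustrative Python toy of that idea (toy sizes and routing, not the released LongCat implementation).

PYTHON
# Illustrative sketch of an MoE layer with zero-computation experts (toy sizes,
# not the released LongCat implementation). Tokens routed to a zero-computation
# expert skip the expert FFN, so the activated parameter count varies per token.
import numpy as np

rng = np.random.default_rng(0)

D = 64        # hidden size (illustrative; the model uses 6,144)
N_FFN = 8     # FFN experts (illustrative; the model has 512)
N_ZERO = 4    # zero-computation experts (illustrative count)
TOP_K = 2     # experts selected per token (illustrative)

W_in = rng.standard_normal((N_FFN, D, 4 * D)) * 0.02   # per-expert FFN weights
W_out = rng.standard_normal((N_FFN, 4 * D, D)) * 0.02
router = rng.standard_normal((D, N_FFN + N_ZERO)) * 0.02

def moe_layer(x):
    """x: (tokens, D) -> (output, number of FFN experts actually run per token)."""
    logits = x @ router
    topk = np.argsort(-logits, axis=-1)[:, :TOP_K]
    gates = np.take_along_axis(logits, topk, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)

    out = np.zeros_like(x)
    ffn_used = np.zeros(x.shape[0], dtype=int)
    for t in range(x.shape[0]):
        for k in range(TOP_K):
            e = topk[t, k]
            if e < N_FFN:            # real expert: run its FFN
                h = np.maximum(x[t] @ W_in[e], 0.0) @ W_out[e]
                ffn_used[t] += 1
            else:                    # zero-computation expert: pass the token through
                h = x[t]
            out[t] += gates[t, k] * h
    return out, ffn_used

_, used = moe_layer(rng.standard_normal((5, D)))
print("FFN experts run per token:", used)   # varies per token -> variable activated parameters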

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
 --shm-size 128g \
 -p 8000:8000 \
 -v ~/.cache/huggingface:/root/.cache/huggingface \
 -e HF_TOKEN='YOUR_HF_TOKEN' \
 --ipc=host \
 lmsysorg/sglang:v0.5.9 \
 python3 -m sglang.launch_server \
 --model-path meituan-longcat/LongCat-Flash-Thinking \
 --host 0.0.0.0 \
 --port 8000 \
 --max-prefill-tokens 65536 \
 --max-running-requests 1024 \
 --attention-backend flashinfer \
 --tp 8 \
 --ep 8 \
 --mem-fraction-static 0.90 \
 --trust-remote-code
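
Once the container is up, SGLang serves an OpenAI-compatible API on port 8000. Below is a minimal query sketch, assuming the server is reachable at http://localhost:8000 and that the served model name matches the --model-path value above (check GET /v1/models if unsure).

PYTHON
from openai import OpenAI

# Assumes the server launched above is reachable locally and serves the model
# under its --model-path name; adjust base_url and model if your setup differs.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meituan-longcat/LongCat-Flash-Thinking",
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    max_tokens=1024,
    temperature=0.6,
)
print(response.choices[0].message.content)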

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
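The sweep below is a rough, self-contained way to reproduce that kind of test (it is not the harness used to produce the charts): a fixed prompt and output budget, streamed requests at increasing concurrency, reporting average TTFT and an approximate aggregate streaming rate. It assumes the server from the previous section is running on localhost:8000.

PYTHON
# Rough load sweep against the OpenAI-compatible endpoint (illustrative only).
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meituan-longcat/LongCat-Flash-Thinking"
PROMPT = "Summarize the rules of chess in detail."

def one_request():
    """Stream one completion; return (TTFT seconds, streamed chunks, total seconds)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 256,
        "stream": True,
    }
    start = time.perf_counter()
    ttft, chunks = None, 0
    with requests.post(URL, json=payload, stream=True, timeout=600) as r:
        r.raise_for_status()
        for line in r.iter_lines():
            if not line or not line.startswith(b"data: ") or line == b"data: [DONE]":
                continue
            if ttft is None:
                ttft = time.perf_counter() - start
            chunks += 1  # roughly one chunk per generated token
    return ttft, chunks, time.perf_counter() - start

for concurrency in (1, 4, 16, 64):
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(), range(concurrency)))
    wall = time.perf_counter() - t0
    ttfts = [r[0] for r in results if r[0] is not None]
    avg_ttft = sum(ttfts) / len(ttfts) if ttfts else float("nan")
    rate = sum(r[1] for r in results) / wall
    print(f"concurrency={concurrency:3d}  avg TTFT={avg_ttft:.2f}s  ~{rate:.1f} chunks/sec")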

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA B200 on Vultr Cloud GPU

How to Deploy LongCat Flash Thinking on NVIDIA GPUs | Vultr Docs