
Phi 4 Reasoning

Phi-4-reasoning is a dense transformer model optimized for structured chain-of-thought reasoning across math, science, and coding tasks. It is fine-tuned from Phi-4 on high-quality reasoning traces to improve step-by-step problem solving. The model has 14B parameters across 40 layers, with a hidden size of 5,120 and 40 attention heads (10 key-value heads). It supports a 32K-token context window and is trained on curated synthetic and filtered data, delivering strong reasoning performance with efficient inference.
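The dimensions above can be sanity-checked with a back-of-the-envelope parameter count. This is a rough sketch, not an official breakdown: the MLP intermediate size and vocabulary size are assumptions taken from the base Phi-4 configuration, and small terms (norms, biases) are ignored.

```python
# Rough parameter estimate from the published Phi-4-reasoning dimensions.
# INTERMEDIATE and VOCAB are assumed from the base Phi-4 config.
HIDDEN = 5120
LAYERS = 40
HEADS = 40
KV_HEADS = 10
INTERMEDIATE = 17920   # assumed, from base Phi-4
VOCAB = 100352         # assumed, from base Phi-4

head_dim = HIDDEN // HEADS                       # 128
attn = (
    HIDDEN * HIDDEN                              # Q projection
    + 2 * HIDDEN * KV_HEADS * head_dim           # K and V (grouped-query attention)
    + HIDDEN * HIDDEN                            # output projection
)
mlp = 3 * HIDDEN * INTERMEDIATE                  # gated MLP: gate, up, down
total = LAYERS * (attn + mlp) + VOCAB * HIDDEN   # + embedding table

print(f"~{total / 1e9:.1f}B parameters")         # lands near the quoted 14B
```

With these assumed sizes the estimate comes out close to the advertised 14B, which is a useful cross-check when planning GPU memory.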
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, +2 more
Release Date: December 11, 2024
License: MIT
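Phi-4-reasoning's chat template encourages the model to emit its chain of thought inside `<think>...</think>` tags before the final answer. A minimal helper for separating the two, assuming that tag convention holds, might look like this:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>;
    falls back to treating the whole text as the answer otherwise.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

sample = "<think>2 + 2 = 4</think>The answer is 4."
print(split_reasoning(sample))  # → ('2 + 2 = 4', 'The answer is 4.')
```

Stripping the reasoning trace this way is handy when you only want to display or log the final answer.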

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.18.1 \
  microsoft/Phi-4-reasoning \
  --tensor-parallel-size 2 \
  --max-model-len auto \
  --gpu-memory-utilization 0.95 \
  --max-num-batched-tokens 65536 \
  --max-num-seqs 1024 \
  --trust-remote-code
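Once the container is up, the server exposes an OpenAI-compatible API on port 8000. The sketch below builds a chat-completion request with only the standard library; the prompt and sampling settings are illustrative, and the commented-out lines only work while the server is running on localhost.

```python
import json
from urllib import request

# Chat-completion request against the vLLM server started above.
payload = {
    "model": "microsoft/Phi-4-reasoning",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two odd numbers is even."}
    ],
    "max_tokens": 2048,
    "temperature": 0.8,
}

req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# resp = request.urlopen(req)            # uncomment once the server is running
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The same endpoint also works with any OpenAI SDK by pointing its base URL at `http://localhost:8000/v1`.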

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
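The measurement loop described above can be sketched as a small concurrency sweep. This is a stub, not the actual harness used for the charts: `fake_stream` stands in for a real streaming client call, and the latencies it produces are placeholders.

```python
import asyncio
import time

async def fake_stream(n_tokens: int = 8):
    # Placeholder for a real streaming completion call.
    for _ in range(n_tokens):
        await asyncio.sleep(0.001)
        yield "tok"

async def measure_one():
    # Record time-to-first-token (TTFT) and mean inter-token latency (ITL).
    start = time.perf_counter()
    stamps = []
    async for _ in fake_stream():
        stamps.append(time.perf_counter())
    ttft = stamps[0] - start
    itl = (stamps[-1] - stamps[0]) / (len(stamps) - 1)
    return ttft, itl

async def run(concurrency: int):
    # Fire `concurrency` requests at once and average the per-request metrics.
    results = await asyncio.gather(*(measure_one() for _ in range(concurrency)))
    avg_ttft = sum(r[0] for r in results) / len(results)
    avg_itl = sum(r[1] for r in results) / len(results)
    return avg_ttft, avg_itl

for c in (1, 4, 16):  # sweep concurrency, as in the charts below
    ttft, itl = asyncio.run(run(c))
    print(f"concurrency={c:2d}  TTFT={ttft * 1000:.1f}ms  ITL={itl * 1000:.1f}ms")
```

Replacing `fake_stream` with a real streaming call to the server's `/v1/chat/completions` endpoint would turn this into a usable load generator.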

Benchmark charts:

- ITL vs Concurrency
- Time to First Token
- Throughput Scaling
- Total Tokens/sec vs Avg TTFT

How to Deploy Phi 4 Reasoning on NVIDIA GPUs | Vultr Docs