
Phi 4 Mini Reasoning

Phi-4-mini-reasoning is a lightweight dense transformer model optimized for multi-step mathematical and logical reasoning, with agentic tool-calling capabilities in constrained environments. It has 3.8B parameters across 32 layers, a hidden size of 3,072, and 24 attention heads using grouped-query attention with 8 key-value heads. The model supports a 128K-token context window via LongRoPE scaling, a 262K-token sliding window, and a 200K-token vocabulary. Trained on ~150B tokens, it delivers efficient, low-latency structured problem solving.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Reasoning, Mathematical Reasoning, and more
Group Release Date: December 11, 2024
License: MIT
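The grouped-query attention figures above imply a concrete KV-cache saving over full multi-head attention. A back-of-the-envelope sketch, assuming fp16 cache values and a head dimension derived from the listed specs (3,072 / 24 = 128; these derived values are assumptions, not from the model card):

```python
# Back-of-the-envelope KV-cache size per token for Phi-4-mini-reasoning,
# derived from the specs listed above (assumed fp16 cache, 2 bytes/value).
layers = 32
hidden = 3072
q_heads = 24
kv_heads = 8                      # grouped-query attention
head_dim = hidden // q_heads      # 128 (assumed: hidden size / query heads)
bytes_per_value = 2               # fp16

# Each layer caches one key and one value vector per KV head per token.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
mha_bytes_per_token = 2 * layers * q_heads * head_dim * bytes_per_value

print(f"GQA KV cache:  {kv_bytes_per_token / 1024:.0f} KiB/token")
print(f"MHA would be:  {mha_bytes_per_token / 1024:.0f} KiB/token")
print(f"Cache savings: {1 - kv_heads / q_heads:.0%}")
```

With 8 KV heads instead of 24, the per-token cache drops to a third of the full multi-head size, which is what makes high-concurrency serving of long contexts feasible on a single node.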

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run --gpus all \
  --shm-size 128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  --ipc=host \
  lmsysorg/sglang:v0.5.9 \
  python3 -m sglang.launch_server \
  --model-path microsoft/Phi-4-mini-reasoning \
  --host 0.0.0.0 \
  --port 8000 \
  --max-running-requests 1024 \
  --max-prefill-tokens 65536 \
  --tp 8 \
  --enable-piecewise-cuda-graph \
  --mem-fraction-static 0.95 \
  --trust-remote-code
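Once the server is up, it exposes an OpenAI-compatible API on port 8000. A minimal Python client sketch, assuming the container above is running locally (the `ask` helper name is illustrative, not part of SGLang):

```python
import json
import urllib.request

# OpenAI-compatible chat-completions endpoint exposed by the SGLang server
# launched above; assumes the container is reachable on localhost:8000.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_payload(question: str, max_tokens: int = 1024) -> dict:
    """Construct the JSON request body for a single reasoning query."""
    return {
        "model": "microsoft/Phi-4-mini-reasoning",
        "messages": [{"role": "user", "content": question}],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # deterministic decoding for math problems
    }

def ask(question: str) -> str:
    """Send the request and return the model's reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Inspect the request body without hitting the server.
print(json.dumps(build_payload("If 3x + 5 = 20, what is x?"), indent=2))
```

Call `ask("If 3x + 5 = 20, what is x?")` against the running server to stream a full chain-of-thought answer; only stdlib modules are used, so no client package is required.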

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.

Benchmark charts: ITL vs. Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs. Avg TTFT.
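The TTFT and ITL metrics in these charts can be derived from per-token arrival timestamps in a streamed response. A minimal sketch with hypothetical timing data (the helper names and values are illustrative, not from a specific benchmark harness):

```python
# Compute time-to-first-token (TTFT) and inter-token latency (ITL) from
# per-token arrival timestamps, the two metrics plotted in the charts above.
def ttft(token_times: list[float]) -> float:
    """Delay until the first generated token arrives (seconds)."""
    return token_times[0]

def mean_itl(token_times: list[float]) -> float:
    """Average gap between consecutive tokens after the first (seconds)."""
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical arrival times (seconds since request sent) for one response.
times = [0.18, 0.20, 0.22, 0.24, 0.26]
print(f"TTFT: {ttft(times) * 1000:.0f} ms")
print(f"ITL:  {mean_itl(times) * 1000:.0f} ms")
```

Under load, TTFT grows with queueing and prefill contention while ITL reflects decode throughput, which is why the two are plotted separately against concurrency.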


How to Deploy Phi 4 Mini Reasoning on NVIDIA GPUs | Vultr Docs