Tiny Aya Earth

Tiny Aya Earth is a 3.35B-parameter multilingual language model optimized for West Asian and African languages. It uses a 36-layer transformer with a hidden size of 2,048, 16 attention heads, and 4 KV heads. The architecture interleaves sliding-window attention (4,096-token window) with periodic global attention layers for efficient long-range interaction. It supports a context length of around 8K tokens and uses RoPE positional encoding. Trained on 70+ languages, it is designed for strong regional performance while remaining efficient for local deployment and downstream adaptation.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Text Classification, Multilingual
Release Date: 17 February 2026
License: CC-BY-NC-4.0
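
The interleaving of sliding-window and global attention described above can be pictured as a per-layer mask choice. Below is a minimal, illustrative PyTorch sketch, not the model's actual code: the every-fourth-layer global placement is an assumption (the card only says global layers occur periodically), and grouped-query attention over the 4 KV heads is omitted for brevity.

PYTHON
# Illustrative sketch of interleaved local/global attention masks.
# The 4:1 local-to-global layer ratio is an assumption.
import torch
import torch.nn.functional as F

NUM_LAYERS, WINDOW = 36, 4096

def attention_mask(layer_idx: int, seq_len: int) -> torch.Tensor:
    """Boolean mask: True = may attend. Causal everywhere; banded on local layers."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    causal = j <= i
    if (layer_idx + 1) % 4 == 0:           # assumed: every 4th layer is global
        return causal                       # full causal attention
    return causal & (i - j < WINDOW)        # sliding-window causal attention

# Usage with PyTorch's fused attention kernel (toy sizes):
q = k = v = torch.randn(1, 16, 128, 64)     # (batch, heads, seq, head_dim)
mask = attention_mask(layer_idx=0, seq_len=128)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

On local layers the banded mask bounds each token's attention span to the last 4,096 positions, which keeps long-context inference efficient; the periodic global layers restore full-range information flow.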

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.18.0 \
  CohereLabs/tiny-aya-earth \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 1024 \
  --trust-remote-code
Note

Serving CohereLabs/tiny-aya-earth requires gated model access via Hugging Face. Request access on the model page, then supply a valid token through the HF_TOKEN environment variable in the command above.
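
Once the container is up, it exposes an OpenAI-compatible API on port 8000. Below is a minimal smoke test with the openai Python client; the prompt is illustrative, and vLLM does not validate the API key, so any placeholder works.

PYTHON
# Minimal request against the vLLM OpenAI-compatible endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="CohereLabs/tiny-aya-earth",
    messages=[{"role": "user", "content": "Translate 'good morning' into Swahili."}],
    max_tokens=128,
)
print(response.choices[0].message.content)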

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
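
As a rough illustration of that setup, the sketch below issues N concurrent streaming requests against a local vLLM endpoint and records time to first token (TTFT) and inter-token latency (ITL). The endpoint, prompt, and concurrency sweep are assumptions, and each streamed SSE chunk is counted as one token for simplicity.

PYTHON
# Sketch of the load pattern described above (not the exact harness used).
import asyncio
import time

import httpx

URL = "http://localhost:8000/v1/completions"   # assumed local vLLM endpoint
BODY = {
    "model": "CohereLabs/tiny-aya-earth",
    "prompt": "Summarize the water cycle in two sentences.",  # fixed input
    "max_tokens": 256,                                        # fixed output budget
    "stream": True,
}

async def one_request(client: httpx.AsyncClient):
    start = time.perf_counter()
    ttft, itls, prev = None, [], None
    async with client.stream("POST", URL, json=BODY, timeout=None) as resp:
        async for line in resp.aiter_lines():
            if not line.startswith("data: ") or line.endswith("[DONE]"):
                continue                        # skip keep-alives and the sentinel
            now = time.perf_counter()
            if ttft is None:
                ttft = now - start              # first streamed token
            elif prev is not None:
                itls.append(now - prev)         # gap between consecutive tokens
            prev = now
    return ttft, itls

async def run(concurrency: int) -> None:
    async with httpx.AsyncClient() as client:
        results = await asyncio.gather(*(one_request(client) for _ in range(concurrency)))
    ttfts = [t for t, _ in results if t is not None]
    itls = [x for _, i in results for x in i]
    print(f"concurrency={concurrency:4d}  "
          f"avg TTFT={sum(ttfts) / max(len(ttfts), 1):.3f}s  "
          f"avg ITL={sum(itls) / max(len(itls), 1) * 1000:.1f}ms")

for c in (1, 8, 32, 128):                       # increasing concurrency sweep
    asyncio.run(run(c))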

Charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, Total Tokens/sec vs Avg TTFT

Deploy NVIDIA B200 on Vultr Cloud GPU