
Tiny Aya Base

Tiny Aya Base is a 3.35B-parameter multilingual language model designed for efficient deployment and broad language coverage. It uses a 36-layer transformer with a hidden size of 2,048, 16 attention heads, and grouped-query attention (4 KV heads). The architecture alternates sliding-window attention (4,096-token window) with periodic global attention layers for full-sequence interaction. It supports roughly 8K tokens of context in practice and uses RoPE for positional encoding. Trained across 70+ languages, it is optimized for balanced multilingual performance, downstream adaptation, and low-resource environments.
Type: Dense LLM
Capabilities: Text Generation, Instruction Following, Text Classification, Multilingual
Release Date: 17 February, 2026
Links:
License: CC-BY-NC-4.0
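
The architecture details above can be cross-checked against the model's published config.json. A minimal sketch, assuming the file is hosted alongside the gated weights on Hugging Face and that your token has been granted access:

CONSOLE
# Fetch and pretty-print the config to confirm layer count, hidden size, head counts,
# and sliding-window settings (requires gated access to the repository).
curl -s -H "Authorization: Bearer $HF_TOKEN" \
  https://huggingface.co/CohereLabs/tiny-aya-base/resolve/main/config.json | python3 -m json.tool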

Inference Instructions

Deploy and run this model on NVIDIA B200 GPUs using the command below. Copy the command to get started with inference.

CONSOLE
docker run -it --rm \
  --runtime=nvidia \
  --gpus all \
  --ipc=host \
  --shm-size=128g \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN='YOUR_HF_TOKEN' \
  vllm/vllm-openai:v0.18.0 \
  CohereLabs/tiny-aya-base \
  --tensor-parallel-size 8 \
  --max-model-len auto \
  --max-num-batched-tokens 65536 \
  --gpu-memory-utilization 0.95 \
  --chat-template ./examples/template_chatml.jinja \
  --max-num-seqs 1024 \
  --trust-remote-code
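
Once the server is up, you can verify the deployment with an OpenAI-compatible request against port 8000. A minimal sketch; the prompt and sampling parameters are illustrative:

CONSOLE
# Send a test request to the OpenAI-compatible chat endpoint exposed by vLLM.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "CohereLabs/tiny-aya-base",
        "messages": [{"role": "user", "content": "Translate \"good morning\" into French."}],
        "max_tokens": 64,
        "temperature": 0.3
      }'
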
Note

Serving CohereLabs/tiny-aya-base requires gated model access via Hugging Face and a custom ChatML chat template; without the template, OpenAI-compatible chat requests will fail.
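
Gated access is handled by accepting the model terms on its Hugging Face page and supplying HF_TOKEN as shown above; the chat template can be provided explicitly if the one referenced in the command is not available in your environment. A minimal sketch, assuming the Hugging Face CLI is installed and using a generic ChatML template written locally (the template contents and mount path are illustrative, not the file vLLM ships):

CONSOLE
# Optionally pre-populate the mounted Hugging Face cache so the container starts faster
# (requires that your token has been granted access to the gated repository).
huggingface-cli download CohereLabs/tiny-aya-base --token "$HF_TOKEN"

# Write a generic ChatML chat template; mount the directory into the container
# (e.g. -v $(pwd)/templates:/templates) and point --chat-template at it.
mkdir -p ./templates
cat > ./templates/template_chatml.jinja << 'EOF'
{% for message in messages %}{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
EOF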

Model Benchmarks

Each model was tested with a fixed input size and total token volume while increasing concurrency to measure serving performance under load.
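
A hedged sketch of one way to reproduce this kind of sweep against the server started above, using vLLM's bundled serving benchmark; flag names and defaults differ across vLLM releases, so treat the exact options as assumptions:

CONSOLE
# Keep input/output lengths and total request count fixed; sweep the concurrency level.
# Requires vLLM installed where the benchmark runs (e.g. pip install vllm).
for CONCURRENCY in 1 8 32 128 512; do
  vllm bench serve \
    --host localhost --port 8000 \
    --model CohereLabs/tiny-aya-base \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 256 \
    --num-prompts 1024 \
    --max-concurrency "$CONCURRENCY"
done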

Benchmark charts: ITL vs Concurrency, Time to First Token, Throughput Scaling, and Total Tokens/sec vs Avg TTFT.

Deploy NVIDIA B200 on Vultr Cloud GPU