Frequently asked questions and answers about Vultr's products, services, and platform features.
These are the frequently asked questions for Vultr Serverless Inference.
Currently, Vultr Serverless Inference supports a range of production-ready models across multiple categories. For language workloads, available models include Mistral-7B-v0.3, DeepSeek-R1, Llama-3.1-70B-Instruct-FP8, and Qwen2.5-32B-Instruct. Chat-optimized models include deepseek-r1-distill-qwen-32b, qwen2.5-coder-32b-instruct, deepseek-r1-distill-llama-70b, gpt-oss-120b, and kimi-k2-instruct. For speech generation and text-to-speech workloads, bark, bark-small, and xtts are supported. Image generation models include flux.1-dev, stable-diffusion-3.5-large, and stable-diffusion-3.5-medium. Support for additional model types may be added in the future.
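Because the model catalog may change over time, you can also check it programmatically. The sketch below assumes Vultr exposes an OpenAI-style "models" endpoint at the base URL shown; both the endpoint path and base URL are assumptions to confirm in the Vultr Customer Portal.

```python
# Sketch only: build (but do not send) a GET request for the model catalog
# via an OpenAI-compatible "models" endpoint. The base URL is an assumption --
# confirm the exact URL in the Vultr Customer Portal before use.
import json
import urllib.request

VULTR_BASE_URL = "https://api.vultrinference.com/v1"  # assumed endpoint

def build_list_models_request(api_key: str) -> urllib.request.Request:
    """Build a GET request that lists the currently available models."""
    return urllib.request.Request(
        f"{VULTR_BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

# Sending it requires a valid API key and network access, e.g.:
# with urllib.request.urlopen(build_list_models_request("YOUR_KEY")) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```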
You can monitor your usage and costs by navigating to the "Usage" tab of your Vultr Serverless Inference subscription in the Vultr Customer Portal. Here, you will find details on your current token usage, overage, and any associated costs. You can also view your API key and other subscription details in the "Overview" tab.
Yes, you can integrate Vultr Serverless Inference with your existing machine learning pipeline. To do this, replace your current inference API base URL (such as OpenAI's) with Vultr's API URL, then use your Vultr API key for authentication to incorporate Vultr Serverless Inference into your workflow.
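The URL swap can be sketched as follows. This is a minimal sketch, not a definitive integration: the base URL and model slug are assumptions, so confirm both in the Vultr Customer Portal.

```python
# Sketch: pointing an OpenAI-style chat completion request at Vultr instead
# of OpenAI. The base URL and model slug are assumptions -- check your portal.
import json
import urllib.request

VULTR_BASE_URL = "https://api.vultrinference.com/v1"  # assumed; swaps in for OpenAI's base URL

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at Vultr."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{VULTR_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # your Vultr API key
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

If your pipeline already uses the official `openai` Python client, the same swap is typically just constructing the client with `base_url` set to Vultr's URL and `api_key` set to your Vultr key, with no other code changes.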
You can regenerate your Vultr Serverless Inference API key from the "Overview" tab of your subscription in the Vultr Customer Portal. Regenerating immediately invalidates the previous API key and generates a new one, which is useful if a key may have been exposed; any integrations using the old key must be updated.
The quality of the outputs from Vultr Serverless Inference depends on the machine learning model you are using. If the outputs are not meeting your expectations, consider trying a different model or refining your prompts. Vultr provides the infrastructure, but the model's performance is a key factor in the output quality.
Yes, you can test inference workloads by using the "Prompt" tab in the Vultr Serverless Inference section of the Customer Portal. This allows you to input prompts, select a model, and adjust settings such as max tokens and temperature to see how the model responds before running larger workloads.
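The settings exposed in the "Prompt" tab correspond to standard OpenAI-style request parameters, so a prompt tuned interactively can be reproduced in code. The sketch below assembles such a payload; the clamping ranges are common conventions for OpenAI-style APIs, not Vultr-documented limits, so treat them as assumptions.

```python
# Sketch: assemble a chat completion payload using the same knobs the
# "Prompt" tab exposes (model, max tokens, temperature). Ranges are
# conventional assumptions, not Vultr-documented limits.

def make_prompt_payload(model: str, prompt: str,
                        max_tokens: int = 512, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat payload with clamped sampling settings."""
    # Clamp temperature to the conventional [0.0, 2.0] range: low values
    # give more deterministic output, high values more varied output.
    temperature = max(0.0, min(2.0, temperature))
    if max_tokens < 1:
        raise ValueError("max_tokens must be at least 1")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
```

Running the same prompt at several temperatures, as the portal's Prompt tab allows interactively, is a quick way to compare a model's behavior before committing to larger workloads.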
Vultr takes data security seriously. All data transmitted to and from Vultr Serverless Inference is encrypted in transit, and the service is designed following security best practices to protect your data and workloads.