Frequently asked questions and answers about Vultr's products, services, and platform features.
These are the frequently asked questions for Vultr Serverless Inference.
Vultr Serverless Inference supports a growing catalog of production-ready models across multiple categories, including large language models, chat-optimized models, code generation models, text-to-speech models, and image generation models. The catalog is updated regularly as new models are released. To view the currently supported models, navigate to the Serverless Inference section in the Vultr Console and check the model selector in the Prompt tab.
You can monitor your usage and costs by navigating to the "Usage" tab of your Vultr Serverless Inference subscription in the Vultr Console. Here, you will find details on your current token usage, overage, and any associated costs. You can also view your API key and other subscription details in the "Overview" tab.
Yes, you can integrate Vultr Serverless Inference with your existing machine learning pipeline. To do this, replace your current inference API URL (such as OpenAI's base API URL) with Vultr’s API URL. Then, use your Vultr API key for authentication to seamlessly incorporate Vultr Serverless Inference into your workflow.
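As a minimal sketch of that swap, assuming Vultr Serverless Inference exposes an OpenAI-compatible chat completions endpoint (the base URL below is illustrative; use the one shown in your Vultr Console), the request your pipeline already builds for OpenAI stays the same and only the URL and bearer token change:

```python
import json

# Assumption: illustrative base URL; substitute the endpoint from your Vultr Console.
BASE_URL = "https://api.vultrinference.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion request aimed at Vultr.

    The payload shape matches what an OpenAI integration sends, so
    migrating means changing only the URL and the Authorization header.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # Vultr API key, not an OpenAI key
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

The returned pieces can be handed to any HTTP client (`requests`, `urllib`, etc.), so the rest of the pipeline is untouched.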
You can regenerate your Vultr Serverless Inference API key from the Overview page in the Vultr Console. This will invalidate the previous API key and generate a new one for enhanced security.
The quality of the outputs from Vultr Serverless Inference depends on the machine learning model you are using. If the outputs are not meeting your expectations, consider trying a different model or refining your prompts. Vultr provides the infrastructure, but the model's performance is a key factor in the output quality.
Yes, you can test inference workloads by using the "Prompt" tab in the Vultr Serverless Inference section of the Vultr Console. This allows you to input prompts, select a model, and adjust settings such as max tokens and temperature to see how the model responds before running larger workloads.
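The same knobs exposed in the Prompt tab can be exercised programmatically before scaling up. Below is a hedged sketch of a request-body builder mirroring those settings; the parameter ranges are assumptions (most chat APIs accept a temperature between 0 and 2), so check Vultr's documentation for the actual limits:

```python
import json

def build_inference_payload(model: str, prompt: str,
                            max_tokens: int = 256,
                            temperature: float = 0.7) -> str:
    """Mirror the Prompt tab settings (model, max tokens, temperature)
    in an OpenAI-style request body.

    The validation ranges here are assumptions, not Vultr-documented limits.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature should be between 0 and 2")
    if max_tokens < 1:
        raise ValueError("max_tokens must be positive")
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
```

Sweeping `temperature` across a few values with a fixed prompt is a quick way to compare how deterministic or creative a given model's responses are before committing to larger workloads.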
Vultr takes data security seriously. All data transmitted to and from Vultr Serverless Inference is encrypted in transit, and the service follows security best practices to protect your data and workloads.