Monitoring the Vultr Serverless Inference service is essential for maintaining the performance and cost-efficiency of your AI deployments. By tracking the usage of workloads such as "Prompt, Chat, & Embeddings" and "Text-to-Speech," you gain insight into resource consumption, can optimize performance, and can catch potential bottlenecks early. Proactive monitoring keeps your AI applications running smoothly while keeping operational costs under control.
Follow this guide to monitor the usage of Serverless Inference on your Vultr account using the Vultr Customer Portal, API, or CLI.
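The API and CLI examples in this guide read your Vultr personal access token from the VULTR_API_KEY environment variable. Export it in your shell session before running them; the value below is a placeholder for your actual key.

$ export VULTR_API_KEY="your-api-key"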
Navigate to Products, click Serverless, and then click Inference.
Click your target inference service to open its management page.
Open the Usage page.
View the usage statistics for all inference endpoints.
Send a GET request to the List Inference endpoint and note the target inference service's ID.
$ curl "https://api.vultr.com/v2/inference" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}"
Send a GET request to the Inference Usage endpoint.
$ curl "https://api.vultr.com/v2/inference/<inference-id>/usage" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}"
List all available inference services and note the target service's ID.
$ vultr-cli inference list
Get the target inference service's usage.
$ vultr-cli inference usage get <inference-id>
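For ongoing monitoring, you can re-run the usage command at a fixed interval with the standard watch utility, which redraws the output in your terminal. This is a minimal sketch; the 300-second interval is an arbitrary choice.

$ watch -n 300 vultr-cli inference usage get <inference-id>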