How to Monitor Vultr Serverless Inference

Updated on September 23, 2024

Monitoring your Vultr Serverless Inference subscription is essential for maintaining the performance and cost-efficiency of your AI deployments. By tracking the usage of AI workloads such as "Prompt, Chat, Text-to-Speech" and "RAG Chat Completion," you gain insight into resource consumption, can optimize performance, and can prevent potential bottlenecks. Proactive monitoring keeps your AI applications running smoothly, delivering consistent and reliable results while keeping operational costs under control.

Follow this guide to monitor the usage of Serverless Inference on your Vultr account using the Vultr Customer Portal, API, or CLI.

  • Vultr Customer Portal
  • Vultr API
  • Vultr CLI
Vultr Customer Portal

  1. Navigate to Products, click Serverless, and then click Inference.

  2. Click your target inference subscription to open its management page.

  3. Open the Usage page.

  4. View the usage statistics for all inference endpoints.

Vultr API

  1. Send a GET request to the List Serverless Inference endpoint and note the target inference subscription's ID.

    console
    $ curl "https://api.vultr.com/v2/inference" \
        -X GET \
        -H "Authorization: Bearer ${VULTR_API_KEY}"
    
  2. Send a GET request to the Serverless Inference Usage Information endpoint to retrieve usage details about all inference endpoints.

    console
    $ curl "https://api.vultr.com/v2/inference/{inference-id}/usage" \
        -X GET \
        -H "Authorization: Bearer ${VULTR_API_KEY}"
    
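The two API calls above can be combined into a single script. The sketch below, using only the Python standard library, lists every inference subscription and fetches its usage details; the `subscriptions` and `id` response keys are assumptions based on the usual shape of Vultr v2 list responses, so verify them against the API reference before relying on this.

```python
# Sketch: fetch usage for every Serverless Inference subscription via the
# Vultr v2 API, mirroring the two curl requests above.
# Assumes VULTR_API_KEY is exported in the environment.
import json
import os
import urllib.request

API_BASE = "https://api.vultr.com/v2"


def usage_path(inference_id: str) -> str:
    """Build the usage endpoint path for one subscription."""
    return f"/inference/{inference_id}/usage"


def vultr_get(path: str, api_key: str) -> dict:
    """Send an authenticated GET request and decode the JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def all_inference_usage(api_key: str) -> dict:
    """Map each inference subscription ID to its usage details.

    The "subscriptions" and "id" keys are assumptions about the
    List Serverless Inference response shape.
    """
    subs = vultr_get("/inference", api_key).get("subscriptions", [])
    return {sub["id"]: vultr_get(usage_path(sub["id"]), api_key) for sub in subs}


if __name__ == "__main__":
    key = os.environ.get("VULTR_API_KEY")
    if key:  # only hit the API when a key is actually configured
        print(json.dumps(all_inference_usage(key), indent=2))
```

Running the script with `VULTR_API_KEY` set prints a JSON object keyed by subscription ID, one usage record per subscription.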
Vultr CLI

  1. List all the inference subscriptions available and note the target inference subscription's ID.

    console
    $ vultr-cli inference list
    
  2. Get the target inference subscription's usage.

    console
    $ vultr-cli inference usage get <inference-id>
    
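The two CLI steps above can also be chained to report usage for every subscription in one pass. This is a sketch: it assumes `vultr-cli` is installed and configured with your API key, and that the subscription ID is the first column of the table printed by `vultr-cli inference list` (check your CLI version's output before relying on that column position).

```shell
#!/bin/sh
# Sketch: print usage for every Serverless Inference subscription.
# Assumes vultr-cli is installed and VULTR_API_KEY is exported.

# extract_ids pulls the assumed first (ID) column from the table output,
# skipping the header row and blank lines.
extract_ids() {
  awk 'NR > 1 && NF { print $1 }'
}

if command -v vultr-cli >/dev/null 2>&1; then
  vultr-cli inference list | extract_ids | while read -r id; do
    echo "== Usage for ${id} =="
    vultr-cli inference usage get "${id}"
  done
else
  echo "vultr-cli not found in PATH" >&2
fi
```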
