How to Use the Prompt Endpoint in Vultr Serverless Inference

Updated on September 23, 2024

The Vultr Serverless Inference prompt endpoint allows users to send a single prompt to an AI model and receive a generated response. This service supports interactive and dynamic AI interactions, enabling users to obtain specific outputs based on their prompts and integrate those responses into their applications. By using this feature, you can make your application more responsive and adaptable to user input, producing more personalized and relevant output.

Follow this guide to use the prompt endpoint on your Vultr account through the Vultr Customer Portal.

  1. Navigate to Products, click Serverless, and then click Inference.

  2. Click your target inference subscription to open its management page.

  3. Open the Prompt page.

  4. Select a preferred model.

  5. Provide values for Max Tokens, Seed, Temperature, Top-k, and Top-p.

  6. Provide a prompt and click Prompt.

  7. Click Reset to provide a new prompt.
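Conceptually, the portal steps above amount to assembling a single request with a model, a prompt, and the sampling parameters from step 5. The Python sketch below builds such a request body for illustration; the field names, default values, and model identifier are assumptions for this sketch, not API details taken from this guide, which covers only the Customer Portal.

```python
import json

def build_prompt_payload(model, prompt, max_tokens=512, seed=None,
                         temperature=0.7, top_k=40, top_p=0.9):
    """Assemble a JSON body mirroring the portal's prompt form fields.

    Field names here (max_tokens, seed, temperature, top_k, top_p) are
    illustrative; they mirror the portal labels, not a documented schema.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,      # upper bound on generated tokens
        "temperature": temperature,    # higher = more random sampling
        "top_k": top_k,                # sample from the k most likely tokens
        "top_p": top_p,                # nucleus sampling probability mass
    }
    if seed is not None:
        payload["seed"] = seed         # fixed seed for reproducible sampling
    return payload

# Example: a hypothetical model name and prompt, with a fixed seed.
body = build_prompt_payload("example-model", "Summarize serverless inference.",
                            seed=42)
print(json.dumps(body, indent=2))
```

Changing the prompt or any parameter and rebuilding the payload is the programmatic analogue of clicking Reset and submitting a new prompt in the portal.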
