
How to Build AI Workflows Using N8N on Vultr Cloud GPU

Updated on 29 October, 2025
Build and automate AI-powered workflows using n8n and Ollama on a Vultr Cloud GPU server with seamless API integrations.

n8n is an open-source workflow automation platform that enables you to build and automate workflows by connecting nodes to represent various services and actions. n8n combines visual building with custom code to create and integrate multi-step AI agents into existing applications. n8n offers flexible and developer-friendly automation features, including the following:

  • Event-Driven Execution: n8n uses trigger nodes to execute workflows based on specific events, such as incoming webhooks, enabling you to automate real-time application and system changes.
  • Visual Workflow Builder: n8n offers a web-based workflow management interface that lets you build workflows by connecting nodes. Each node represents an action, trigger, or service integration.
  • Extensive Integration Library: n8n supports over 350 built-in integrations, such as GitHub, Google Workspace, Slack, and Discord, and can connect workflows to any other service via its API.
  • Native JavaScript Support: n8n supports custom JavaScript code, enabling you to create advanced workflows from code.

In this guide, you will build AI-powered workflows using n8n on a Vultr Cloud GPU server.

Prerequisites

Before you begin, you need to:

  • Deploy a Vultr Cloud GPU server with the NVIDIA drivers and Docker installed.
  • Install n8n on the server and point a domain such as n8n.example.com to the instance.

Access and Set Up n8n

n8n requires a valid user account to access the main workflow dashboard. Follow the steps below to access n8n and set up the workflow management dashboard on your server.

  1. Access the n8n setup page using a web browser like Chrome.

    http://n8n.example.com
  2. Enter your active email address, first name, last name, and a strong password in the respective fields.

    Setup User Information

  3. Click Next to save the owner account information.

  4. Set your n8n company information and click Get Started to save it.

    Setup Company Information

  5. Enter your active email address to generate a free license key to use with n8n.

  6. Access your mailbox and copy the license key.

    n8n License Key Information

  7. Click your username in the bottom left corner, and select Settings from the list.

  8. Click Enter activation key and paste the license key to unlock advanced n8n features.

  9. Click Activate to apply the license key.

    Activate n8n License

Create Basic Workflows Using n8n

Follow the steps below to create a basic HTTP webhook workflow in n8n.

  1. Navigate to Overview within the n8n Workflow management interface.

    Create Workflow

  2. Click the Create Workflow button in the top-right corner of the page.

  3. Click Add first step.

  4. Enter webhook in the search field and press Enter to browse the available triggers.

  5. Select Webhook from the list of options to open its configuration dialog.

    Add Webhook node

  6. Click the HTTP Method drop-down and select GET from the list of options.

  7. Remove the default value in the Path field and set /greetings as the new path.

    Configure Webhook Node

  8. Click the Respond drop-down and select Using 'Respond to Webhook' Node from the list of options.

  9. Click Back to Canvas to save the node configuration.

  10. Click + to add a new node to your canvas.

  11. Search and select Respond to Webhook from the list of options.

    Add Respond Node

  12. Click the Respond With drop-down and select JSON from the list of options.

  13. Replace the default Response Body configuration with the following contents.

    json
    {
      "message": "Greetings from Vultr! The workflow is successfully executed",
      "status": "success",
      "timestamp": "{{ $now }}"
    }
    
  14. Click Back to Canvas to save the response configuration.

  15. Toggle the Inactive switch on the top navigation bar to activate the workflow.

  16. Click Got it and verify that the workflow is active.

    Activate Workflow

  17. Click Execute Workflow to start the workflow.

  18. Double-click the Webhook node to open its configuration page.

    Copy Webhook URL

  19. Copy the Test URL from the Webhook URLs section to use in a curl request.

  20. Click Back to canvas to access the workflow.

  21. Open a new terminal on your workstation.

  22. Send a new GET request to the Webhook URL.

    console
    $ curl https://n8n.example.com/webhook-test/greetings
    

    Verify that the request is accepted with a Greetings from Vultr response in your output, similar to the one below.

    {"message":"Greetings from Vultr! The workflow is successfully executed","status":"success","timestamp":"2025-09-13T20:57:56.959-04:00"}
  23. Verify that your workflow executes within the n8n interface and monitor the node output within the Logs pane.

    Verify Workflow Execution

  24. Click Save on the top navigation bar to keep all changes in your workflow.

Create AI Workflows Using n8n

AI agents are autonomous software systems that use large language models (LLMs) to understand goals, plan, and execute tasks with minimal human intervention. In n8n, AI agents require API credentials to connect to a model provider such as OpenAI, Anthropic (Claude), or Mistral, and they store executed tasks in memory for reference and reuse in your workflow. You can integrate AI agents with popular services to perform tasks such as searching the web, sending reminders, posting Slack, Discord, Telegram, or WhatsApp messages, and managing calendar events.

Follow the steps below to install Ollama on your server, download and run self-hosted models, and create automated AI Agent Workflows to perform basic tasks in your workflow.

Install Ollama

Ollama is an open-source tool for running large language models (LLMs) locally, without relying on cloud-hosted models. It exposes OpenAI-compatible endpoints, making it compatible with AI workflows in n8n, with tool and function support depending on the downloaded model. Follow the steps below to install Ollama using Docker and download a model such as gpt-oss to use with AI workflows in n8n.

  1. Check the GPU information on your server to verify that the NVIDIA drivers are ready for model execution with Ollama.

    console
    $ nvidia-smi
    
  2. Run Ollama using Docker with GPU acceleration.

    console
    $ sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    
  3. Verify the installed Ollama version.

    console
    $ sudo docker exec -it ollama ollama --version
    
  4. Visit the Ollama models repository and note the large language models (LLMs) you want to download to your server.

    Browse Ollama Models

  5. Download a model such as gpt-oss:20b using Ollama to use with n8n.

    console
    $ sudo docker exec -it ollama ollama pull gpt-oss:20b
    
  6. List all downloaded models.

    console
    $ sudo docker exec -it ollama ollama list
    

    Output:

    NAME           ID              SIZE     MODIFIED      
    gpt-oss:20b    aa4295ac10c3    13 GB    2 minutes ago   
  7. Allow connections to Ollama's port 11434 through the firewall to enable API connections using your domain or the server's IP address.

    console
    $ sudo ufw allow 11434/tcp
    
  8. Reload UFW to apply the firewall configuration changes.

    console
    $ sudo ufw reload
    

Create AI Agents in n8n

Follow the steps below to create a new AI Agent that executes basic tasks in your workflow using Ollama.

  1. Click Overview within the n8n workflow management interface.

  2. Click Create New Workflow.

  3. Click + and enter chat in the search bar.

    Search Chat Trigger

  4. Select Chat Trigger from the list of triggers to open its configuration pane.

  5. Click Back to canvas to use the trigger without any additional configurations.

  6. Click + next to the node, then search and select AI Agent from the list of options.

    Add AI Agent Node

  7. Verify that the Source for Prompt is set to Connect Chat Trigger Node, and click Back to canvas.

  8. Click + on the Chat Model input.

  9. Search for ollama and select Ollama Chat Model from the list of model options.

    Add Ollama to AI Agent

  10. Click the Select Credential drop-down and select Create new credential.

  11. Replace the Base URL with your server's IP address and the Ollama port, in the format http://SERVER-IP:11434.

    Set Up Ollama Credential

  12. Click Save to test the Ollama connection and close the configuration dialog.

  13. Click the Model drop-down and select the base model to use with the AI agent in your workflow.

    Select the Ollama Chat Model

  14. Click Back to canvas to switch to your workflow.

  15. Click + on the Memory input and select Simple Memory from the list of options to use n8n's built-in memory.

    Add Agent Memory

  16. Click Back to canvas to use the memory with the default configuration.

  17. Click + on the Tool input, search and select Calculator from the list of available tools.

    Add Agent Tool

  18. Click Back to canvas and click Save to keep the workflow changes.

  19. Toggle the Inactive switch on the top navigation bar to activate the workflow.

    Activate the AI Workflow

  20. Click Open chat to open the bottom chat interface.

  21. Enter a basic calculation prompt within the Chat pane. For example, What is 3345 multiplied by 17, and divided by 5?.

  22. Press Enter to execute the workflow and monitor the AI agent execution.

    Monitor the AI Agent Workflow

  23. Verify that the AI agent stores the prompt in memory and uses Ollama to process the output in your workflow.

    View the AI Agent output

    You have executed an AI workflow in n8n using Ollama and stored all executed prompts in memory with the AI Agent node. You can add new actions to the AI agent to process its results with additional services such as Google Drive, Google Sheets, and Telegram.

Conclusion

In this guide, you installed and configured n8n to build automated AI workflows using AI agents on a Vultr Cloud GPU instance. You can use n8n to build advanced AI agents and integrate existing services via API to perform multiple automated tasks in your workflow. For more information and workflow templates, visit the n8n documentation.
