Why Can't I Reach an Attached Instance Through My Vultr Load Balancer?

Updated on 15 September, 2025

Troubleshooting guide for connectivity issues between Vultr Load Balancers and backend instances, covering common network configuration problems and instance health issues.


A failure to reach an instance behind a Vultr Load Balancer usually indicates a breakdown in the connection between the load balancer and the instance, or an issue on the instance itself. This can result from misconfigured networking, unhealthy instances, firewall restrictions, or application/service failures.

Preliminary Checks

  • Ensure all instances attached to the load balancer are in a Running state. Stopped or suspended instances will be automatically removed from the load balancer’s routing pool.
  • In the Vultr Customer Portal, navigate to your Load Balancer and go to Resources → Instances. Verify that each attached instance appears in the list with a passing health check. Instances that fail health checks are automatically excluded from routing and will not receive traffic, which can make them appear unreachable externally. Make sure the health check protocol, port, and path are correctly configured to match your application’s expected response.
  • If network access to the instance fails, verify root credentials and use the Vultr Console to access the instance directly. From the console, inspect system logs, running services, and network configuration without relying on the load balancer.
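
The health check described above can be sanity-checked from the instance itself. Below is a minimal sketch, assuming an HTTP health check where any 2xx response counts as passing; `/health` and `<port>` are placeholders for the path and port configured on your load balancer:

```shell
#!/bin/sh
# Sketch: probe the health-check endpoint locally, the way the load
# balancer would. Assumption: a 2xx status is treated as a passing check.

is_healthy() {
  case "$1" in
    2[0-9][0-9]) return 0 ;;
    *)           return 1 ;;
  esac
}

# Probe from the instance itself (e.g., via the Vultr Console):
#   code="$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:<port>/health)"
#   is_healthy "$code" && echo "health check would pass" || echo "health check would fail"
is_healthy 200 && echo "200: pass"
is_healthy 503 || echo "503: fail"
```

If the local probe fails, the problem is on the instance (service down or wrong path/port); if it passes but the load balancer still marks the instance unhealthy, look at firewall or VPC rules between the two.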

Connectivity Validation

  • Confirm that the application or service is running and listening on the expected ports:

    ```console
    $ sudo ss -plunt
    ```

    Verify that the correct services are bound to the appropriate interfaces (often 0.0.0.0 or the private VPC IP) and ports.

  • Confirm that Vultr Firewalls or VPC rules allow inbound traffic from the load balancer to the instances on the required ports. ICMP traffic should also be allowed for diagnostic tests if needed.

  • Test connectivity from your local machine to both the load balancer and the instance:

    ```console
    $ curl -v http://<load-balancer-ip>
    $ telnet <instance-ip> <port>
    ```

    If successful, curl -v http://<load-balancer-ip> returns an HTTP response with headers from your application, such as HTTP/1.1 200 OK or another valid status like 301, 302, or 403, confirming that the load balancer is forwarding traffic correctly. Similarly, telnet <instance-ip> <port> establishes a TCP connection and displays Connected to <instance-ip>, which indicates that the service on that port is listening and reachable.

    If unsuccessful, curl -v may show Connection refused when the service is not listening or is blocked by a firewall; Connection timed out when there is no response due to firewall rules, VPC misconfiguration, or the instance being offline; or Could not resolve host when a hostname is used with a DNS misconfiguration. Similarly, telnet may display Unable to connect to remote host: Connection refused when the port is closed or blocked, or hang until timeout if no network path exists between your machine and the instance or load balancer.

  • When connecting via a custom hostname, verify that its A record points to the load balancer’s public IP. Use:

    ```console
    $ dig <hostname> A
    ```

    Ensure the ANSWER SECTION contains the load balancer IP. Keep in mind DNS propagation can take up to 24 hours.
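
The A-record check is easy to script so it can be re-run while DNS propagates. A minimal sketch, where `<hostname>` and `<lb-ip>` are placeholders for your values and `dig` (from dnsutils/bind-utils) is assumed to be installed; the comparison helper itself needs no network access:

```shell
#!/bin/sh
# Sketch: confirm a hostname's A record already points at the load
# balancer's public IP. Example IPs below are documentation addresses.

check_a_record() {
  resolved="$1"; expected="$2"
  if [ "$resolved" = "$expected" ]; then
    echo "OK: A record matches load balancer IP $expected"
  else
    echo "MISMATCH: got '$resolved', expected $expected (DNS may still be propagating)"
  fi
}

# Usage with a live lookup:
#   check_a_record "$(dig +short <hostname> A | head -n 1)" <lb-ip>
check_a_record 203.0.113.10 203.0.113.10
```

A mismatch here explains an "unreachable" load balancer even when every instance is healthy: clients are simply connecting to the wrong address.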

Remediation Actions

Once you identify the failing layer, apply the matching fix: start stopped instances, correct the health check protocol, port, or path, adjust firewall or VPC rules to admit load balancer traffic, restart the application or service, or repair DNS records. Then re-run the checks above to confirm traffic flows through the load balancer again. By systematically checking instance health, service availability, networking rules, and load balancer routing, you can identify the root cause of connectivity failures and restore normal traffic flow.
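
The triage above can also be scripted: curl's documented exit codes map directly onto the failure modes described earlier (6 = could not resolve host, 7 = could not connect/connection refused, 28 = operation timed out). A sketch, with `<load-balancer-ip>` as a placeholder:

```shell
#!/bin/sh
# Sketch: run curl against the load balancer and translate its exit code
# into the failure modes covered in this guide.

explain_curl_exit() {
  case "$1" in
    0)  echo "success: load balancer responded" ;;
    6)  echo "could not resolve host: check DNS records" ;;
    7)  echo "connection refused: service not listening or blocked by firewall" ;;
    28) echo "timed out: firewall rules, VPC misconfiguration, or instance offline" ;;
    *)  echo "curl failed with exit code $1" ;;
  esac
}

# Usage:
#   curl -s --max-time 10 -o /dev/null http://<load-balancer-ip>
#   explain_curl_exit $?
explain_curl_exit 7
```

Running this from both your local machine and another instance in the same VPC helps separate external DNS or firewall problems from internal routing problems.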