Troubleshooting guide for connectivity issues between Vultr Load Balancers and backend instances, covering common network configuration problems and instance health issues.
A failure to reach an instance behind a Vultr Load Balancer usually indicates a breakdown in the connection between the load balancer and the instance, or an issue on the instance itself. This can result from misconfigured networking, unhealthy instances, firewall restrictions, or application/service failures.
Confirm that the application or service is running and listening on the expected ports:
$ sudo ss -plunt
Verify that the correct services are bound to the appropriate interfaces (often 0.0.0.0 or the private VPC IP) and ports.
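As a minimal sketch of that check, the function below parses saved `ss -plunt` output and reports which address a given port is bound to. The `check_binding` name and the sample output are illustrative; in practice you would feed it the live output of `ss -plunt` on the instance.

```shell
#!/bin/sh
# Sketch: given a copy of `ss -plunt` output, report whether a service
# is bound to a given port, and on which local address.
# check_binding and the sample output below are illustrative.

check_binding() {
    # $1 = ss output, $2 = port number
    printf '%s\n' "$1" | awk -v port=":$2" '
        $5 ~ port"$" { print $5; found=1 }
        END { exit !found }'
}

ss_output='Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp   LISTEN 0      511        0.0.0.0:80        0.0.0.0:*
tcp   LISTEN 0      128      127.0.0.1:5432      0.0.0.0:*'

check_binding "$ss_output" 80      # prints 0.0.0.0:80 (reachable from the load balancer)
check_binding "$ss_output" 5432    # prints 127.0.0.1:5432 (loopback only, unreachable)
```

A service bound only to 127.0.0.1 will pass local tests on the instance but still fail the load balancer's health checks.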
Confirm that Vultr Firewalls or VPC rules allow inbound traffic from the load balancer to the instances on the required ports. ICMP traffic should also be allowed for diagnostic tests if needed.
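If the instance also runs a host firewall, its rules must allow the same traffic. A sketch assuming UFW is in use; substitute your load balancer's address (or your VPC range) and the ports your backends serve:

```shell
# Allow load balancer traffic through a UFW host firewall (illustrative).
$ sudo ufw allow proto tcp from <load-balancer-ip> to any port 80,443
$ sudo ufw status numbered   # confirm the new rules are active
```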
Test connectivity from your local machine to both the load balancer and the instance:
$ curl -v http://<load-balancer-ip>
$ telnet <instance-ip> <port>
If successful, curl -v http://<load-balancer-ip> returns an HTTP response with headers from your application, such as HTTP/1.1 200 OK or another valid status like 301, 302, or 403, confirming that the load balancer is forwarding traffic correctly. Similarly, telnet <instance-ip> <port> establishes a TCP connection and displays Connected to <instance-ip>, which indicates that the service on that port is listening and reachable.
If unsuccessful, curl -v may show Connection refused if the service is not listening or is blocked by a firewall, Connection timed out if there is no response due to firewall rules, VPC misconfiguration, or the instance being offline, or Could not resolve host if there is a DNS misconfiguration when using a hostname. In the same way, telnet may display Unable to connect to remote host: Connection refused when the port is closed or blocked, or it may hang until timeout if no network path exists between your machine and the instance or load balancer.
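When scripting these tests, curl's exit code distinguishes the failure modes described above: 6 means the host could not be resolved, 7 means the connection was refused or failed, and 28 means the operation timed out. The `diagnose_curl` helper below is illustrative, not part of curl itself:

```shell
#!/bin/sh
# Map common curl exit codes to the likely causes discussed above.
# diagnose_curl is an illustrative helper, not a curl feature.

diagnose_curl() {
    case "$1" in
        0)  echo "OK: received an HTTP response" ;;
        6)  echo "Could not resolve host: check DNS records" ;;
        7)  echo "Connection refused: service down or port blocked" ;;
        28) echo "Timed out: firewall rules, VPC misconfiguration, or instance offline" ;;
        *)  echo "Other curl error (exit code $1)" ;;
    esac
}

# Typical usage against a load balancer:
#   curl -sv --max-time 10 http://<load-balancer-ip>; diagnose_curl $?
diagnose_curl 7    # prints the "Connection refused" explanation
```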
When connecting via a custom hostname, verify that the A record points to the load balancer’s public IP. Use:
$ dig <hostname> A
Ensure the ANSWER SECTION contains the load balancer IP. Keep in mind DNS propagation can take up to 24 hours.
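This comparison can be scripted. The `dns_points_at` helper below is illustrative; it takes the output of `dig +short <hostname> A` as an argument so the matching logic can be checked offline, and the 192.0.2.x addresses are documentation-only examples:

```shell
#!/bin/sh
# Check that one of the returned A records matches the load balancer IP.
# dns_points_at and the sample records are illustrative.

dns_points_at() {
    # $1 = dig +short output (one IP per line), $2 = expected load balancer IP
    printf '%s\n' "$1" | grep -qxF "$2"
}

# Live check: dns_points_at "$(dig +short <hostname> A)" "<load-balancer-ip>"
records='192.0.2.10
192.0.2.11'
dns_points_at "$records" 192.0.2.10 && echo "A record matches load balancer"
```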
By systematically checking instance health, service availability, networking rules, and load balancer routing, you can identify the root cause of connectivity failures and restore normal traffic flow.