How to Set Up Load Balancing Using Nginx
Introduction
Nginx is a free, open-source, high-performance server for web serving, reverse proxying, caching, load balancing, media streaming, and more. Its event-based, asynchronous architecture has made it one of the most popular and best-performing web servers available.
Nginx can be configured as a load balancer to distribute incoming traffic and requests among a group of application instances. Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. It is an excellent way to scale a web application and increase its performance and redundancy.
Advantages of Load Balancing
- Enhanced Performance - the workload is spread across a group of servers.
- Increased Scalability - new servers can be added to the cluster as demand grows.
- Fault Tolerance - the application stays available even if one of the servers fails.
Load Balancing Algorithms
A load-balancing algorithm is the logic that a load balancer uses to distribute network traffic between servers. Nginx provides several methods for balancing the load between a group of servers; the three most commonly used are described below.
Round Robin (Default) - Round Robin is the default load-balancing method in Nginx. The load balancer runs through the list of upstream servers in sequence, assigning the next connection request to each one in turn.
Least Connections - With the Least Connections method, the load balancer compares the current number of active connections it has to each server and sends the request to the server with the fewest. You enable it with the least_conn directive.
IP Hash - IP Hash is a predefined variant of the Hash method in which the hash is based on the client's IP address, so requests from the same client are consistently sent to the same server. You enable it with the ip_hash directive.
Understanding the Upstream Block
The upstream block in Nginx is used to define the group of servers running our application; the traffic distribution method can also be specified inside it. Servers are listed inside the upstream block using the server directive and can be identified by IP address, hostname, or UNIX socket path.
Servers identified by IP address:
upstream <upstream_name> {
    server 10.0.0.1;
    server 10.0.0.2;
    ...
}
Servers identified by hostname:
upstream <upstream_name> {
    server server1.example.com;
    server server2.example.com;
    ...
}
Servers identified by a UNIX socket path:
upstream <upstream_name> {
    server unix:/tmp/worker1.sock;
    server unix:/tmp/worker2.sock;
    ...
}
The weight parameter of the server directive can be used to configure weighted load balancing. In the example below, 10.0.0.1 receives roughly three times as many requests as 10.0.0.2.
upstream <upstream_name> {
    server 10.0.0.1 weight=3;
    server 10.0.0.2;
    ...
}
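The server directive also accepts parameters that help with fault tolerance, such as max_fails, fail_timeout, and backup. As a minimal sketch (the values below are illustrative, not part of the original setup), a server can be marked as a backup that only receives traffic when the other servers are unavailable:
upstream <upstream_name> {
    server 10.0.0.1 max_fails=3 fail_timeout=30s;
    server 10.0.0.2 max_fails=3 fail_timeout=30s;
    server 10.0.0.3 backup;
}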
The following directives can be used to define the traffic distribution method in an upstream block:
- least_conn - Least Connections.
- ip_hash - IP Hash.
Example:
upstream <upstream_name> {
    least_conn;
    server 10.0.0.1;
    server 10.0.0.2;
    ...
}
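Similarly, a minimal sketch using IP Hash instead would be:
upstream <upstream_name> {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2;
    ...
}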
If you don't define a method, Nginx defaults to the round-robin algorithm, which distributes traffic among the listed servers sequentially.
Understanding the Server Block
A server block in Nginx defines a virtual server that handles traffic matching the characteristics defined inside it. In the example below, the server block listens for incoming traffic for the domain example.com and proxies it to an upstream named backend.
server {
    server_name example.com;
    location / {
        proxy_pass http://backend;
    }
}
The location directive defines how requests matching a given URI path are handled. The following directives can be used inside a location block to route traffic to an upstream (a short proxy_pass sketch follows this list):
- proxy_pass
- fastcgi_pass
- uwsgi_pass
- scgi_pass
- memcached_pass
- grpc_pass
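For example, a minimal sketch of a location block that proxies to an upstream named backend and also forwards the original host name and client IP (the forwarded headers are a common convention, not part of the original configuration) could look like this:
location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}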
Configuring Nginx as a Load Balancer
Install Nginx on Ubuntu or Debian
# apt update
# apt install nginx
Install Nginx on CentOS
# yum install epel-release
# yum update
# yum install nginx
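On CentOS, the Nginx service may not be started automatically after installation; if needed, you can enable and start it with:
# systemctl enable nginx
# systemctl start nginx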
Setting up Nginx as a Load Balancer
Assuming your web application is running on three servers with IP addresses 10.0.0.1 to 10.0.0.3 and the desired method is round robin, the upstream block should look like the following.
upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
}
Here we have named the upstream backend, which we can reference to pass requests to our cluster. To pass requests to it, our server block should look like the following.
server {
    server_name your_domain;
    location / {
        proxy_pass http://backend;
    }
}
Create a new vhost.
# nano /etc/nginx/sites-available/cluster
Paste the following content and save the file using Ctrl + X, then Y, then Enter.
upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
}

server {
    server_name your_domain;
    location / {
        proxy_pass http://backend;
    }
}
Add a symbolic link to the vhost file in the sites-enabled directory.
# ln -s /etc/nginx/sites-available/cluster /etc/nginx/sites-enabled/cluster
Test the configuration.
# nginx -t
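If the configuration is valid, the output should look similar to the following:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful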
Reload Nginx service.
# systemctl reload nginx
If you have followed these steps successfully, your cluster should now be running and accessible at the following link.
http://your_domain/
Make sure your_domain points to the server on which we configured the load balancer, using an A record.
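You can confirm the DNS record from the command line, for example with dig (substituting your actual domain):
# dig +short your_domain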
For demonstration purposes, we configured three servers, each with an index.html containing an H1 tag with the name of the machine, so that the server handling a request can be identified. As mentioned in the steps above, we used the round-robin method, so requests are sent to the servers listed in our upstream sequentially (as shown below).
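A quick way to watch the rotation from the command line (assuming each server returns its machine name in the response, as described above) is to send a few requests in a row with curl:
# for i in 1 2 3; do curl -s http://your_domain/; done
Each request should be answered by the next server in the upstream list in turn.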
Securing Load Balancer with an SSL Certificate
We will use Let's Encrypt to obtain an SSL Certificate for free. Please make sure you have pointed your domain to the server's IP address.
Install Certbot package.
# apt install certbot python3-certbot-nginx
Obtain and install an SSL certificate for your domain.
# certbot --nginx -d your_domain
You can verify that the SSL certificate is configured properly by opening the following link in your web browser (or from the command line, as shown below).
https://your_domain/
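You can also check it from the command line; curl verifies the certificate chain by default, so a successful response confirms the certificate is trusted:
# curl -I https://your_domain/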
Let's Encrypt certificates are valid for only 90 days, but since we are using Certbot, renewals will be handled for us automatically.
Verify if the auto-renewal works.
# certbot renew --dry-run
If the above command completes without errors, your SSL certificate will be renewed automatically without any issues.
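On Debian and Ubuntu, the Certbot package typically installs a systemd timer that performs this renewal check; assuming that is the case on your system, you can inspect it with:
# systemctl status certbot.timer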
Conclusion
In this article, you learned the basics of load balancing and how to configure a load balancer using Nginx and secure it with an SSL certificate.