Load balancing is the process of distributing load or tasks across a set of resources to make overall processing more efficient. It is commonly used to spread incoming network traffic over a group of servers. Many companies are now moving away from hardware load balancers and using NGINX to deliver their applications.
Configuring NGINX Load Balancing in Ubuntu:
Before configuring load balancing, NGINX must be installed on the system. To do so, follow the steps below:
Install NGINX:
$ sudo apt install nginx
If NGINX is already installed on your system, skip this step.
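If you are not sure whether NGINX is present, you can check the installed version and the state of the service. The commands below assume a systemd-based Ubuntu system:
$ nginx -v
$ systemctl status nginx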
Create a server file:
Create a configuration file at the location /etc/nginx/conf.d/filename.conf.
You can name the configuration file whatever you like.
$ sudo vi /etc/nginx/conf.d/filename.conf
Then add the following configuration to the file. Files under /etc/nginx/conf.d/ are included inside the http context of the main nginx.conf, so the upstream and server blocks go directly in the file without an enclosing http block:
upstream backend {
    server example1.com;
    server example2.com;
    server 192.0.0.1 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
In the above configuration file, the upstream directive defines a named group of servers, and the proxy_pass directive forwards incoming requests to that group over HTTP.
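In many setups it is also useful to forward details of the original request to the backends. A minimal sketch of such a location block is shown below; the proxy_set_header lines are common optional additions, and the header names used are conventional choices rather than requirements:
location / {
    proxy_pass http://backend;
    # Pass the original Host header and client address to the backend servers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}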
Choosing a load balancing method:
To use a particular load balancing method, change the upstream backend block in the configuration as shown below:
1) ROUND ROBIN
Requests are distributed across the servers in a round-robin manner; this is the default method.
upstream backend {
    # For round robin, no load balancing method is specified
    server example1.com;
    server example2.com;
}
2) LEAST CONNECTIONS
Another load balancing method is Least Connections, where a request is sent to the server with the smallest number of active connections, with server weights again taken into consideration (a weighted example follows the basic configuration below).
upstream backend {
    least_conn;
    server example1.com;
    server example2.com;
}
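Least Connections respects server weights when choosing between servers. The sketch below illustrates this; the weight value of 3 is arbitrary and simply makes example1.com preferred when active connection counts are otherwise comparable:
upstream backend {
    least_conn;
    server example1.com weight=3;
    server example2.com;
}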
3) IP HASH
A hash function based on the client's IP address is used to determine which server is selected for the next request, so requests from the same client are always sent to the same server (unless that server is unavailable).
upstream backend {
    ip_hash;
    server example1.com;
    server example2.com;
}
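If one of the servers needs to be taken out of rotation temporarily, it can be marked with the down parameter, which preserves the current hashing of client IP addresses onto the remaining servers. A brief sketch:
upstream backend {
    ip_hash;
    server example1.com;
    server example2.com down;
}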
4) GENERIC HASH
The server to which a request is sent is determined from a user-defined key, which can be a text string, a variable, or a combination of the two. The optional consistent parameter enables ketama consistent hashing, which minimizes the number of keys that are remapped to different servers when servers are added or removed.
upstream backend {
    hash $request_uri consistent;
    server example1.com;
    server example2.com;
}
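Other variables can also be used as the key. The sketch below assumes clients send an X-User-Id header (the header name is only an example) so that all requests from the same user are routed to the same server:
upstream backend {
    hash $http_x_user_id consistent;
    server example1.com;
    server example2.com;
}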
After setting up the configuration, save and exit the file, then restart and enable NGINX as follows:
$ sudo systemctl restart nginx
$ sudo systemctl enable nginx
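You can also check the configuration for syntax errors before (or after) restarting, and confirm that the load balancer responds. The commands below assume NGINX is listening on port 80 of the local machine:
$ sudo nginx -t
$ curl -I http://localhost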
SERVER WEIGHTS:
NGINX distributes requests to the servers according to their weights; by default, each server has a weight of 1. The weight can be set in the configuration file as shown below:
upstream backend {
    server example1.com weight=5;
    server example2.com;
    server 192.0.0.1 backup;
}
In this example, example1.com has weight 5 and example2.com has the default weight (1), so out of every 6 requests, 5 are sent to example1.com and 1 to example2.com. The server with IP address 192.0.0.1 is a backup server and does not receive requests unless the other servers are unavailable.
Server-Slow-Start:
It is a feature that prevents a recently recovered server from being overwhelmed by connections, which could otherwise time out and cause the server to be marked as failed again.
upstream backend {
    server example1.com slow_start=30s;
    server example2.com;
    server 192.0.0.1 backup;
}
Slow-start lets a server gradually recover its weight from 0 to its nominal value after it has recovered or become available again. This is done by adding the slow_start parameter to the server directive.
Here, 30s is the time over which NGINX Plus ramps up the number of connections to the server to its full value. Note that the slow_start parameter is available only in NGINX Plus, not in open source NGINX.
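Slow-start is usually combined with the max_fails and fail_timeout parameters, which control when a server is considered unavailable in the first place. A sketch is shown below; the values (3 failures within 30 seconds) are only illustrative, and the slow_start parameter itself still requires NGINX Plus:
upstream backend {
    # Mark the server as failed after 3 failed attempts within 30 seconds,
    # then ramp connections back up over 30 seconds once it recovers
    server example1.com slow_start=30s max_fails=3 fail_timeout=30s;
    server example2.com;
    server 192.0.0.1 backup;
}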