HTTP Load Balancing using Nginx

Tasnuva Zaman
3 min read · Apr 16, 2019

Today we are going to practice HTTP load balancing using Nginx. You can go through Part I of this article here: https://medium.com/@tasnuva2606/http-load-balancing-using-nginx-869ca9377fb9

Prerequisites:

  1. Nginx
  2. VirtualBox
  3. Ubuntu servers (two as web servers and one as a load balancer)

Load Balancing using Nginx

First Step || Install Nginx

Run the following command from your terminal to install Nginx:

$ sudo apt-get install nginx

Verify whether Nginx is running:

$ sudo netstat -tlpn | grep nginx

Second Step || Change the content of ‘index.html’ on each web server machine

Now change the content of /usr/share/nginx/html/index.html on machine 1, which is being used as a web server, to web01.

Similarly, change the content of /usr/share/nginx/html/index.html on machine 2 to web02.
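For example, you could overwrite the default page on each machine like this (a minimal sketch; depending on how Nginx was packaged, the page may live at /var/www/html/index.html instead):

# on machine 1 (web01)
$ echo "web01" | sudo tee /usr/share/nginx/html/index.html

# on machine 2 (web02)
$ echo "web02" | sudo tee /usr/share/nginx/html/index.html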

Now we’ll be able to tell which back-end web server we are hitting. Test which web server you are hitting and confirm that it responds with the updated content by running:

$ curl http://localhost:80 
# you can use a specific IP instead of localhost, e.g. http://your_ip
# 80 is the default port of Nginx

Third Step || Configure the load balancer

Install Nginx on the machine that will be used as the load balancer:

$ sudo apt-get install nginx

Now edit the default configuration file (sudo nano /etc/nginx/sites-available/default) and replace its contents with:

upstream web_backend {
    server 192.168.1.107;  # IP address of virtual machine web01
    server 192.168.1.108;  # IP address of virtual machine web02
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}
Note: 1. 'X-Forwarded-For' contains the IP address of the client that is making the request.
2. 'proxy_pass' tells Nginx to pass the request on to the given URL, in our case 'http://web_backend'.

Save the file.
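Optionally, you can check the configuration for syntax errors before restarting, using Nginx's built-in config test:

$ sudo nginx -t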

Now restart the nginx server by running:

$ sudo systemctl restart nginx

Step Four || Make a request to the load balancer

Now make a request to the load balancer:

$ curl http://localhost:80

Observation:

The first request is served by web01 and the second by web02, and the rotation continues from there.

Since we didn’t specify any load-balancing method, the default round-robin method is being used.
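To see the rotation more clearly, you can fire several requests in a row; the alternating output below is illustrative, assuming the index.html edits from the second step:

$ for i in 1 2 3 4; do curl http://localhost:80; done
web01
web02
web01
web02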

Step Five || Test the ‘ip_hash’ method

To apply the ip_hash method, just add the ip_hash directive to the upstream block:

upstream web_backend {
    ip_hash;
    server 192.168.1.107;  # IP address of virtual machine web01
    server 192.168.1.108;  # IP address of virtual machine web02
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}

Restart the Nginx server:

$ sudo systemctl restart nginx

Now make a request to the load balancer:

$ curl http://localhost:80

Observation:

Requests from the same client are now routed to the same web server; in our case it’s web01.
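Repeating the loop from the previous step should now show the same backend every time (the output is illustrative and assumes this client's IP hashes to web01):

$ for i in 1 2 3 4; do curl http://localhost:80; done
web01
web01
web01
web01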

Step Six || Weighted load balancing

Defining server weights lets you fine-tune load balancing with Nginx even further. The server with the highest weight in the upstream group is selected most often.

upstream web_backend {
    server 192.168.1.107 weight=4;
    server 192.168.1.108 weight=2;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://web_backend;
    }
}

With this configuration, every 6 new requests are distributed across the backends as follows: 4 requests go to web01 and 2 to web02.

Restart the Nginx server:

$ sudo systemctl restart nginx

Now make a request to the load balancer:

$ curl http://localhost:80

Similarly, you can try the least_conn method on your own.
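As a starting point, least_conn is enabled the same way as ip_hash; a minimal sketch of the upstream block (same IPs as above):

upstream web_backend {
    least_conn;
    server 192.168.1.107;  # web01
    server 192.168.1.108;  # web02
}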

Congratulations!! We have successfully set up load balancing! Cheers!!
