Use NGINX as Load Balancer

Overview

Load balancing across multiple application instances is a common approach for optimizing resource utilization, maximizing throughput, reducing latency, and building fault-tolerant configurations. Many load balancers are on the market, and a few commercial products do the job very well. NGINX and NGINX PLUS are equally capable as load balancers. Many of us have heard of and used NGINX/NGINX PLUS as a reverse proxy, but it can also serve as a load balancer. In this article we will see how to use NGINX/NGINX PLUS as a load balancer, and we will configure the basic routing techniques found in a conventional load balancer.



Configuration Files

NGINX/NGINX PLUS uses a simple text-based configuration file written in a specific format. By default the file is named nginx.conf and is stored in the /etc/nginx directory (in some installations it may live elsewhere, such as /usr/local/nginx/conf or /usr/local/etc/nginx).

Directives

The configuration file is filled with directives and their parameters. Simple (single-line) directives look much like key/value pairs and end with a semicolon. Block (multi-line) directives group configuration inside braces ( { } ). Some simple directives are shown below.

user nobody;
error_log logs/error.log notice;
worker_processes 1;


Feature specific files

It is advisable to keep feature-specific configuration in separate files and pull them into the main configuration file with the "include" directive. For example, suppose we want to configure http and tcp/udp traffic separately. In that case it is better to create two files, one for http and one for stream (the stream context handles tcp/udp). Once the two files are created, we can include both in the main configuration file. Let's say we have stored them in the /etc/nginx/conf.d directory, similar to the example below.

http {

    server {
    }

}

stream {

    server {
    }

}

include conf.d/http;
include conf.d/stream;
  1. The first file (/etc/nginx/conf.d/http) holds the http configuration.
  2. The second file (/etc/nginx/conf.d/stream) holds the tcp/udp configuration.
  3. The last two lines go in the main configuration file, nginx.conf, which includes both method-specific files. Keeping things separate this way makes the configuration much cleaner and easier to manage.

What do we want to achieve?

In a conventional load balancer scenario, the client can reach only the load balancer's IP address and cannot access the web servers behind it directly. That way the load balancer controls how each client request is fulfilled.

Now we will see how to configure NGINX/NGINX PLUS as a load balancer, starting with the installation.



Installing nginx

The first step is to install NGINX on a separate node. In this discussion we will restrict ourselves to installing NGINX on Ubuntu, but NGINX packages are also available for CentOS, Debian, and other operating systems.

sudo apt-get update          # refresh the existing package index first
sudo apt-get install nginx

Once the installation is done, change to the nginx configuration directory.

cd /etc/nginx/

On Ubuntu, virtual host files are stored in the /etc/nginx/sites-available directory. By default you will see one file already present there, named "default" (this is for Ubuntu; a similar file may be present on other operating systems). You may also have noticed a similarly named directory, /etc/nginx/sites-enabled, which holds symbolic links to the enabled sites. To create a symbolic link for your own host, follow the step below, where <vhost> is the name of your virtual host file.

sudo ln -s /etc/nginx/sites-available/<vhost> /etc/nginx/sites-enabled/<vhost>

Once you are done changing the default virtual host file, restart the nginx service and try to access the site, to test whether an HTTP response is received from the server. Ideally you should get the "Welcome to nginx!" message as the default page response.
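That smoke test can also be scripted. The sketch below is a hypothetical helper (not part of NGINX) that fetches a URL and checks for the default welcome text, assuming the server is reachable on plain HTTP:

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url, expected="Welcome to nginx!"):
    # Return True only if the server answers HTTP 200 and the default
    # nginx welcome text appears somewhere in the response body.
    try:
        with urlopen(url, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
            return resp.status == 200 and expected in body
    except (URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False

# is_up("http://localhost/")   # expect True once nginx is restarted
```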

Configure nginx as Load Balancer

Since our nginx site is running fine, we know the default virtual host is configured correctly. Now we will configure nginx as a load balancer, which is the main purpose of this article. Change directory to /etc/nginx/sites-available and edit the virtual host file as below.

upstream www {
    server 10.1.0.101;
    server 10.1.0.102;
    server 10.1.0.103;
}

server {

    listen 80 default_server;
    listen [::]:80 default_server;
    index index.html index.htm default.html default.htm;
    server_name 10.1.0.100;

    location / {

        proxy_pass http://www;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';

    }

}

In the configuration above, the upstream directive is the essential piece: it lists the IP address (and optionally port) of every web server node, and all of those nodes must be listening on the configured address. We assume the NGINX/NGINX PLUS load balancer is installed on a separate node, and that access to all ports other than the configured ones is denied. Once the configuration is done, restart the nginx service and access the load balancer URL. You should see responses like the ones below; the three sections of the image come from the three web server nodes. Note that round-robin is the default load-balancing strategy in NGINX; in a subsequent section we will see how this can be changed.

https://www.atlantic.net/wp-content/uploads/2018/01/balance4.png

So far so good: our NGINX/NGINX PLUS load balancer is running fine. Now let's look at how to configure the load-balancing method.



Different methods of nginx as Load Balancer

Round Robin is the default load-balancing method, but we can change it to another mechanism. Open source NGINX supports the first four methods below (plus Random in recent releases), and NGINX PLUS adds Least Time.

  1. Round Robin: Requests are distributed evenly across the servers, with server weights taken into consideration. Round Robin is the default method in the NGINX load balancer, so no directive is needed to enable it.

upstream backend {

    # no load-balancing method specified, so Round Robin is used
    server node1.server.com;
    server node2.server.com;

}
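The rotation itself is simple enough to sketch in a few lines of Python (hypothetical server names, and ignoring weights for now):

```python
from itertools import cycle

# Hypothetical names mirroring the upstream block above.
backends = ["node1.server.com", "node2.server.com"]

def round_robin(servers):
    # Strict rotation over the listed servers, wrapping around forever.
    return cycle(servers)

picker = round_robin(backends)
first_four = [next(picker) for _ in range(4)]
# first_four alternates: node1, node2, node1, node2
```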

  2. Least Connections: Each request is sent to the server with the fewest active connections, again taking server weights into consideration.

upstream backend {

    # least_conn selects the Least Connections method
    least_conn;
    server node1.server.com;
    server node2.server.com;

}
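The selection rule can be sketched like this (hypothetical connection counts; a minimal model that ignores weights):

```python
def least_conn(active_connections):
    # Pick the server with the fewest active connections; on a tie,
    # the first listed server wins (dicts preserve insertion order).
    return min(active_connections, key=active_connections.get)

# Hypothetical snapshot of the two upstream servers.
active = {"node1.server.com": 12, "node2.server.com": 3}
choice = least_conn(active)   # node2, since it has fewer active connections
```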



  3. IP Hash: Requests from the same client IP are always sent to the same server that served that client's previous requests. This gives you sticky behaviour with open source NGINX (the separate sticky directive, which handles session persistence more flexibly, is available only in NGINX PLUS, not in NGINX OSS). This is very helpful when the web servers keep session state in memory.

upstream backend {

    # ip_hash selects the IP Hash method
    ip_hash;
    server node1.server.com;
    server node2.server.com;

}
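The idea can be sketched in Python (hypothetical names; NGINX's real ip_hash keys on the first three octets of an IPv4 address, so neighbouring clients in the same /24 stick to the same server):

```python
import hashlib

backends = ["node1.server.com", "node2.server.com"]

def ip_hash(client_ip, servers):
    # Hash the /24 prefix of the client address and take it modulo the
    # pool size, so the mapping is deterministic per client network.
    prefix = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(prefix.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

# The same client always maps to the same server across requests,
# and so does its /24 neighbour.
assert ip_hash("10.1.0.55", backends) == ip_hash("10.1.0.200", backends)
```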

  4. Generic Hash: Each request is routed to a server based on the computed hash of a user-defined key. The key can be a text string, a variable, or a combination of both; for example, it might be the source IP address and port, or the request URI as in this example.

upstream backend {

    # hash selects the Generic Hash method; "consistent" enables
    # ketama consistent hashing
    hash $request_uri consistent;
    server node1.server.com;
    server node2.server.com;

}
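A minimal sketch of the same idea, again with hypothetical names. Note that the consistent parameter additionally uses ketama consistent hashing so that resizing the pool remaps only a fraction of keys; this plain modulo version omits that refinement:

```python
import hashlib

backends = ["node1.server.com", "node2.server.com"]

def generic_hash(key, servers):
    # Deterministically map a user-defined key (here the request URI)
    # onto one server in the pool.
    digest = hashlib.md5(key.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

server_for_logo = generic_hash("/images/logo.png", backends)
# Every request for /images/logo.png lands on the same server,
# which makes per-URI caching on the backends effective.
```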

  5. Least Time (NGINX PLUS only): NGINX PLUS selects the server with the lowest average latency and the lowest number of active connections. The latency measure is chosen by a parameter to the least_time directive; the three possibilities are:
    1. header: time to receive the first byte from the server.
    2. last_byte: time to receive the full response from the server.
    3. last_byte inflight: time to receive the full response, taking incomplete requests into account.

upstream backend {

    least_time header;
    server node1.server.com;
    server node2.server.com;

}
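Since least_time is proprietary to NGINX PLUS, the following is only a rough approximation of the idea, with hypothetical measurements: rank servers by average response time, breaking ties with active connection count, and pick the lowest:

```python
def least_time(stats):
    # Order servers by (average response time, active connections)
    # and pick the smallest -- a simplified stand-in for the real
    # least_time selection logic.
    return min(stats, key=lambda s: (stats[s]["avg_ms"], stats[s]["active"]))

# Hypothetical per-server measurements.
stats = {
    "node1.server.com": {"avg_ms": 40, "active": 2},
    "node2.server.com": {"avg_ms": 15, "active": 9},
}
fastest = least_time(stats)   # node2: the lower average header time wins here
```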

  6. Random: Each request is passed to a randomly selected server. Although sometimes listed as NGINX PLUS only, the random directive is also available in recent open source NGINX releases.

Server Weights

By default NGINX/NGINX PLUS distributes requests among servers according to their weights using the Round Robin method. The weight parameter on the server directive sets a server's weight; the default value is 1, and a higher value means a proportionally larger share of requests. In the example below, node1 receives five of every six requests.

upstream backend {

    server node1.server.com weight=5;
    server node2.server.com;

}
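NGINX uses a "smooth" weighted round-robin so the heavier server's turns are spread out rather than sent back-to-back. A sketch of that algorithm, with the hypothetical server names from above:

```python
def smooth_weighted_rr(weights, n):
    # Smooth weighted round-robin: every turn each server gains its
    # weight, the current leader is picked, and the leader then loses
    # the total of all weights.
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s in weights:
            current[s] += weights[s]
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

sequence = smooth_weighted_rr({"node1.server.com": 5, "node2.server.com": 1}, 6)
# Over 6 requests, node1 is chosen 5 times and node2 once, interleaved.
```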

Conclusion

If you are planning to enhance the performance of your web application, you should seriously consider implementing a load balancer. And if you want a simple load balancer implementation, NGINX/NGINX PLUS is a perfect choice. Refer to the NGINX web site for further documentation.