As mentioned in the article on high-concurrency load balancing, companies generally have two processing strategies when tackling high-concurrency problems: hardware and software. On the hardware side, load balancers are added to distribute the large volume of requests; on the software side, solutions can be added at the high-concurrency bottlenecks: the database and the web server. The most commonly used load-distribution solution in front of the web server layer is nginx load balancing.
1. The role of load balancing
1. Forwarding function
According to a configured algorithm (weight, round-robin), client requests are forwarded to different application servers, reducing the pressure on any single server and increasing the system's concurrency.
2. Fault removal
Heartbeat detection is used to judge whether each application server is currently working normally. If a server goes down, requests are automatically sent to the other application servers.
3. Rejoining after recovery
If a failed application server is detected to have resumed work, it is automatically added back to the pool of servers handling user requests.
2. Nginx implements load balancing
As before, two Tomcat instances are used to simulate two application servers, listening on ports 8080 and 8081 respectively.
1. Nginx load distribution strategies
Distribution algorithms currently supported by nginx's upstream module (a combined configuration sketch follows this list):
1) Round-robin: requests are handled in turn, 1:1 (default)
Each request is assigned to a different application server one by one in chronological order. If an application server goes down, it is automatically removed, and the remaining servers continue to be polled.
2) weight: the capable do more
By configuring weights, you can control the polling probability. The weight is proportional to the share of traffic a server receives; this is used when the application servers have uneven performance.
3) ip_hash
Each request is allocated according to the hash of the client IP, so each visitor consistently reaches the same application server. This can solve the session-sharing problem.
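As a minimal sketch (the upstream names are illustrative, reusing the two Tomcat ports above), the three strategies can be written like this:

upstream tomcats_default {    # round-robin is the default; no extra directive needed
    server 192.168.72.49:8080;
    server 192.168.72.49:8081;
}
upstream tomcats_weighted {   # weight: 8080 receives roughly 3 of every 4 requests
    server 192.168.72.49:8080 weight=3;
    server 192.168.72.49:8081;
}
upstream tomcats_iphash {     # ip_hash: a given client IP always maps to the same server
    ip_hash;
    server 192.168.72.49:8080;
    server 192.168.72.49:8081;
}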
2. Configure Nginx load balancing and distribution strategy
This is achieved by adding the relevant parameters after each application server address listed in the upstream block, such as:
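upstream tomcatserver1 {
    server 192.168.72.49:8080 weight=3;
    server 192.168.72.49:8081;
}
server {
    listen 80;
    server_name 8080.max.com;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        proxy_pass http://tomcatserver1;
        index index.html index.htm;
    }
}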
With the above configuration, when 8080.max.com is accessed, all requests first pass through the nginx reverse proxy server because proxy_pass is configured. When nginx forwards a request to the destination host, it reads the upstream named tomcatserver1 and applies its distribution strategy: since tomcat1 is configured with weight=3, nginx sends most requests to tomcat1 on the .49 server (port 8080) and a smaller share to tomcat2, achieving conditional load balancing. The condition, of course, is that the hardware of servers 1 and 2 is capable of handling the requests.
3. Other nginx configuration
upstream myServer {
    server 192.168.72.49:9090 down;
    server 192.168.72.49:8080 weight=2;
    server 192.168.72.49:6060;
    server 192.168.72.49:7070 backup;
}
1) down
Indicates that the preceding server temporarily does not participate in the load.
2) weight
Defaults to 1. The larger the weight, the larger the share of the load.
3) max_fails
The number of allowed request failures; defaults to 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.
4) fail_timeout
The pause time after max_fails failures.
5) backup
When all the other non-backup machines are down or busy, requests go to the backup machine. This machine therefore carries the lightest load.
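A hedged sketch combining these parameters (the timeout and failure values here are illustrative, not from the original article):

upstream myServer {
    server 192.168.72.49:9090 down;                            # temporarily out of rotation
    server 192.168.72.49:8080 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.72.49:6060 max_fails=2 fail_timeout=30s;    # after 2 failures, rest 30s
    server 192.168.72.49:7070 backup;                          # used only when the others fail
}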
3. High availability using Nginx
In addition to making the website itself highly available (providing n servers that publish the same service and adding a load balancer to distribute requests, so that under high concurrency each server handles a comfortable share), the load balancer itself also needs to be highly available. Otherwise, if the load balancer goes down, the application servers behind it can no longer be reached and the whole system stops working.
One solution for achieving high availability is to add redundancy: add n nginx servers to avoid the single point of failure described above. The specific plan is sketched below: keepalived + nginx for load balancing and high availability.
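As a rough sketch of the keepalived side (not from the original article; the interface name eth0 and the virtual IP 192.168.72.100 are assumptions), two nginx machines share a virtual IP, and the backup takes over when the master's VRRP heartbeats stop:

vrrp_instance VI_1 {
    state MASTER          # the standby node would use state BACKUP and a lower priority
    interface eth0        # assumption: the NIC carrying the virtual IP
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.72.100    # hypothetical virtual IP that clients actually connect to
    }
}

Clients point at the virtual IP rather than at either nginx machine, so a failover is transparent to them.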
4. Summary
To summarize, whether the solution is software or hardware, load balancing mainly distributes a large number of concurrent requests to different servers according to some rule, reducing the instantaneous pressure on any single server and improving the website's ability to withstand concurrency. The reason nginx is so widely used for load balancing is, the author believes, its flexible configuration: a single nginx.conf file solves most problems. Whether it is nginx virtual servers, the nginx reverse proxy, or the nginx load balancing described in this article, almost everything is done in this one configuration file; the server side only has to set up nginx and run it. Moreover, nginx is lightweight and achieves good results without occupying too many server resources.
Expansion: automatic switchover when a server goes down during load balancing
upstream tomcatserver1 {
    server 192.168.72.49:8080 weight=3;   # preferred server; receives roughly 3/4 of requests
    server 192.168.72.49:8081;            # takes over automatically if 8080 fails
}
server {
    listen 80;
    server_name 8080.max.com;
    #charset koi8-r;
    #access_log logs/host.access.log main;
    location / {
        proxy_pass http://tomcatserver1;  # all requests are routed through the upstream group
        index index.html index.htm;
    }
}
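With this configuration, if the Tomcat instance on port 8080 goes down, nginx's passive health checking marks it as failed after max_fails errors (1 by default) and forwards subsequent requests to 8081; once 8080 recovers, it rejoins the rotation after the fail_timeout window (10 seconds by default) expires.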