Building an LVS + Keepalived High-Availability Web Service Cluster

This setup involves several technologies. For detailed explanations of the related topics, refer to the following posts:

CentOS 7: detailed explanation of load balancing configuration based on DR (direct routing) mode;


CentOS 7: detailed explanation of load balancing configuration based on NAT (address translation) mode;

Detailed explanation of LVS load balancing cluster;

Combined with the above posts, you can build a Keepalived + LVS highly available web cluster in either DR or NAT mode. This post uses Keepalived + DR to build a highly available web service cluster.

The configuration in this post follows a typical production layout and can be reproduced directly. The environment is as follows:

[Topology diagram: LVS + Keepalived high-availability web service cluster]
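For reference, the addresses used throughout this post (taken from the configuration and command output shown below) are approximately:

Master scheduler (LVS1):  200.0.0.1   (holds the cluster VIP while healthy)
Backup scheduler (LVS2):  200.0.0.2
Web1 node:                200.0.0.3   (plus an address in 192.168.1.0/24 toward the storage network)
Web2 node:                200.0.0.4   (plus an address in 192.168.1.0/24 toward the storage network)
Cluster VIP:              200.0.0.100
NFS storage server:       192.168.1.5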
1. Environment analysis:

1. The two schedulers and the two web nodes use the same network segment and can communicate directly with the external network. For the security of the shared storage, the web nodes and the storage server are usually placed on an internal network, so each web node needs two or more network interfaces.

2. My resources here are limited, so for convenience there are only two schedulers and two web nodes. This is enough when the volume of web access is small. If the volume of access requests is large, plan for at least three schedulers and three web nodes: with only two web nodes under heavy traffic, once one of them goes down, the remaining node will be overwhelmed by the surge of requests and fail as well.

3. Prepare a system image to install related services.

4. Configure the firewall policy and all IP addresses other than the VIP yourself (I simply turned the firewall off here).

5. Keepalived automatically loads the ip_vs module, so there is no need to load it manually.
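If you want to double-check this, once keepalived is running you can list the module (this quick check is my addition, not part of the original steps):

[root@LVS1 ~]# lsmod | grep ip_vs          #The ip_vs module should appear after keepalived has started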

2. Final effect:

1. The client accesses the cluster VIP multiple times and always obtains the same web page.

2. After the master scheduler goes down, the cluster's VIP address automatically drifts to the slave (backup) scheduler, and all scheduling tasks are then handled by the slave. When the master scheduler resumes operation, the cluster VIP automatically moves back to it, the master continues its work, and the slave returns to the backup state.

3. When a web node goes down, the keepalived health check detects the failure and automatically removes the node from the web node pool. Once the node resumes operation, it is automatically added back to the pool.

3. Start building:

1. Configure the master scheduler (LVS1):

[root@LVS1 ~]# yum -y install keepalived ipvsadm     #Install the required tools
[root@LVS1 ~]# vim /etc/sysctl.conf                  #Adjust the kernel parameters; add the following three lines
.....................
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# sysctl -p                             #Apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# cd /etc/keepalived/
[root@LVS1 keepalived]# cp keepalived.conf keepalived.conf.bak   #Back up the configuration file
[root@LVS1 keepalived]# vim keepalived.conf                      #Edit the keepalived configuration file

#The lines with comments below need to be changed; lines without comments can keep their default values

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc                 #Recipient addresses (leave the defaults if you do not need mail notification)
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc   #Sender name/address (may also be left unchanged)
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL1                   #Name of this server; it must be unique among all schedulers in the cluster
}

vrrp_instance VI_1 {
    state MASTER                  #Set this as the master scheduler
    interface ens33               #Physical network interface that carries the VIP address; change it to match your environment
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        200.0.0.100               #The floating IP address (VIP); more than one may be listed
    }
}

virtual_server 200.0.0.100 80 {          #Change to the VIP address and the required port
    delay_loop 6
    lb_algo rr                           #Load-scheduling algorithm; rr means round robin. Change as needed.
    lb_kind DR                           #Set the working mode to DR (direct routing)
    ! persistence_timeout 50             #To see the load-balancing effect during testing, add "!" in front of this
                                         #line to comment out the persistent-connection setting
    protocol TCP

    real_server 200.0.0.4 80 {           #Configuration of one web node. To add more nodes, copy the whole
                                         #real_server { ... } block and change only the node IP address.
        weight 1
        TCP_CHECK {
            connect_port 80              #Port used for the health check
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 200.0.0.3 80 {           #Configuration of the other web node. When copying blocks for more nodes,
                                         #paste them above this one so that the closing braces do not get lost.
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

#There are many more sample configuration blocks below this point (about 98 lines in my file). Delete all of them; if they are left in place, restarting the service may report errors.
[root@LVS1 ~]# systemctl restart keepalived     #Restart the service
[root@LVS1 ~]# systemctl enable keepalived      #Enable it to start on boot

At this point, the master scheduler is configured!
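Before moving on, you can optionally check that keepalived came up cleanly and bound the VIP. These checks are my addition here; the ip and ipvsadm commands appear again in the viewing-commands section at the end of this post:

[root@LVS1 ~]# systemctl status keepalived    #The service should be active (running)
[root@LVS1 ~]# ip a show dev ens33            #200.0.0.100/32 should appear on ens33
[root@LVS1 ~]# ipvsadm -ln                    #The virtual server 200.0.0.100:80 and both real servers should be listed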

2. Configure the slave scheduler (LVS2):

[root@LVS2 ~]# yum -y install ipvsadm keepalived     #Install the required tools
[root@LVS2 ~]# scp root@200.0.0.1:/etc/sysctl.conf /etc/
#Copy the /proc parameter file from the master scheduler.
root@200.0.0.1's password:                           #Enter the root password of the master scheduler
sysctl.conf                              100%  566   205.8KB/s   00:00
[root@LVS2 ~]# sysctl -p                             #Apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS2 ~]# scp root@200.0.0.1:/etc/keepalived/keepalived.conf /etc/keepalived/
#Copy the keepalived configuration file from the master scheduler and make small changes to it.
root@200.0.0.1's password:                           #Enter the root password of the master scheduler
keepalived.conf                          100% 1053     2.5MB/s   00:00
[root@LVS2 ~]# vim /etc/keepalived/keepalived.conf   #Edit the copied configuration file
#If both servers use the ens33 network card, only the following three items need to change (everything else stays the same):
router_id LVS_DEVEL2     #Change router_id to a different value; it must be unique.
state BACKUP             #Change the state to BACKUP (note the capitalization).
priority 90              #Must be lower than the master's priority and must not conflict with other backup schedulers.
[root@LVS2 ~]# systemctl restart keepalived          #Start the service
[root@LVS2 ~]# systemctl enable keepalived           #Enable it to start on boot

At this point, the slave scheduler is configured as well. If you need more backup schedulers, configure them the same way as this slave (backup) scheduler, giving each a unique router_id and a distinct priority.
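Optionally (my addition), you can confirm that LVS2 is not holding the VIP while the master is healthy:

[root@LVS2 ~]# ip a show dev ens33    #200.0.0.100 should NOT be listed here while LVS1 is up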

3. Configure the web1 node:

[root@Web1 ~]# cd /etc/sysconfig/network-scripts/
[root@Web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0              #Copy the loopback interface configuration file
[root@Web1 network-scripts]# vim ifcfg-lo:0                      #Edit the loopback alias so it carries the cluster VIP
DEVICE=lo:0                   #Change the interface name
IPADDR=200.0.0.100            #Configure the cluster VIP
NETMASK=255.255.255.255       #The subnet mask must be all 255s (a /32 host mask)
ONBOOT=yes
#Keep only the above four configuration items and delete the rest.
[root@Web1 network-scripts]# ifup lo:0                           #Bring up the loopback alias
[root@Web1 ~]# ifconfig lo:0                                     #Check that the VIP is configured correctly
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@Web1 ~]# route add -host 200.0.0.100 dev lo:0              #Add a local route for the VIP
[root@Web1 ~]# vim /etc/rc.local                                 #Add this route record so it is restored at boot
................................
/sbin/route add -host 200.0.0.100 dev lo:0
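Note: on CentOS 7, /etc/rc.local is not executable by default, so the route line above will not actually run at boot unless the script is made executable. This extra step is my addition based on standard CentOS 7 behavior, and the same applies to the web2 node later:

[root@Web1 ~]# chmod +x /etc/rc.d/rc.local    #Make rc.local executable so the route is restored at boot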
[root@Web1 ~]# vim /etc/sysctl.conf           #Adjust the /proc response parameters; add the following six lines
...................
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web1 ~]# sysctl -p                      #Apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web1 ~]# yum -y install httpd                              #Install the http service
[root@Web1 ~]# echo 111111111111 > /var/www/html/index.html      #Prepare a test web page
[root@Web1 ~]# systemctl start httpd                             #Start the http service
[root@Web1 ~]# systemctl enable httpd                            #Enable it to start on boot

At this point, the first web node is fully configured.
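As an optional sanity check (my addition; it assumes curl is available on the node), you can confirm locally that httpd is serving the test page before moving on:

[root@Web1 ~]# curl http://127.0.0.1    #Should print 111111111111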

4. Configure the web2 node:

[root@Web2 ~]# scp root@200.0.0.3:/etc/sysconfig/network-scripts/ifcfg-lo:0 /etc/sysconfig/network-scripts/
#Copy the lo:0 configuration file from the web1 node
The authenticity of host '200.0.0.3 (200.0.0.3)' can't be established.
ECDSA key fingerprint is b8:ca:d6:89:a2:42:90:97:02:0a:54:c1:4c:1e:c2:77.
Are you sure you want to continue connecting (yes/no)? yes      #Enter yes
Warning: Permanently added '200.0.0.3' (ECDSA) to the list of known hosts.
root@200.0.0.3's password:                    #Enter the root password of the web1 node
ifcfg-lo:0                               100%   66     0.1KB/s   00:00
[root@Web2 ~]# ifup lo:0                      #Bring up lo:0
[root@Web2 ~]# ifconfig lo:0                  #Confirm that the VIP is correct
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@Web2 ~]# scp root@200.0.0.3:/etc/sysctl.conf /etc/         #Copy the kernel parameter file
root@200.0.0.3's password:                    #Enter the root password of the web1 node
sysctl.conf                              100%  659     0.6KB/s   00:00
[root@Web2 ~]# sysctl -p                      #Apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web2 ~]# route add -host 200.0.0.100 dev lo:0              #Add a local route for the VIP
[root@Web2 ~]# vim /etc/rc.local                                 #Add this route record so it is restored at boot
................................
/sbin/route add -host 200.0.0.100 dev lo:0
[root@Web2 ~]# yum -y install httpd                              #Install the http service
[root@Web2 ~]# echo 22222222222 > /var/www/html/index.html       #Prepare a test web page
[root@Web2 ~]# systemctl start httpd                             #Start the http service
[root@Web2 ~]# systemctl enable httpd                            #Enable it to start on boot

At this point, web2 is also configured. Now use a client to test whether the cluster is working:

5. Client access test:

[Screenshots: the client browses to http://200.0.0.100 and receives the different test pages from web1 and web2 in turn]

If you keep getting the same page, first check the configuration for errors, then try opening the page in several browser windows or refreshing again after a while; a persistent-connection timeout may still be in effect, so there can be a delay before requests are spread across the nodes.

For this test, different web page files were prepared on each web node so that the load-balancing effect would be visible. Now that the effect is confirmed, the next step is to build a shared storage server: all web nodes will read their web page files from it, so that every node serves the same content to clients.
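If you prefer testing from a Linux client rather than a browser, a small loop like the following makes the round-robin effect easy to see (this is my addition and assumes curl is installed on the client):

[root@client ~]# for i in $(seq 1 6); do curl -s http://200.0.0.100; done
#The output should alternate between 111111111111 and 22222222222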

The following builds a simple shared storage server. If you need a highly available storage server, follow my blog (warrent); I will cover building one in a future post.

6. Configure NFS shared storage server:

[root@NFS /]# yum -y install nfs-utils rpcbind       #Install the required packages
[root@NFS /]# systemctl enable nfs                   #Enable the service to start on boot
[root@NFS /]# systemctl enable rpcbind               #Enable the service to start on boot
[root@NFS /]# mkdir -p /opt/wwwroot                  #Prepare the shared directory
[root@NFS /]# echo www.baidu.com > /opt/wwwroot/index.html       #Create a web page file
[root@NFS /]# vim /etc/exports                       #Configure the shared directory (the file is empty by default)
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)  #Write this line
[root@NFS /]# systemctl restart rpcbind              #Restart the related services; note the start order (rpcbind before nfs)
[root@NFS /]# systemctl restart nfs
[root@NFS /]# showmount -e                           #View the locally shared directories
Export list for NFS:
/opt/wwwroot 192.168.1.0/24

7. Mount the shared storage server on all web nodes:

1) Web1 node server configuration:

[root@Web1 ~]# showmount -e 192.168.1.5              #View the shared directory of the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@Web1 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/     #Mount it on the web root directory
[root@Web1 ~]# df -hT /var/www/html/                 #Confirm that the share is mounted successfully
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G  15% /var/www/html
[root@Web1 ~]# vim /etc/fstab                        #Configure automatic mounting at boot
.........................
192.168.1.5:/opt/wwwroot /var/www/html nfs defaults,_netdev 0 0
#Write the above line

2) Web2 node server configuration:

[root@Web2 ~]# showmount -e 192.168.1.5              #View the shared directory of the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24
[root@Web2 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/     #Mount it on the web root directory
[root@Web2 ~]# df -hT /var/www/html/                 #Confirm that the share is mounted successfully
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G  15% /var/www/html
[root@Web2 ~]# vim /etc/fstab                        #Configure automatic mounting at boot
.........................
192.168.1.5:/opt/wwwroot /var/www/html nfs defaults,_netdev 0 0
#Write the above line
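To verify the fstab entry without rebooting (my addition, applicable to both web nodes), you can unmount the share and then remount everything listed in /etc/fstab:

[root@Web2 ~]# umount /var/www/html
[root@Web2 ~]# mount -a                       #Remounts from /etc/fstab; df -hT should show the NFS share again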

8. Client access test again:

This time, no matter how many times the client refreshes, it sees the same web page, as shown below:

[Screenshot: the client now receives the shared www.baidu.com test page no matter how many times it refreshes]

9. Change the keepalived configuration on all master and slave schedulers to suit the production environment:

1) Master scheduler:

[root@LVS1 ~]# vim /etc/keepalived/keepalived.conf
.................                    #Part of the content omitted
virtual_server 200.0.0.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50           #Delete the exclamation mark "!" that was added in front of this line earlier.
    protocol TCP
.................                    #Part of the content omitted
}

2) Slave scheduler:

[root@LVS2 ~]# vim /etc/keepalived/keepalived.conf
.................                    #Part of the content omitted
virtual_server 200.0.0.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50           #Delete the exclamation mark "!" that was added in front of this line earlier.
    protocol TCP
.................                    #Part of the content omitted
}
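After removing the "!" on both schedulers, restart keepalived so that the persistent-connection setting takes effect (this restart is my addition; the original steps do not show it explicitly):

[root@LVS1 ~]# systemctl restart keepalived
[root@LVS2 ~]# systemctl restart keepalived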

10. Some useful viewing commands:

1) To see which scheduler currently holds the VIP, query the physical interface that carries the VIP address; the VIP will be listed there (while the master is up, the VIP cannot be found on the backup scheduler):

[root@LVS1 ~]# ip a show dev ens33            #Query the physical network card ens33 that carries the VIP address
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.1/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33    #VIP address.
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

2) Check which web nodes are there:

[root@LVS1 ~]# ipvsadm -ln                    #Query the web node pool and the VIP
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0
  -> 200.0.0.4:80                 Route   1      0          0
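If you want to watch connections being distributed across the nodes in real time, one handy command (my addition) is:

[root@LVS1 ~]# watch -n 1 ipvsadm -Ln --stats     #Refreshes the connection and packet counters every second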

3) Simulate the failure of the web2 node and of the master scheduler, then query the VIP and the web nodes again on the backup scheduler:
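One simple way to simulate the two failures (my addition; the original post does not show the exact commands used) is to stop keepalived on the master and stop httpd on web2:

[root@LVS1 ~]# systemctl stop keepalived      #The master scheduler "goes down"
[root@Web2 ~]# systemctl stop httpd           #web2 stops answering the TCP health check

Then check the backup scheduler: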

[root@LVS2 ~]# ip a show dev ens33            #The VIP address has been transferred to the backup scheduler
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... state UP group default qlen 1000
    link/ether 00:0c:29:9a:09:98 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.2/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33    #VIP address.
       valid_lft forever preferred_lft forever
    inet6 fe80::3050:1a9b:5956:5297/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@LVS2 ~]# ipvsadm -ln                    #After the web2 node goes down, it is no longer listed.
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0

#When the main scheduler or Web2 node returns to normal, it will be automatically added to the cluster and run normally.

4) View the log messages when the scheduler fails over:

[root@LVS2 ~]# tail -30 /var/log/messages     #View the log messages generated during the failover
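If /var/log/messages is noisy, you can filter for just the Keepalived entries instead (my addition):

[root@LVS2 ~]# grep -i keepalived /var/log/messages | tail -30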
