Heartbeat v1 + ldirectord: Implementing a Highly Available LVS Cluster

Foreword

High Availability Cluster, or HA Cluster for short, refers to server cluster technology aimed at reducing service interruption time. As noted earlier, an LVS cluster by itself cannot provide high availability. For example, the Director Server cannot detect the health of the Real Servers: once one or all of the Real Servers go down, the Director Server keeps forwarding requests to them, making the site inaccessible. Likewise, if the Director Server itself goes down, the site cannot work at all. This article explains how to make an LVS cluster highly available based on heartbeat v1.

Heartbeat

Introduction

Heartbeat is one of the components of the Linux-HA project. Many versions have been released since 1999; it is the most successful example of the open source Linux-HA project to date and is widely used in the industry.

Working principle

Heartbeat's core consists of two parts: heartbeat monitoring and resource takeover. Heartbeat monitoring can be carried out over network links and serial ports, and redundant links are supported. The nodes send messages to each other to report their current status; if no message is received from the peer within the specified time, the peer is considered dead, and the resource takeover module is activated to take over the resources or services that were running on the peer host.

Achieving high availability for the LVS cluster based on Heartbeat v1

Experimental topology

Experimental environment

node1: node1.scholar.com 172.16.10.123 CentOS 6.6

node2: node2.scholar.com 172.16.10.124 CentOS 6.6

Real Server 1: 192.168.1.10 (VIP) 172.16.10.125 (RIP) CentOS 6.6

Real Server 2: 192.168.1.10 (VIP) 172.16.10.126 (RIP) CentOS 6.6

Notes

Prerequisites for configuring a high-availability cluster (taking a two-node Heartbeat cluster as the example):

①Time must be synchronized

Use an NTP server

②Nodes must communicate with each other by name

Resolve node names

/etc/hosts

The host name used in the cluster must be the host name reported by `uname -n`

③ Ping node (quorum device)

Needed only when the cluster has an even number of nodes

④ SSH key-based authentication for communication between nodes

Configuration process

Time synchronization


Please ensure that the time on both nodes is synchronized; this is not described in detail here.
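For example, a minimal way to synchronize both nodes against an NTP server (the server address below is only a placeholder, substitute your own):

# Run on both node1 and node2; 172.16.0.1 is a placeholder NTP server address
ntpdate 172.16.0.1
# Optionally keep the clocks in sync with a cron job
echo "*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null" >> /var/spool/cron/root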

Host name resolution configuration

[root@node1 ~]# vim /etc/hosts
172.16.10.123   node1.scholar.com node1
172.16.10.124   node2.scholar.com node2
[root@node1 ~]# vim /etc/sysconfig/network
HOSTNAME=node1.scholar.com
[root@node1 ~]# uname -n
node1.scholar.com
# Both nodes need to be configured as above

SSH key configuration

[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
[root@node2 ~]# ssh-keygen -t rsa -P ''
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1
[root@node1 ~]# date; ssh node2 'date'    # Test
Sun Jun 7 17:46:03 CST 2015
Sun Jun 7 17:46:03 CST 2015

Install the required software packages

# Resolve dependencies
[root@node1 ~]# yum install perl-TimeDate net-snmp-libs libnet PyXML -y    # Requires the epel repository
[root@node1 ~]# cd heartbeat2
[root@node1 heartbeat2]# ls
heartbeat-2.1.4-12.el6.x86_64.rpm        heartbeat-pils-2.1.4-12.el6.x86_64.rpm
heartbeat-gui-2.1.4-12.el6.x86_64.rpm    heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
[root@node1 heartbeat2]# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
[root@node1 heartbeat2]# yum install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm -y
# Both nodes perform the above operations

Configure heartbeat

Prepare the configuration file

(screenshot: preparing the heartbeat configuration files)
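A typical way to prepare the files is to copy the samples shipped with the heartbeat package into /etc/ha.d/ (the documentation path below is assumed from the heartbeat-2.1.4 package, not taken from the screenshot):

[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/{ha.cf,authkeys,haresources} /etc/ha.d/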

Configure the authentication key

[root@node1 ~]# openssl rand -hex 8
4d8fd6cb49d2047b
[root@node1 ~]# vim /etc/ha.d/authkeys
auth 2
2 sha1 4d8fd6cb49d2047b
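One detail worth noting: heartbeat refuses to start if /etc/ha.d/authkeys is readable by anyone other than root, so restrict its permissions (this command does not appear above and is assumed):

[root@node1 ~]# chmod 600 /etc/ha.d/authkeys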

Configure the main configuration file

[root@node1 ~]# grep -v "^#" /etc/ha.d/ha.cf | grep -v "^$"
logfile /var/log/ha-log        # Log file location
keepalive 2                    # Heartbeat interval
deadtime 30                    # Time after which the standby node declares the master dead and takes over its resources
warntime 10                    # Heartbeat delay warning threshold
initdead 120                   # Allow for network startup delay
udpport 694                    # UDP port used for the broadcast heartbeat
bcast eth0                     # Send the heartbeat as a broadcast on eth0
auto_failback on               # Whether resources switch back to the master node automatically once it recovers
node node1.scholar.com         # Master node
node node2.scholar.com         # Standby node
ping 172.16.0.1                # Quorum (ping) device

Configure the resource file (haresources)

[root@node1 ~]# vim /etc/ha.d/haresources
node1.scholar.com 192.168.1.10/32/eth0/192.168.1.10 ldirectord::/etc/ha.d/ldirectord.cf
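For reference, this entry follows the standard haresources conventions: the first token is the node that preferentially owns the resources, each following token is a resource script with optional `::`-separated arguments, and a bare IP address is shorthand for the IPaddr resource. Roughly:

# <preferred-node>  <resource>[::arg[::arg...]]  ...
# 192.168.1.10/32/eth0/192.168.1.10   -> IPaddr::192.168.1.10/32/eth0/192.168.1.10
#                                        (VIP / prefix length / interface / broadcast address)
# ldirectord::/etc/ha.d/ldirectord.cf -> start ldirectord with this configuration file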

Copy the configuration files to the standby node

(screenshot: copying the configuration files to the standby node)
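A plausible way to do this, relying on the SSH keys configured earlier (the exact commands in the screenshot are not preserved):

[root@node1 ~]# scp -p /etc/ha.d/{ha.cf,authkeys,haresources} node2:/etc/ha.d/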

Configure ldirectord

Prepare and edit the configuration file

[root@node1 ~]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/
[root@node1 ~]# grep -v ^# /etc/ha.d/ldirectord.cf
checktimeout=3                    # Health-check timeout
checkinterval=1                   # Health-check interval
autoreload=yes                    # Reload the configuration automatically when the file changes, no service restart needed
quiescent=yes                     # Real servers are removed from the lvs table when they go down and re-added automatically when they recover
virtual=192.168.1.10:80           # VIP
        real=172.16.10.125:80 gate    # real server
        real=172.16.10.126:80 gate    # real server
        fallback=127.0.0.1:80 gate    # If all RS nodes are down, serve from the local loopback address
        service=http                  # Protocol to check
        request=".health.html"        # File requested by the health check
        receive="ok"                  # Expected response content, used to decide whether the RS is alive
        scheduler=rr                  # Scheduling algorithm
        #persistent=600
        #netmask=255.255.255.255

Copy the configuration file to the standby node, and disable ldirectord from starting automatically at boot on both nodes

(screenshot: copying ldirectord.cf to the standby node and disabling the ldirectord service at boot)
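A sketch of what this step likely looks like (commands assumed, not taken from the screenshot):

[root@node1 ~]# scp -p /etc/ha.d/ldirectord.cf node2:/etc/ha.d/
[root@node1 ~]# chkconfig ldirectord off
[root@node1 ~]# ssh node2 'chkconfig ldirectord off'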

Prepare the fallback file

(screenshot: preparing the fallback page)
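Since fallback points at 127.0.0.1:80, each director node needs a local httpd serving a sorry page; a minimal sketch (the page wording is an assumption):

[root@node1 ~]# echo "<h1>Sorry, the site is under maintenance</h1>" > /var/www/html/index.html
[root@node1 ~]# service httpd start
# Do the same on node2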

RS settings

#The two RSs are configured as follows

Configure kernel parameters

(screenshot: configuring the kernel parameters on the real servers)
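The usual LVS-DR real-server setup is to suppress ARP responses for the VIP and then bind the VIP to the loopback interface; roughly (commands assumed from the standard procedure, not from the screenshot):

# Run on both real servers
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:0 192.168.1.10 netmask 255.255.255.255 broadcast 192.168.1.10 up
route add -host 192.168.1.10 dev lo:0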

Prepare the health-check file and site files

(screenshot: preparing the health-check file and site files)
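On each RS the health-check file must contain exactly the string ldirectord expects in its receive directive; a minimal sketch (the index page wording is an assumption):

# On Real Server 1 (analogous on Real Server 2)
echo "ok" > /var/www/html/.health.html
echo "<h1>RS1 172.16.10.125</h1>" > /var/www/html/index.html
service httpd start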

Test page

(screenshot: test page)

Start heartbeat

(screenshot: starting heartbeat)
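Heartbeat is normally started on the primary node first and then on the standby, for example:

[root@node1 ~]# service heartbeat start
[root@node1 ~]# ssh node2 'service heartbeat start'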

Check whether the resources have taken effect

(screenshot: checking the resources on node1)
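To confirm that the resources are active on node1, one would typically check that the VIP has been added to eth0 and that ldirectord has created the ipvs rules, for example:

[root@node1 ~]# ip addr show eth0 | grep 192.168.1.10
[root@node1 ~]# ipvsadm -L -n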

High availability test

(screenshot: high availability test)

Refresh the page

(screenshot: the page after refreshing)

Simulate the downtime of RS2

#service httpd stop

(screenshot: access after RS2 goes down)

No matter how many times the page is refreshed, only the RS1 page is returned

Next, we stop RS1 as well

(screenshot: access after RS1 is also stopped)

Access switches to the fallback page

Now we stop node1 and see whether node2 can take over the resources

[root@node1 ~]# service heartbeat stop

(screenshot: node2 taking over the resources)

The resources are taken over successfully, and access is not affected in any way. Of course, if node1 is brought back up at this point, it will grab the resources back (auto_failback is enabled); that is not demonstrated here.

This completes the high-availability LVS setup based on heartbeat v1.

The end

Well, this is the end of the implementation of LVS high availability based on heartbeat v1. The whole process is quite fun. Feel free to leave a message if you run into problems during configuration. The next article will cover high-availability clusters based on Heartbeat v2; keep following if you are interested. The above is for personal study only; if there are mistakes or omissions, please point them out gently~~~
