Disclaimer: what follows is just a personal summary with accompanying notes. If anything in it is wrong, readers who spot it are asked to point it out so it does not mislead anyone. Thank you!!!
Preface:
First of all, we need to know what a cluster is: a loosely coupled multiprocessor system of N servers (externally they appear as a single server) that communicate with each other over the network. The N servers cooperate to jointly carry the request load of a website. In a cluster architecture, when the primary server fails, a backup server automatically takes over its work and switches over in time, achieving uninterrupted service for users.
The working principle of heartbeat (Linux-HA): the core of heartbeat consists of two parts, heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links or serial ports, and redundant links are supported. The nodes send messages to each other reporting their current status; if no message arrives from the peer within the specified time, the peer is considered dead, and the resource takeover module is activated to take over the resources or services running on the peer host.
************************ Now for the hands-on part ************************
Environment: Red Hat Enterprise Linux 6.5, with iptables and SELinux turned off
Node1:172.25.28.1(vm1.example.com)
Node2:172.25.28.4(vm4.example.com)
First, modify the /etc/hosts resolution file on both node1 and node2 to include:
172.25.28.4 vm4.example.com
172.25.28.1 vm1.example.com
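A minimal sketch of doing this on each node; uname -n must return exactly these names, since ha.cf refers to the nodes by them:
cat >> /etc/hosts <<'EOF'
172.25.28.1 vm1.example.com
172.25.28.4 vm4.example.com
EOF
uname -n                   # must print vm1.example.com (or vm4.example.com on node2)
ping -c1 vm4.example.com   # from vm1; ping vm1.example.com from vm4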
Download the rpm packages required for the experiment: heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm ldirectord-3.9.5-3.1.x86_64.rpm
Then install the packages on both nodes: yum localinstall *.rpm (run this in the directory containing the rpm packages)
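For example (the download directory here is hypothetical; run this on both nodes):
cd /root/rpms                                # wherever the downloaded rpm packages live (hypothetical path)
yum localinstall -y *.rpm                    # pulls any remaining dependencies from the configured repos
rpm -q heartbeat heartbeat-libs ldirectord   # verify the installation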
The specific configuration process is as follows:
[root@vm1 ha.d]# cd /etc/ha.d
[root@vm1 ha.d]# less README.config View the README under the main configuration directory
According to the README, three files are needed under /etc/ha.d: ha.cf (the main configuration file), haresources (the resource configuration file), and authkeys (the authentication information). Sample copies ship with the package; rpm -q heartbeat -d lists their exact paths:
[root@vm1 ha.d]# rpm -q heartbeat -d
/usr/share/doc/heartbeat-3.0.4/authkeys
/usr/share/doc/heartbeat-3.0.4/ha.cf
/usr/share/doc/heartbeat-3.0.4/haresources
[root@vm1 ha.d]# cp /usr/share/doc/heartbeat-3.0.4/authkeys /usr/share/doc/heartbeat-3.0.4/ha.cf /usr/share/doc/heartbeat-3.0.4/haresources ./ Copy the sample configuration files into the main configuration directory
[root@vm1 ha.d]# vim ha.cf
34 logfacility local0
48 keepalive 2
56 deadtime 30
61 warntime 10
71 initdead 60
76 udpport 694
91 bcast eth0 # Linux
157 auto_failback on
211 node vm1.example.com
212 node vm4.example.com
220 ping 172.25.28.250
253 respawn hacluster /usr/lib64/heartbeat/ipfail
[root@vm1 ha.d]# grep -v ^# ha.cf
logfacility local0 The syslog facility used for heartbeat's logging
keepalive 2 Heartbeat interval, set it yourself; 1 means 1 second, 200ms means 200 milliseconds
deadtime 30 Node death time threshold: if the standby node has not received a heartbeat from the peer for 30 seconds, the peer node is considered dead. Set it yourself.
warntime 10 How long before a warning about a late heartbeat is issued; set it yourself
initdead 60 After the daemon first starts, wait 60 seconds before starting the resources on the primary server (this value should be at least twice the deadtime)
udpport 694 UDP port used to transmit heartbeat messages for bcast and ucast communication; keep the default so it does not conflict with anything else
bcast eth0 # Linux: broadcast heartbeats on the eth0 interface
auto_failback on Whether to automatically switch back after the master node is restored
node vm1.example.com The name of the primary node, which must match uname -n. The first node listed is the primary by default, so do not get the order wrong.
node vm4.example.com The name of the secondary node, which must also match uname -n
ping 172.25.28.250 Load the ping module; this address serves as the ping node for connectivity checks
respawn hacluster /usr/lib64/heartbeat/ipfail The lib64 path must be written correctly here; it was overlooked at first, which caused a long stretch of pain with the heartbeat service refusing to start
apiauth ipfail gid=haclient uid=hacluster By default, heartbeat does not monitor any service other than itself, nor does it check network conditions, so when the network is interrupted no switchover between the Load Balancer and the Backup takes place. The ipfail plug-in together with 'ping nodes' solves this, but a cluster node itself cannot be used as a ping node.
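Putting the pieces together, the effective (non-comment) ha.cf for this setup boils down to the following; the ping address is this lab's gateway, so substitute a stable, non-cluster IP in your own environment:
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
bcast eth0
auto_failback on
node vm1.example.com
node vm4.example.com
ping 172.25.28.250
respawn hacluster /usr/lib64/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster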
[root@vm1 ha.d]# vim haresources Modify the resource file
vm1.example.com IPaddr::172.25.28.100/24/eth0 mysqld
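A quick note on the haresources format, which the line above follows: the first field is the node that owns the resources by default, and each following field is a resource started left-to-right (and stopped right-to-left on failover). Resource scripts are looked up under /etc/ha.d/resource.d/ and /etc/init.d/, so mysqld must exist as an init script. Broken down:
# General form (one line per resource group):
#   <preferred-node> <resource1>[::arg1[::arg2...]] <resource2> ...
vm1.example.com IPaddr::172.25.28.100/24/eth0 mysqld
#   vm1.example.com                -> node that owns the resources by default
#   IPaddr::172.25.28.100/24/eth0  -> bring up VIP 172.25.28.100/24 on eth0
#   mysqld                         -> then start the mysqld init script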
[root@vm1 ha.d]# vim authkeys Modify authentication information
#
auth 1
1 crc
#2 sha1 HI!
#3 md5 Hello!
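One hardening note: crc is only a checksum and provides no authentication, so it really only suits an isolated link such as a crossover cable. On a shared network, a keyed method such as sha1 is the safer choice; a sketch of an sha1-based authkeys (the secret below is a placeholder, pick your own):
# placeholder secret below; choose your own and keep this file identical on both nodes
auth 2
2 sha1 MySharedSecret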
Install the mysqld service but do not start it; the heartbeat service will start it automatically. Then modify the permissions of the authentication file:
[root@vm1 ha.d]# chmod 600 authkeys
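For completeness, a minimal sketch of preparing mysqld on both nodes (RHEL 6 package and init-script names; adjust to your repositories):
yum install -y mysql-server   # install the service but do not start it
chkconfig mysqld off          # heartbeat, not init, decides where mysqld runs
/etc/init.d/mysqld status     # should report that mysqld is stopped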
Copy the configuration files to the other node (node2):
[root@vm1 ha.d]# scp haresources authkeys ha.cf 172.25.28.4:/etc/ha.d/
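scp normally carries the file mode across, but it is worth verifying on node2 that authkeys is still mode 600, since heartbeat refuses to start otherwise:
ssh 172.25.28.4 'ls -l /etc/ha.d/authkeys'       # expect -rw------- (600)
ssh 172.25.28.4 'chmod 600 /etc/ha.d/authkeys'   # fix it if it is not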
Start the heartbeat service (on both nodes):
[root@vm1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped Done.
If something goes wrong, just watch the logs: tail -f /var/log/messages and tail -f /var/log/ha-log. Now the service runs on the primary node. If heartbeat on the primary node goes down, the service automatically switches to the secondary node; when the primary node returns to normal, the service is taken back (because of auto_failback on). After heartbeat is started, the mysql service starts automatically, and when heartbeat is stopped, the mysql service is stopped as well.
Use ip addr to check whether the virtual IP 172.25.28.100 is up
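A quick way to verify failover, assuming heartbeat is running on both nodes (run each command on the node indicated):
# on vm1: confirm the VIP and mysqld are active here
ip addr show eth0 | grep 172.25.28.100
/etc/init.d/mysqld status
# on vm1: simulate a primary failure
/etc/init.d/heartbeat stop
# on vm4: within deadtime (about 30 s) the VIP and mysqld should appear here
ip addr show eth0 | grep 172.25.28.100
/etc/init.d/mysqld status
# on vm1: bring heartbeat back; with auto_failback on, the resources return to vm1
/etc/init.d/heartbeat start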
Done!!!