Heartbeat v2 CRM: implementing an NFS-based MySQL high-availability cluster

Foreword

The built-in resource manager of heartbeat v1, haresources, is relatively simple and has no graphical management, so heartbeat v2 replaces it with the more powerful resource manager CRM for cluster management. This article explains how to build an NFS-based MySQL high-availability cluster with heartbeat v2 CRM.

High availability implementation

Experimental topology

Experimental environment

node1: 172.16.10.123    mariadb-5.5.36    CentOS 6.6

node2: 172.16.10.124    mariadb-5.5.36    CentOS 6.6

NFS:   172.16.10.125    CentOS 6.6

On Windows, Xmanager Enterprise 5 needs to be installed (to display the hb_gui X session used later).

Configuration process

NFS server configuration

[root@scholar ~]# vim /etc/exports
/mydata    172.16.0.0/16(rw,no_root_squash)
[root@scholar ~]# service nfs start
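
The preparation of the exported directory is not shown in the text. A minimal sketch, assuming a mysql user and group are created with a matching UID/GID (306 here is an assumption) on the NFS server so the nodes can write to the share:

[root@scholar ~]# groupadd -g 306 mysql
[root@scholar ~]# useradd -u 306 -g mysql -s /sbin/nologin -M mysql
[root@scholar ~]# mkdir -p /mydata/data /mydata/binlogs
[root@scholar ~]# chown -R mysql:mysql /mydata
[root@node1 ~]# showmount -e 172.16.10.125    # verify the export from a node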

Install mysql

#Do the following operations on node1 and node2 respectively
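
Mounting the NFS share on the nodes is not shown in the text. A minimal sketch of the node-side preparation, assuming the same mysql UID/GID as on the NFS server:

[root@node1 ~]# groupadd -g 306 mysql
[root@node1 ~]# useradd -u 306 -g mysql -s /sbin/nologin -M mysql
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount -t nfs 172.16.10.125:/mydata /mydata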

[root@node1 mysql]# vim /etc/mysql/my.cnf
datadir = /mydata/data
log-bin = /mydata/binlogs/master-bin

[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
# Note: the initialization only needs to be performed on one node

Start the service and test it
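
The start and test output only exists as screenshots in the original. A sketch of what it might look like, assuming the MariaDB init script was installed as mysqld:

[root@node1 mysql]# service mysqld start
[root@node1 mysql]# mysql -e 'SHOW DATABASES;'
[root@node1 mysql]# service mysqld stop    # stop it afterwards so node2 can be tested and heartbeat can manage the service later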

After performing the above operations on node2 (except the datadir initialization), start the service and test there as well.

Unmount the shared directory after the test is complete

[root@node1 mysql]# umount /mydata/
# Execute on both nodes

For safety, the no_root_squash option can now be removed from the NFS export.

[root@scholar ~]# vim /etc/exports
/mydata    172.16.0.0/16(rw)
[root@scholar ~]# exportfs -arv
exporting 172.16.0.0/16:/mydata

heartbeat configuration

Time synchronization

ntpdate
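
Only the tool name is given in the original. A minimal sketch, assuming 172.16.0.1 (the ping node used later) also serves NTP; substitute whatever time server is reachable in your environment:

[root@node1 ~]# ntpdate 172.16.0.1
[root@node2 ~]# ntpdate 172.16.0.1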

Hostname resolution between nodes

[root@node1 ~]# vim /etc/hosts
172.16.10.123   node1.scholar.com node1
172.16.10.124   node2.scholar.com node2
[root@node1 ~]# vim /etc/sysconfig/network
HOSTNAME=node1.scholar.com
[root@node1 ~]# uname -n
node1.scholar.com
# Both nodes need to be configured as above

SSH key configuration

[root@node1 ~]# ssh-keygen -t rsa -P ''
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
[root@node2 ~]# ssh-keygen -t rsa -P ''
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1
[root@node1 ~]# date; ssh node2 'date'    # test
Mon Jun  8 16:33:36 CST 2015
Mon Jun  8 16:33:36 CST 2015

Install the required software packages

# Resolve dependencies (requires the EPEL repository)
[root@node1 ~]# yum install perl-TimeDate net-snmp-libs libnet PyXML -y
[root@node1 ~]# cd heartbeat2/
[root@node1 heartbeat2]# ls
heartbeat-2.1.4-12.el6.x86_64.rpm             heartbeat-pils-2.1.4-12.el6.x86_64.rpm
heartbeat-gui-2.1.4-12.el6.x86_64.rpm         heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
[root@node1 heartbeat2]# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm heartbeat-gui-2.1.4-12.el6.x86_64.rpm
# Both nodes perform the above operation

Prepare the configuration file

[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/{ha.cf,authkeys} /etc/ha.d
[root@node1 ~]# chmod 600 /etc/ha.d/authkeys

Configure the authentication key

[root@node1 ~]# openssl rand -hex 8
4d8fd6cb49d2047b
[root@node1 ~]# vim /etc/ha.d/authkeys
auth 2
2 sha1 4d8fd6cb49d2047b

Configure the main configuration file

# The resulting configuration is as follows
[root@node1 ~]# grep -v "^#" /etc/ha.d/ha.cf | grep -v "^$"
logfile /var/log/ha-log          # log file location
keepalive 2                      # heartbeat interval
deadtime 30                      # time after which the backup node declares the primary dead and takes over its resources
warntime 10                      # time after which a late-heartbeat warning is logged
initdead 120                     # extra startup delay to allow the network to come up
udpport 694                      # heartbeat communication port
mcast eth0 225.0.25.1 694 1 0    # multicast heartbeat address
auto_failback on                 # whether resources move back to the primary node after it recovers
node node1.scholar.com           # primary node
node node2.scholar.com           # backup node
ping 172.16.0.1                  # ping (arbitration) node
crm on                           # enable the CRM

Copy the configuration files to the backup node
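
The copy command itself is not reproduced in the text. A minimal sketch, using the SSH trust set up earlier:

[root@node1 ~]# scp -p /etc/ha.d/{ha.cf,authkeys} node2:/etc/ha.d/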

The CRM graphical interface requires a login password. Installing heartbeat-gui automatically creates a user named hacluster; the hacluster password can be set on any node you intend to connect the GUI to.
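
For example, on node1 (the actual node and password are up to you):

[root@node1 ~]# passwd hacluster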

Start heartbeat
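
The start commands only appear as screenshots in the original; on CentOS 6 they would typically be:

[root@node1 ~]# service heartbeat start
[root@node1 ~]# ssh node2 'service heartbeat start'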

View cluster status

[root@node1 ~]# crm_mon

Start the gui interface

[root@node1 ~]# hb_gui &

Nodes: the node list
Resources: the resource list
Constraints: resource constraints
  Locations: location constraints — how strongly a resource prefers to run on a given node
  Orders: order constraints — the start and stop ordering of resources that belong to the same service and run on the same node
  Colocations: colocation constraints — whether resources must run together on the same node

Resource types:
  primitive (native): a primary resource; it can run on only one node at a time
  group: a resource group; it constrains several resources to run on the same node and lets them be managed as a unit
  clone: a cloned resource; one resource can run on several nodes at once, and you can specify the maximum total number of clones and the maximum number per node
  master/slave: master-slave resources, a special kind of clone resource

MySQL high availability here requires three resources: the IP address, mysqld, and the NFS mount. All three must run on the same node, and mysqld must start only after the NFS share is mounted. There is no strict ordering between the IP address and mysqld; the IP can also be brought up after the service is ready. (A highly available httpd service, by contrast, needs the IP first, because httpd explicitly binds to that address when it starts; the two kinds of highly available resources differ in this respect.)

Add resources

First define a group

Add the first resource

Add a second resource

Add a third resource


Add resource constraints

nfs and mysqld must be on the same node

The IP must be on the same node as mysqld

After the colocation constraints are defined, these three resources will run on the same node. Since the mysqld service must start only after the NFS share is mounted, a resource order constraint also needs to be defined.

After the order constraint is defined, you can also define a location constraint, which expresses which node the resources prefer to run on.

Add an expression and a preference score for the node

Start all resources

Because of the preference for node1, the resources run on node1. We now authorize a test user on node1.
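
The exact statement only appears in the original screenshot. A hedged sketch (the user name, password, and allowed network are assumptions):

MariaDB [(none)]> GRANT ALL ON *.* TO 'test'@'172.16.%.%' IDENTIFIED BY 'testpass';
MariaDB [(none)]> FLUSH PRIVILEGES;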

Test from another client
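
A sketch of the remote test, assuming the cluster VIP added as the first resource is 172.16.10.100 (hypothetical; the actual address is only visible in the screenshots):

[root@client ~]# mysql -u test -ptestpass -h 172.16.10.100 -e 'SELECT VERSION();'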

Now simulate a failure of node1 by putting node1 into standby mode
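
In the original this is done through the hb_gui interface (select the node and set it to standby). The same thing can be done from the command line with crm_standby, though the exact option names vary between versions; a sketch:

[root@node1 ~]# crm_standby -U node1.scholar.com -v on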

connect to the database again to test

The data is not affected in the slightest. If node1 comes back online, the resources will fail back to node1 (auto_failback is on); this is not demonstrated here. At this point the NFS-based MySQL high-availability cluster built on heartbeat v2 CRM is complete.

The end

This experiment ends here. If you run into any problems while reproducing it, feel free to leave a message. A follow-up article will cover another high-availability solution, mysql + drbd + corosync, so stay tuned if you are interested. The above is only my personal study notes; corrections for any mistakes or omissions are welcome.
