Heartbeat + DRBD + NFS High Availability

1. Environment

System: CentOS 6.4 x64, minimal installation

node1: 192.168.1.13

node2: 192.168.1.14

vip: 192.168.1.15

nfs: 192.168.1.10

2. Basic configuration

The operations on node1 and node2 are identical.

# Disable iptables and SELinux
[root@node1 ~]# getenforce
Disabled          # make sure this shows Disabled
[root@node1 ~]# service iptables stop
# Configure local hosts resolution
[root@node1 ~]# echo "192.168.1.13 node1" >> /etc/hosts
[root@node1 ~]# echo "192.168.1.14 node2" >> /etc/hosts
[root@node1 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 node1
192.168.1.14 node2
# Configure the EPEL repository
[root@node1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@node1 ~]# sed -i 's@#b@b@g' /etc/yum.repos.d/epel.repo
[root@node1 ~]# sed -i 's@mirrorlist@#mirrorlist@g' /etc/yum.repos.d/epel.repo
# Synchronize time
[root@node1 ~]# yum install ntp -y
[root@node1 ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" > /var/spool/cron/root
[root@node1 ~]# ntpdate asia.pool.ntp.org
21 Jun 17:32:45 ntpdate[1561]: step time server 211.233.40.78 offset -158.552839 sec
[root@node1 ~]# hwclock -w
# Configure SSH mutual trust
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
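One caveat on the hosts step: plain `>>` redirection appends a duplicate entry every time the setup is re-run. A small sketch of an idempotent variant (the `add_host_entry` helper and the temp-file target are illustrative, not part of the original procedure; on a real node the target would be /etc/hosts):

```shell
# Append a hosts entry only if it is not already present, so re-running
# the setup does not create duplicate lines. Writes to a temp file here.
add_host_entry() {
    local file="$1" ip="$2" name="$3"
    grep -qE "^${ip}[[:space:]]+${name}\$" "$file" 2>/dev/null \
        || echo "${ip} ${name}" >> "$file"
}

HOSTS_FILE=$(mktemp)
add_host_entry "$HOSTS_FILE" 192.168.1.13 node1
add_host_entry "$HOSTS_FILE" 192.168.1.14 node2
add_host_entry "$HOSTS_FILE" 192.168.1.13 node1   # repeat is a no-op
cat "$HOSTS_FILE"                                 # each entry appears exactly once
```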

3. Install and configure Heartbeat

(1). Install heartbeat

# Perform the installation on both node1 and node2
[root@node1 ~]# yum install heartbeat -y

(2). Configure ha.cf

[root@node1 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@node1 heartbeat-3.0.4]# cp authkeys ha.cf haresources /etc/ha.d/
[root@node1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@node1 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
[root@node1 ha.d]# egrep -v "^$|^#" /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local1
keepalive 2
deadtime 30
warntime 10
initdead 120
mcast eth0 225.0.10.1 694 1 0
auto_failback on
node node1
node node2

(3). Configure authkeys

[root@node1 ha.d]# dd if=/dev/random bs=512 count=1 | openssl md5
0+1 records in
0+1 records out
21 bytes (21 B) copied, 3.1278e-05 s, 671 kB/s
(stdin)= 4206bd8388c16292bc03710a0c747f59
[root@node1 ha.d]# grep -v ^# /etc/ha.d/authkeys
auth 1
1 md5 4206bd8388c16292bc03710a0c747f59
# Change the permissions of the authentication file to 600
[root@node1 ~]# chmod 600 /etc/ha.d/authkeys
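Note that `/dev/random` can block when the entropy pool is low, which may make the `dd` above hang on a freshly installed minimal system. A hedged alternative sketch using the non-blocking `/dev/urandom` and coreutils `md5sum` (the `gen_authkey` helper name is mine, not from the article), with a format check before the digest goes into authkeys:

```shell
# Generate a 32-hex-character md5 digest for the authkeys file from
# /dev/urandom, which never blocks, unlike /dev/random used above.
gen_authkey() {
    dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}'
}

KEY=$(gen_authkey)
echo "$KEY"                          # e.g. 4206bd8388c16292bc03710a0c747f59
printf 'auth 1\n1 md5 %s\n' "$KEY"   # the two lines that belong in authkeys
```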

(4). Configure haresources

[root@node1 ha.d]# grep -v ^# /etc/ha.d/haresources
node1 IPaddr::192.168.1.15/24/eth0
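The haresources line packs four values into one token: the resource agent, the VIP, the prefix length, and the interface, separated by `::` and `/`. A small parsing sketch (the `parse_haresources` function is illustrative, not a Heartbeat tool) showing how the `IPaddr` arguments decompose:

```shell
# Split a haresources entry of the form
#   node1 IPaddr::192.168.1.15/24/eth0
# into owner node, resource agent, IP, prefix length, and interface,
# using only bash parameter expansion.
parse_haresources() {
    local line="$1"
    local node=${line%% *}          # node1
    local spec=${line#* }           # IPaddr::192.168.1.15/24/eth0
    local agent=${spec%%::*}        # IPaddr
    local args=${spec#*::}          # 192.168.1.15/24/eth0
    local ip=${args%%/*}            # 192.168.1.15
    local rest=${args#*/}           # 24/eth0
    local prefix=${rest%%/*}        # 24
    local iface=${rest#*/}          # eth0
    echo "$node $agent $ip $prefix $iface"
}

parse_haresources "node1 IPaddr::192.168.1.15/24/eth0"
# → node1 IPaddr 192.168.1.15 24 eth0
```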

(5). Start heartbeat

[root@node1 ha.d]# scp authkeys haresources ha.cf node2:/etc/ha.d/
# Start the service on node1
[root@node1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@node1 ~]# chkconfig heartbeat off
# Note: auto-start on boot is disabled; after a server restart, heartbeat must be started manually
# Start the service on node2
[root@node2 ~]# /etc/init.d/heartbeat start
# View the result
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0   # VIP on the master node
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0             # no VIP

(6). Test heartbeat

Normal state

# node1 information
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0   # VIP on the master node
# node2 information
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
# There is no VIP on the standby node

Simulate the state after the master node goes down

# Stop the heartbeat service on the master node node1
[root@node1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# After the master's heartbeat service stops, the VIP resource is taken over by the standby
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
# View the resources on the standby node node2
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0   # VIP has failed over to node2

Restore the heartbeat service on the master node

[root@node1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
# After the master's heartbeat service resumes, it takes the resources back
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0
# View the standby node; the VIP resource has been removed
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
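Each of the checks above boils down to one question: does a node's `ip a` output carry the VIP? A sketch that answers it from captured output so the failover test can be scripted (`has_vip` is an illustrative helper, not a Heartbeat tool; the sample text is abbreviated from the listings above):

```shell
# Return success if the `ip a` text on stdin contains the given VIP.
has_vip() {
    grep -q "inet $1/" -
}

# Abbreviated sample captures, mirroring the normal state above
NODE1_OUT="inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0"
NODE2_OUT="inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0"

echo "$NODE1_OUT" | has_vip 192.168.1.15 && echo "VIP on node1"
echo "$NODE2_OUT" | has_vip 192.168.1.15 || echo "no VIP on node2"
# → VIP on node1
# → no VIP on node2
```

On a live node the capture would come from `ip a | grep eth0` as in the listings above.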

4. Install and deploy DRBD

(1). Partition the hard disk; the operations on node1 and node2 are identical

[root@node1 ~]# fdisk /dev/sdb
# Note: /dev/sdb is divided into two partitions, /dev/sdb1 and /dev/sdb2; /dev/sdb1 = 19G
[root@node1 ~]# partprobe /dev/sdb
# Format the data partition
[root@node1 ~]# mkfs.ext4 /dev/sdb1
# Note: the sdb2 partition holds DRBD metadata and does not need to be formatted
[root@node1 ~]# tune2fs -c -1 /dev/sdb1
# Note: set the maximum mount count to -1, disabling the forced filesystem check
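How large does the metadata partition /dev/sdb2 need to be? The DRBD manual gives a rule of thumb of roughly 32 KiB of metadata per 1 GiB of data plus 1 MiB of overhead; the helper below is my sketch applying that estimate, not part of the original procedure:

```shell
# Rough external-metadata size estimate for DRBD, per the rule of thumb
# in the DRBD user guide: ~32 KiB per GiB of data, plus 1 MiB overhead.
meta_size_kib() {
    local data_gib="$1"
    echo $(( data_gib * 32 + 1024 ))
}

meta_size_kib 19    # for the 19G /dev/sdb1 above → 1632 (KiB)
```

So even a small /dev/sdb2 comfortably holds the metadata for this layout.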

(2). Install DRBD

Because our system is CentOS 6.4, we also need to install the kernel development packages, whose version must match `uname -r`. The packages were extracted from the system installation media (process omitted). The installation is the same on node1 and node2; only node1 is shown here.

# Install the kernel development files
[root@node1 ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
[root@node1 ~]# yum install drbd84 kmod-drbd84 -y
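Since kmod-drbd84 is built against a specific kernel, it is worth confirming that the version embedded in the rpm file name matches `uname -r` before installing. A sketch (the helper name and the `kernel-devel-<version>.rpm` naming convention are assumptions) that extracts the version from the package file name:

```shell
# Pull the kernel version out of a kernel-devel rpm file name so it can
# be compared with the running kernel (`uname -r`).
rpm_kernel_version() {
    local f="$1"
    f=${f##*/}                 # drop any leading path
    f=${f%.rpm}                # drop the .rpm suffix
    echo "${f#kernel-devel-}"  # drop the package-name prefix
}

rpm_kernel_version kernel-devel-2.6.32-358.el6.x86_64.rpm
# → 2.6.32-358.el6.x86_64
```

If this does not equal `uname -r`, the DRBD kernel module will not load.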

(3). Configure DRBD

a. Modify the global configuration file

[root@node1 ~]# egrep -v "^$|^#|^[[:space:]]+#" /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-md-flushes;
        rate 200M;
    }
    net {
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        cram-hmac-alg "sha1";
        shared-secret "…";           # the secret string is not recoverable from the source
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}

b. Add the resource definition

[root@node1 ~]# cat /etc/drbd.d/nfsdata.res
resource nfsdata {
    on node1 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.13:7789;
        meta-disk /dev/sdb2 [0];
    }
    on node2 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.14:7789;
        meta-disk /dev/sdb2 [0];
    }
}
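Since the two `on` blocks differ only in hostname and IP, the resource file can be generated instead of hand-edited, which avoids copy/paste drift between nodes. A sketch (`make_res` is an illustrative helper; the devices and port are the values used in this article):

```shell
# Emit a two-node DRBD resource definition like nfsdata.res from shell
# variables; only the node names and IPs vary between the two blocks.
make_res() {
    local res="$1" n1="$2" ip1="$3" n2="$4" ip2="$5"
    cat <<EOF
resource ${res} {
    on ${n1} {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   ${ip1}:7789;
        meta-disk /dev/sdb2 [0];
    }
    on ${n2} {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   ${ip2}:7789;
        meta-disk /dev/sdb2 [0];
    }
}
EOF
}

make_res nfsdata node1 192.168.1.13 node2 192.168.1.14
```

Redirect the output to /etc/drbd.d/nfsdata.res on both nodes to keep them identical.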

c. Copy the configuration files to node2, load the drbd module on both nodes, and initialize the metadata

[root@node1 ~]# cd /etc/drbd.d/
[root@node1 drbd.d]# scp global_common.conf nfsdata.res node2:/etc/drbd.d/
[root@node1 ~]# depmod
[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               1246  1 drbd
# Initialize the metadata on node1
[root@node1 ~]# drbdadm create-md nfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# Load the module and initialize the metadata on node2
[root@node2 ~]# depmod
[root@node2 ~]# modprobe drbd
[root@node2 ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               1246  1 drbd
[root@node2 ~]# drbdadm create-md nfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

d. Start drbd on node1 and node2

#node1 operation
