HeartBeat + DRBD + MySQL + NFS Deployment Document

System environment

System: CentOS 6.6

Architecture: x86_64

Software environment

heartbeat-3.0.4-2

drbd-8.4.3

nfs-utils-1.2.3-26

Deployment environment

Role IP

VIP 192.168.1.13 (the intranet address used to provide the service)

data-09.com br0:192.168.1.9

data-11.com br0:192.168.1.11

1. DRBD chapter

Note: DRBD can use a whole hard disk, a partition, or a logical volume as its backing device, but do not create a file system on that device beforehand.
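For reference, a minimal sketch of creating the backing logical volume used later in drbd.conf (/dev/data/data_lv); the physical disk /dev/sdb is an assumption, substitute your own device and do not run mkfs on the LV:

pvcreate /dev/sdb                       # initialize the physical volume
vgcreate data /dev/sdb                  # volume group "data"
lvcreate -n data_lv -l 100%FREE data    # logical volume /dev/data/data_lv (no mkfs!)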

1), install dependent packages

[root@data-09.com ~]# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers

2), install drbd

[root@data-09.com ~]# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz

[root@data-09.com ~]# tar zxvf drbd-8.4.3.tar.gz

[root@data-09.com ~]# cd drbd-8.4.3

[root@data-09.com ~]# ./configure --prefix=/usr/local/tdoa/drbd --with-km

[root@data-09.com ~]# make KDIR=/usr/src/kernels/2.6.32-279.el6.x86_64/
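The KDIR value must match the kernel headers of the running kernel; assuming the stock CentOS layout, you can confirm the path with:

uname -r                              # e.g. 2.6.32-279.el6.x86_64
ls -d /usr/src/kernels/$(uname -r)    # this is the directory KDIR should point to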

[root@data-09.com ~]# make install

[root@data-09.com ~]# mkdir -p /usr/local/tdoa/drbd/var/run/drbd

[root@data-09.com ~]# cp /usr/local/tdoa/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d

Load the DRBD module:

[root@data-09.com ~]# modprobe drbd
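A quick sanity check that the module is actually loaded (not part of the original steps):

lsmod | grep drbd    # should print a line for the drbd kernel module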

3), configure DRBD

The configuration file is identical on the active and standby nodes.

[root@data-09.com ~]# cat /usr/local/tdoa/drbd/etc/drbd.conf

resource r0 {
    protocol C;
    startup { wfc-timeout 0; degr-wfc-timeout 120; }
    disk { on-io-error detach; }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer { rate 100M; }
    on data-09.com {
        device /dev/drbd0;
        disk /dev/data/data_lv;
        address 192.168.1.9:7788;
        meta-disk internal;
    }
    on data-11.com {
        device /dev/drbd0;
        disk /dev/data/data_lv;
        address 192.168.1.11:7788;
        meta-disk internal;
    }
}

4), initialize the r0 resource of drbd and start it

Operations to be done on both nodes

[root@data-09.com ~]# drbdadm create-md r0

[root@data-09.com ~]# drbdadm up r0

The status of data-09.com and data-11.com should be similar to the following:

[root@data-09.com ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@data-09.com, 2014-02-26 07:26:07

0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

5), promote data-09.com to the primary node

[root@data-09.com ~]# drbdadm primary --force r0

The status of data-09.com should now be similar to the following:

[root@data-09.com ~]# cat /proc/drbd

version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@data-09.com, 2014-02-26 07:28:26

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

ns:4 nr:0 dw:4 dr:681 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Note: the DRBD service needs to start automatically at boot.
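One way to do this with the init script copied to /etc/rc.d/init.d above (an assumption; adapt if you manage services differently):

chkconfig --add drbd    # register the init script
chkconfig drbd on       # start DRBD automatically at boot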

2. NFS chapter

yum install nfs-utils portmap -y    # install the NFS service

vim /etc/exports

/usr/local/tdoa/data/attach 192.168.100.0/24(rw,no_root_squash)

/usr/local/tdoa/data/attachment 192.168.100.0/24(rw,no_root_squash)

service rpcbind restart

service nfs restart
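Optionally, verify that the exports took effect (standard exportfs usage; not part of the original steps):

exportfs -v    # should list /usr/local/tdoa/data/attach and /usr/local/tdoa/data/attachment for 192.168.100.0/24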

chkconfig rpcbind on

chkconfig nfs off

service nfs stop

Test that NFS can be mounted by the front-end web server, and stop the NFS service after a write has been verified.
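A minimal sketch of that test from a front-end web server (the client must fall inside the 192.168.100.0/24 range allowed in /etc/exports; the server address 192.168.1.9 is the current DRBD primary):

mount -t nfs 192.168.1.9:/usr/local/tdoa/data/attach /mnt
touch /mnt/nfs_write_test && rm -f /mnt/nfs_write_test   # confirm writes work
umount /mnt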

3. MySQL chapter

1. Create the highly available directory /usr/local/data

The data5 subdirectory holds the database files

2. The main change for heartbeat is to set the MySQL data directory to /usr/local/data/data5 (on the DRBD-backed mount managed by heartbeat)
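A minimal sketch of the corresponding my.cnf change (only the datadir value comes from this document; keep whatever else your installation already uses):

[mysqld]
datadir = /usr/local/data/data5    # store MySQL data on the DRBD-backed mount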


3. After MySQL has been installed on both the primary and standby heartbeat servers, switch the DRBD partition over to the standby machine and check that MySQL works properly there.

Demote the primary to a standby machine:

[root@data-09.com /]# drbdadm secondary r0

[root@data-09.com /]# cat /proc/drbd

On the standby machine data-11.com, promote it to primary:

[root@data-11.com /]# drbdadm primary r0
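A quick way to check MySQL on the new primary (a sketch; the mount point and init script name are taken from elsewhere in this document, and MySQL credentials are assumed to be configured):

mount /dev/drbd0 /usr/local/data     # mount the DRBD device on the promoted node
service mysql start                  # start MySQL against /usr/local/data/data5
mysql -e "show databases;"           # add -u/-p options as needed
service mysql stop
umount /usr/local/data               # clean up before handing control back to heartbeat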


4. Heartbeat chapter

(1.1), YUM install heartbeat

[root@data-09.com ~]# wget http://mirrors.sohu.com/fedora-epel/6Server/x86_64/epel-release-6-8.noarch.rpm

[root@data-09.com ~]# rpm -ivh epel-release-6-8.noarch.rpm

[root@data-09.com ~]# yum install heartbeat -y

(1.2), RPM install heartbeat

1. yum install "liblrm.so.2()(64bit)"

2. rpm -ivh PyXML-0.8.4-19.el6.x86_64.rpm

3. rpm -ivh perl-TimeDate-1.16-13.el6.noarch.rpm

4. rpm -ivh resource-agents-3.9.5-12.el6_6.1.x86_64.rpm

5. rpm -ivh cluster-glue-1.0.5-6.el6.x86_64.rpm

6. rpm -ivh cluster-glue-libs-1.0.5-6.el6.x86_64.rpm

7. rpm -ivh heartbeat-libs-3.0.4-2.el6.x86_64.rpm heartbeat-3.0.4-2.el6.x86_64.rpm

Remarks: heartbeat-libs and heartbeat must be installed together

(2), configure heartbeat

The configuration files (ha.cf, authkeys, haresources) are essentially the same on the active and standby nodes; only the ucast line in ha.cf differs, as noted below.

cp /usr/share/doc/heartbeat-3.0.4/ha.cf /etc/ha.d/

cp /usr/share/doc/heartbeat-3.0.4/haresources /etc/ha.d/

cp /usr/share/doc/heartbeat-3.0.4/authkeys /etc/ha.d/

vim /etc/ha.d/ha.cf

############################################

logfile /var/log/ha-log              #log file
logfacility local0                   #syslog facility
keepalive 2                          #heartbeat interval (seconds)
deadtime 5                           #dead time (seconds)
ucast eth3 192.168.2.33              #heartbeat NIC and the peer's IP (this is the only line to change on the standby machine)
auto_failback off                    #when on, resources move back to the primary after it recovers; off leaves them where they are
node oa-mysql.com oa-haproxy.com     #hostnames of the two nodes (for this deployment: data-09.com and data-11.com)

############################################

vim /etc/ha.d/authkeys #Heartbeat password file permissions must be 600

#####################

auth 3 #Select algorithm 3, MD5 algorithm

#1 crc

#2 sha1 HI!

3 md5 heartbeat

#####################
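Heartbeat refuses to start if authkeys is readable by others, so set the permission explicitly:

chmod 600 /etc/ha.d/authkeys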

vim /etc/ha.d/haresources

###################################################################

data-09.com IPaddr::192.168.1.13/24/br0 drbddisk::r0 Filesystem::/dev/drbd0::/usr/local/data::ext4 mysql nfs

Note: the fields are, in order: hostname of the primary server, VIP/netmask/bound NIC, DRBD resource, file-system resource (DRBD device::mount directory::file system), mysql service, NFS service.

(5), create the drbddisk, nfs, and mysql scripts and grant them execute permission (the three resource management scripts must be stored in /etc/ha.d/resource.d)

[root@data-09.com ~]# cat /etc/ha.d/resource.d/drbddisk

#############################################################

#!/bin/bash
#
# This script is intended to be used as a resource script by heartbeat
#
# Copyright 2003-2008 LINBIT Information Technologies
# Philipp Reisner, Lars Ellenberg
#
###

DEFAULTFILE="/etc/default/drbd"
DRBDADM="/sbin/drbdadm"

if [ -f $DEFAULTFILE ]; then
  . $DEFAULTFILE
fi

if [ "$#" -eq 2 ]; then
  RES="$1"
  CMD="$2"
else
  RES="all"
  CMD="$1"
fi

## EXIT CODES
# since this is a "legacy heartbeat R1 resource agent" script,
# exit codes actually do not matter that much as long as we conform to
# http://wiki.linux-ha.org/HeartbeatResourceAgent
# but it does not hurt to conform to lsb init-script exit codes,
# where we can.
# http://refspecs.linux-foundation.org/LSB_3.1.0/
# LSB-Core-generic/LSB-Core-generic/iniscrptact.html
####

drbd_set_role_from_proc_drbd()
{
  local out
  if ! test -e /proc/drbd; then
    ROLE="Unconfigured"
    return
  fi

  dev=$( $DRBDADM sh-dev $RES )
  minor=${dev#/dev/drbd}
  if [[ $minor = *[!0-9]* ]]; then
    # sh-minor is only supported since drbd 8.3.1
    minor=$( $DRBDADM sh-minor $RES )
  fi
  if [[ -z $minor ]] || [[ $minor = *[!0-9]* ]]; then
    ROLE=Unknown
    return
  fi

  if out=$(sed -ne "/^ *$minor: cs:/ { s/:/ /g; p; q; }" /proc/drbd); then
    set -- $out
    ROLE=${5%/**}
    : ${ROLE:=Unconfigured} # if it does not show up
  else
    ROLE=Unknown
  fi
}

case "$CMD" in
  start)
    # try several times, in case heartbeat deadtime
    # was smaller than drbd ping time
    try=6
    while true; do
      $DRBDADM primary $RES && break
      let "--try" || exit 1 # LSB generic error
      sleep 1
    done
    ;;
  stop)
    # heartbeat (haresources mode) will retry failed stop
    # for a number of times in addition to this internal retry.
    try=3
    while true; do
      $DRBDADM secondary $RES && break
      # We used to lie here, and pretend success for anything != 11,
      # to avoid the reboot on failed stop recovery for "simple
      # config errors" and such. But that is incorrect.
      # Don't lie to your cluster manager.
      # And don't do config errors...
      let --try || exit 1 # LSB generic error
      sleep 1
    done
    ;;
  status)
    if [ "$RES" = "all" ]; then
      echo "A resource name is required for status inquiries."
      exit 10
    fi
    ST=$( $DRBDADM role $RES )
    ROLE=${ST%/**}
    case $ROLE in
      Primary|Secondary|Unconfigured)
        # expected
        ;;
      *)
        # unexpected. whatever...
        # If we are unsure about the state of a resource, we need to
        # report it as possibly running, so heartbeat can, after failed
        # stop, do a recovery by reboot.
        # drbdsetup may fail for obscure reasons, e.g. if /var/lock/ is
        # suddenly readonly. So we retry by parsing /proc/drbd.
        drbd_set_role_from_proc_drbd
    esac
    case $ROLE in
      Primary)
        echo "running (Primary)"
        exit 0 # LSB status "service is OK"
        ;;
      Secondary|Unconfigured)
        echo "stopped ($ROLE)"
        exit 3 # LSB status "service is not running"
        ;;
      *)
        # NOTE the "running" in below message.
        # this is a "heartbeat" resource script,
        # the exit code is _ignored_.
        echo "cannot determine status, may be running ($ROLE)"
        exit 4 # LSB status "service status is unknown"
        ;;
    esac
    ;;
  *)
    echo "Usage: drbddisk [resource] {start|stop|status}"
    exit 1
    ;;
esac

exit 0

#############################################################

[root@data-09.com ~]# cat /etc/ha.d/resource.d/nfs

killall -9 nfsd; /etc/init.d/nfs restart; exit 0
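Written out as a complete file, a minimal sketch of that resource script (heartbeat passes start/stop/status arguments, which this simple version deliberately ignores):

#!/bin/bash
# /etc/ha.d/resource.d/nfs -- crude NFS resource agent used by haresources
killall -9 nfsd              # force-kill any stale nfsd processes
/etc/init.d/nfs restart      # restart the NFS service
exit 0                       # always report success to heartbeat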

The mysql startup script can simply be the init script that ships with MySQL:

cp /etc/init.d/mysql /etc/ha.d/resource.d/

Note: the three scripts (drbddisk, nfs, mysql) must have execute (+x) permission.
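For example:

chmod +x /etc/ha.d/resource.d/drbddisk /etc/ha.d/resource.d/nfs /etc/ha.d/resource.d/mysql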

(6), start heartbeat

[root@data-09.com ~]# service heartbeat start    (start on both nodes at the same time)

[root@data-09.com ~]# chkconfig heartbeat off

Note: this disables auto-start at boot; after a server reboot, heartbeat must be started manually.

5. Test

On another Linux client, mount the NFS export via the virtual IP (192.168.1.13 in this deployment). A successful mount indicates that NFS+DRBD+Heartbeat is working.
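For example (a sketch; the export path comes from the NFS chapter and the VIP from the deployment environment table):

mount -t nfs 192.168.1.13:/usr/local/tdoa/data/attach /mnt
df -h /mnt    # the NFS mount should appear, served through the VIP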


Test the availability of DRBD+HeartBeat+NFS:

1. Copy files into the mounted /tmp directory, then abruptly restart the primary DRBD server. The transfer resumes from where it left off after failover, but a normal drbd+heartbeat switchover takes some time.

2. Suppose eth0 on the primary is taken down with ifdown, the secondary is then promoted to primary directly, and the device is mounted there: the files copied during the test on the old primary have indeed been synchronized. After bringing eth0 on the old primary back up, the master-slave relationship does not recover automatically. Further investigation shows that DRBD has detected split-brain and both nodes are running standalone; the error reads: Split-Brain detected, dropping connection! This is the legendary split brain, and DRBD officially recommends manual recovery (the chance of this happening in production is very low; who would go and touch a faulty server like this in production?).


Manual recovery from the split-brain state is as follows:

i. On the secondary:

1. drbdadm secondary r0

2. drbdadm disconnect all

3. drbdadm -- --discard-my-data connect r0

ii. On the primary:

1. drbdadm disconnect all

2. drbdadm connect r0

3. Suppose the primary's hardware is damaged and the Secondary needs to be promoted to a Primary host. The procedure is as follows:

On the primary host, first unmount the DRBD device:

umount /tmp

Demote the host to a standby machine:

[root@data-09.com /]# drbdadm secondary r0

[root@data-09.com /]# cat /proc/drbd

1: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r-----

Now both hosts are standby.

On the standby machine data-11.com, promote it to primary:

[root@data-11.com /]# drbdadm primary r0

[root@data-11.com /]# cat /proc/drbd

1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r-----

Known issues:

Heartbeat does not monitor resources. That is, if drbd or nfs itself dies, no action is taken; heartbeat only reacts when it believes the peer machine is dead. A primary/standby switchover therefore happens only when the other machine goes down or its network is disconnected. If resource-level monitoring is needed, there is another solution: corosync+pacemaker.
