DRBD + HeartBeat + NFS Summary Command Record


Preliminary preparations: add a disk to each of the two machines and connect a dedicated heartbeat/sync network link between them.

Running fdisk -l shows a 10.7 GB device, /dev/sdb. Create a logical volume on /dev/sdb (the full LVM transcript is reproduced at the end of this post).
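Both DRBD and Heartbeat refer to the nodes by host name (fuzai01 and fuzai02 below), so the names must resolve on both machines and match uname -n. A minimal /etc/hosts sketch, assuming the dedicated sync link uses the 172.16.100.x addresses from the DRBD resource file below:

    # /etc/hosts on both nodes
    172.16.100.2 fuzai01
    172.16.100.3 fuzai02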

1. Drbd installation:

  1. Install epel source (The same for 2 hosts)

    wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

    rpm -ivh epel-release-6-8.noarch.rpm

    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

    rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

    yum list

  2. Install DRBD

    yum -y install drbd84 kmod-drbd84

    modprobe drbd #load the drbd module; reboot the machine if this reports an error

    lsmod | grep drbd

    vim /etc/drbd.conf #view the main configuration file

    Modify the global configuration file (left essentially at its defaults): vi /etc/drbd.d/global_common.conf
    # DRBD is the result of over a decade of development by LINBIT.
    # In case you need professional services for DRBD or have
    # feature requests visit http://www.linbit.com

    global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
        # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
    }

    common {
        handlers {
            # These are EXAMPLE handlers only.
            # They may have severe implications,
            # like hard resetting the node under certain circumstances.
            # Be careful when choosing your poison.

            # pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
            # pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
            # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
            # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
            # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
            # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
            # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
            # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        options {
            # cpu-mask on-no-data-accessible
        }

        disk {
            # size on-io-error fencing disk-barrier disk-flushes
            # disk-drain md-flushes resync-rate resync-after al-extents
            # c-plan-ahead c-delay-target c-fill-target c-max-rate
            # c-min-rate disk-timeout
        }

        net {
            # protocol timeout max-epoch-size max-buffers unplug-watermark
            # connect-int ping-int sndbuf-size rcvbuf-size ko-count
            # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
            # after-sb-1pri after-sb-2pri always-asbp rr-conflict
            # ping-timeout data-integrity-alg tcp-cork on-congestion
            # congestion-fill congestion-extents csums-alg verify-alg
            # use-rle
            cram-hmac-alg "sha1";      #peer-authentication hash algorithm
            shared-secret "mydrbdlab"; #shared secret used for authentication
        }
    }

  3. Add resources

    cat /etc/drbd.d/web.res
    resource web {
        on fuzai01 {
            device /dev/drbd0;
            disk /dev/mapper/drbd-dbm;
            address 172.16.100.2:7789;
            meta-disk internal;
        }
        on fuzai02 {
            device /dev/drbd0;
            disk /dev/mapper/drbd-dbm;
            address 172.16.100.3:7789;
            meta-disk internal;
        }
    }

  4. Ensure the configuration files on the two machines are identical; a sketch of syncing and validating them follows.
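    A minimal sketch, assuming fuzai01 holds the edited copy:

    scp /etc/drbd.d/global_common.conf /etc/drbd.d/web.res fuzai02:/etc/drbd.d/
    drbdadm dump web #run on both nodes; a parse error here means a config problem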

  5. Initialize the resource on both nodes

  6. mknod /dev/drbd0 b 147 0 #create the DRBD device node

    drbdadm create-md web #initialize the metadata (run on both nodes)
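    With the metadata created, a sketch of bringing the resource up and checking it (run on both nodes):

    service drbd start
    cat /proc/drbd #both sides should report cs:Connected ro:Secondary/Secondary before either node is promoted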

  7. Set the primary node and synchronize the data: drbdadm -- --overwrite-data-of-peer primary all

    View the status: service drbd status

  8. Format the DRBD device: mkfs.ext4 /dev/drbd0 #run on the primary only, not on the secondary node

  9. Mount DRBD: mkdir /data #create the data directory, then mount /dev/drbd0 /data

  10. DRBD role switching. On the primary, stop the drbd service; if it will not stop, stop heartbeat first, or unmount with umount /data.

    On the secondary: drbdadm primary web #promote to Primary, then mount /dev/drbd0 /data/
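    Putting step 10 together, a minimal manual-failover sketch (assuming heartbeat is already stopped on the old primary):

    # on the current primary
    umount /data
    drbdadm secondary web #demote to Secondary

    # on the other node
    drbdadm primary web #promote to Primary
    mount /dev/drbd0 /data/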

2. Heartbeat installation:

  1. Install: yum -y install heartbeat

  2. Heartbeat configuration involves the following files:
    /etc/ha.d/ha.cf #main configuration file
    /etc/ha.d/haresources #resource file
    /etc/ha.d/authkeys #authentication
    /etc/ha.d/resource.d/killnfsd #nfs restart script, managed by Heartbeat

  3. cat /etc/ha.d/ha.cf
    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    logfacility local0
    keepalive 2
    deadtime 10
    warntime 6
    udpport 694
    ucast eth0 192.168.1.168
    auto_failback off
    node fuzai01
    node fuzai02
    ping 192.168.1.199
    respawn hacluster /usr/lib64/heartbeat/ipfail

    vi /etc/ha.d/haresources
    fuzai01 IPaddr::192.168.1.160/24/eth0 drbddisk::web Filesystem::/dev/drbd0::/data::ext4 killnfsd

    vi /etc/ha.d/resource.d/killnfsd #don't forget to make it executable: chmod 755 (a sketch of its contents follows)

    vi /etc/ha.d/authkeys
    auth 1
    1 crc
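    The body of killnfsd is not shown above; a minimal sketch of the script commonly used in this pattern (restart nfsd so the export of /data comes up cleanly after a failover):

    #!/bin/bash
    # /etc/ha.d/resource.d/killnfsd
    killall -9 nfsd
    /etc/init.d/nfs restart
    exit 0

    Heartbeat also requires authkeys to be readable only by root: chmod 600 /etc/ha.d/authkeys.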

4. Start the service and check the VIP (the start commands are sketched below):

ip a | grep eth0
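On CentOS 6 with sysvinit, the start commands would be (run on both nodes):

    service heartbeat start
    chkconfig heartbeat on

The VIP 192.168.1.160 should then appear on eth0 of the active node only.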

3. NFS installation

1. Install nfs

[root@M1 drbd]# yum install nfs-utils rpcbind -y
[root@M2 ~]# yum install nfs-utils rpcbind -y

2. Configure the nfs shared directory

[root@M1 drbd]# cat /etc/exports
/data 192.168.1.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)
[root@M2 ~]# cat /etc/exports
/data 192.168.1.0/24(rw,sync,no_root_squash,anonuid=0,anongid=0)

3. Start the rpcbind and nfs services

/etc/init.d/rpcbind start; chkconfig rpcbind on

/etc/init.d/nfs start; chkconfig nfs off (with auto_failback off, after a failover to the standby you switch back to the master manually once it recovers)
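To verify the share from a client on the 192.168.1.0/24 network (the mount point below is illustrative):

    showmount -e 192.168.1.160 #list exports via the VIP
    mkdir -p /mnt/nfs
    mount -t nfs 192.168.1.160:/data /mnt/nfs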

4. Simulation test: add a script to monitor the nfs service (a sketch follows the link below)

http://732233048.blog.51cto.com/9323668/1669417
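The details are in the linked post; as a minimal sketch of the usual pattern (the script name and scheduling are assumptions, not taken from the link), poll nfsd, try a restart, and stop heartbeat to force a failover if the restart fails:

    #!/bin/bash
    # monitor_nfs.sh -- hypothetical watchdog, run periodically (e.g. from cron) on the active node
    if ! pgrep nfsd > /dev/null; then
        /etc/init.d/nfs restart
        sleep 5
        if ! pgrep nfsd > /dev/null; then
            service heartbeat stop #the standby then takes over the VIP, DRBD disk and NFS
        fi
    fi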


The LVM commands from the preliminary preparation, creating the backing volume for DRBD:

[root@scj ~]# pvcreate /dev/sdb #create the pv
  Physical volume "/dev/sdb" successfully created
[root@scj ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a--  19.51g     0
  /dev/sdb            lvm2 a--  10.00g 10.00g
[root@scj ~]# vgcreate drbd /dev/sdb #create volume group drbd and add the pv to it
  Volume group "drbd" successfully created
[root@scj ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g     0
  drbd       1   0   0 wz--n- 10.00g 10.00g
[root@scj ~]# lvcreate -n dbm -L 9G drbd #create logical volume dbm in volume group drbd
  Logical volume "dbm" created
[root@scj ~]# lvs
  LV      VG       Attr      LSize  Pool Origin Data% Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 18.51g
  lv_swap VolGroup -wi-ao---  1.00g
  dbm     drbd     -wi-a----  9.00g
[root@scj ~]# ls /dev/drbd/dbm #view the created logical volume
/dev/drbd/dbm
