Heartbeat + DRBD + MFS High Availability

1. Environment

System: CentOS 6.4 x86_64, minimal installation

mfs-master 192.168.3.33

mfs-slave 192.168.3.34

vip 192.168.3.35

mfs-chun 192.168.3.36

mfs-client 192.168.3.37

2. Basic configuration

# Stop iptables and configure local name resolution in /etc/hosts
[root@mfs-master ~]# service iptables stop
[root@mfs-master ~]# vim /etc/hosts
[root@mfs-master ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.33 mfs-master
192.168.3.34 mfs-slave
192.168.3.36 mfs-chun
192.168.3.37 mfs-client

# Install the EPEL and ELRepo yum repositories
[root@mfs-master ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.sh3hsI: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@mfs-master ~]# sed -i 's@#b@b@g' /etc/yum.repos.d/epel.repo
[root@mfs-master ~]# sed -i 's@mirrorlist@#mirrorlist@g' /etc/yum.repos.d/epel.repo
[root@mfs-master ~]# rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

# Configure ntp time synchronization via cron
[root@mfs-master ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" > /var/spool/cron/root

# Configure ssh mutual trust (only mfs-master and mfs-slave need to trust each other)
# On mfs-master
[root@mfs-master ~]# ssh-keygen
[root@mfs-master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@mfs-slave
# On mfs-slave
[root@mfs-slave ~]# ssh-keygen
[root@mfs-slave ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@mfs-master
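A quick way to confirm that name resolution, key trust and time sync are all working before moving on (a sketch; the hostnames and NTP pool are the ones used above):

[root@mfs-master ~]# ssh mfs-slave hostname            # should print "mfs-slave" without asking for a password
[root@mfs-slave ~]# ssh mfs-master hostname            # should print "mfs-master" without asking for a password
[root@mfs-master ~]# /usr/sbin/ntpdate asia.pool.ntp.org   # one-off sync to confirm the NTP server is reachable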

3. Install and configure Heartbeat

(1). Perform the same installation on both mfs-master and mfs-slave

[root@mfs-master ~]# yum install heartbeat -y

(2). Configure ha.cf

[root@mfs-master ~]# egrep -v "^$|^#" /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local1
keepalive 2
deadtime 30
warntime 10
initdead 120
mcast eth0 225.0.10.1 694 1 0
auto_failback on
node mfs-master
node mfs-slave
crm no

(3). Configure authkeys

[root@mfs-master ~]# dd if=/dev/random bs=512 count=1 | openssl md5
0+1 records in
0+1 records out
21 bytes (21 B) copied, 5.0391e-05 s, 417 kB/s
(stdin)= c55529482f1c76dd8967ba41f5441ae1
[root@mfs-master ~]# grep -v ^# /etc/ha.d/authkeys
auth 1
1 md5 c55529482f1c76dd8967ba41f5441ae1
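Heartbeat refuses to start when /etc/ha.d/authkeys is group- or world-readable, so make sure its mode is 600 on both nodes:

[root@mfs-master ~]# chmod 600 /etc/ha.d/authkeys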

(4). Configure haresources

[root@mfs-master ~]# grep -v ^# /etc/ha.d/haresources
mfs-master IPaddr::192.168.3.35/24/eth0        # configure only one IP resource for now, for debugging
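For reference, each haresources line has the form "<preferred node> <resource>[::arg[::arg...]] ...": heartbeat looks for a matching script in /etc/ha.d/resource.d/ (or /etc/init.d/) and calls it with start/stop plus the arguments. A minimal sketch of how the line above is interpreted:

# /etc/ha.d/haresources
# mfs-master                    -> the node that normally owns the resources
# IPaddr::192.168.3.35/24/eth0  -> runs /etc/ha.d/resource.d/IPaddr 192.168.3.35/24/eth0 start|stop
mfs-master IPaddr::192.168.3.35/24/eth0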

(5). Start heartbeat

# Note: auto-start at boot should stay off; after a server reboot, start heartbeat manually
[root@mfs-master ~]# chkconfig heartbeat off
# Start the service
[root@mfs-master ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# Copy the configuration files to mfs-slave
[root@mfs-master ha.d]# scp authkeys ha.cf haresources mfs-slave:/etc/ha.d
# Start the service on mfs-slave
[root@mfs-slave ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# View the result
[root@mfs-master ha.d]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
# Check the IP information on the standby node; the VIP is not present there
[root@mfs-master ha.d]# ssh mfs-slave ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0

(6). Test heartbeat

Normal state

# IP information on mfs-master
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
# IP information on mfs-slave
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0

Simulate the state after the master node goes down

# Stop the heartbeat service on the master node mfs-master
[root@mfs-master ha.d]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# After heartbeat stops on the master node, the VIP resource is taken over by the standby node
[root@mfs-master ha.d]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
# Check the resources on the standby node mfs-slave
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0

Restore the heartbeat service of the master node

[root@mfs-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# After heartbeat on the master node comes back, it takes the VIP resource back
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
# On the standby node only its own address remains
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
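To watch a failover from the outside, a simple loop that pings the VIP once a second works well (a sketch; run it from any third machine, for example mfs-chun, while stopping and starting heartbeat):

[root@mfs-chun ~]# while true; do date +%T; ping -c1 -W1 192.168.3.35 >/dev/null && echo "VIP up" || echo "VIP down"; sleep 1; done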

4. Install and deploy DRBD

(1). Partition the hard disk; the operation is the same on mfs-master and mfs-slave

[root@mfs-master ~]# fdisk /dev/sdb
# Note: /dev/sdb is split into 2 partitions, /dev/sdb1 and /dev/sdb2, with /dev/sdb1 = 15G
[root@mfs-master ~]# partprobe /dev/sdb
# Format the data partition
[root@mfs-master ~]# mkfs.ext4 /dev/sdb1
# Note: sdb2 is the metadata partition and does not need to be formatted
[root@mfs-master ~]# tune2fs -c -1 /dev/sdb1
# Note: set the maximum mount count to -1, disabling the forced filesystem check based on mount count
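If you prefer not to drive fdisk interactively, the same layout can be scripted; this is only a sketch and assumes an empty /dev/sdb with sdb1 taking 15G and sdb2 the remainder:

[root@mfs-master ~]# fdisk /dev/sdb <<EOF
n
p
1

+15G
n
p
2


w
EOF
[root@mfs-master ~]# partprobe /dev/sdb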

(2). Install DRBD

Because the system is CentOS 6.4, the DRBD kernel module also has to be installed, and its version must match uname -r. We pulled the kernel packages from the system installation media (that step is omitted here). The installation is identical on mfs-master and mfs-slave; only the mfs-master steps are shown.

# Install the kernel development packages
[root@mfs-master ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
[root@mfs-master ~]# yum install drbd84 kmod-drbd84 -y

(3). Configure DRBD

a. Modify the global configuration file

[root@mfs-master ~]# egrep -v "^$|^#|^[[:space:]]+#" /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-md-flushes;
    }
    net {
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        cram-hmac-alg "sha1";
        shared-secret "2014";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 200M;
    }
}

b. Add the resource definition

[root@mfs-master ~]# cat /etc/drbd.d/mfsdata.res
resource mfsdata {
    on mfs-master {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.33:7789;
        meta-disk /dev/sdb2 [0];
    }
    on mfs-slave {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.34:7789;
        meta-disk /dev/sdb2 [0];
    }
}
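Before going any further it is worth letting drbdadm parse the files back to you; a syntax problem in either file shows up here:

[root@mfs-master ~]# drbdadm dump mfsdata        # echoes the parsed resource; errors here mean a typo in the .res or global config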

c. Copy the configuration files to mfs-slave, load the drbd module, and initialize the metadata

[root@mfs-master drbd.d]# scp global_common.conf mfsdata.res mfs-slave:/etc/drbd.d/
[root@mfs-master ~]# depmod
[root@mfs-master ~]# modprobe drbd
# Note: in my virtual machine the drbd module could only be loaded after a reboot; I am not sure why
[root@mfs-master ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               ...   1 drbd
# Initialize the metadata on mfs-master
[root@mfs-master ~]# drbdadm create-md mfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# Load the module and initialize the metadata on mfs-slave
[root@mfs-slave ~]# depmod
[root@mfs-slave ~]# modprobe drbd
[root@mfs-slave ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               ...   1 drbd
[root@mfs-slave ~]# drbdadm create-md mfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

d. Start drbd on mfs-master and mfs-slave

# On mfs-master
[root@mfs-master ~]# /etc/init.d/drbd start
# On mfs-slave
[root@mfs-slave ~]# /etc/init.d/drbd start
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Secondary/Secondary Inconsistent/Inconsistent
# Make mfs-master the primary node
[root@mfs-master ~]# drbdadm -- --overwrite-data-of-peer primary mfsdata
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  SyncSource Primary/Secondary UpToDate/Inconsistent
        [>....................] sync'ed:  1.4% (15160/15364)M
# Mount the DRBD device on /data and write a test file mfs-master.txt
[root@mfs-master ~]# mount /dev/drbd1 /data
[root@mfs-master ~]# touch /data/mfs-master.txt
[root@mfs-master ~]# ls /data/
lost+found  mfs-master.txt
# UpToDate/UpToDate in the status output means the data on both nodes is in sync
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 15G 38M 14G 1%
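The initial sync of the 15G partition takes a while; either of the following shows the progress (standard drbd 8.4 tooling, nothing specific to this setup):

[root@mfs-master ~]# watch -n1 cat /proc/drbd
[root@mfs-master ~]# drbd-overview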

e. Test DRBD

Normal state

[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 15G 38M 14G 1%
# Note: mfs-master is currently the primary node and mfs-slave is the secondary node

Simulate the state after the primary node goes down

[root@mfs-master ~]# umount /data
# Demote mfs-master to secondary
[root@mfs-master ~]# drbdadm secondary mfsdata
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Secondary/Secondary UpToDate/UpToDate
# Promote mfs-slave to primary
[root@mfs-slave ~]# drbdadm primary mfsdata
[root@mfs-slave ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate
[root@mfs-slave ~]# mount /dev/drbd1 /mnt
# Check the file; the test result is as expected
[root@mfs-slave ~]# ls /mnt
lost+found  mfs-master.txt
# Note: after the DRBD primary goes down, the standby node can be used normally once it is promoted to primary, and the data is consistent
# Finally, restore DRBD to its original state
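The restore step mentioned above is just the promotion done in reverse; a sketch, assuming the test left mfs-slave as primary with the device mounted on /mnt:

[root@mfs-slave ~]# umount /mnt
[root@mfs-slave ~]# drbdadm secondary mfsdata
[root@mfs-master ~]# drbdadm primary mfsdata
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate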

5. Install MFS on mfs-master

Because MFS will ride on DRBD for high availability, the DRBD device must be mounted on the /usr/local/mfs directory.

[root@mfs-master ~]# mkdir /usr/local/mfs
[root@mfs-master ~]# mount /dev/drbd1 /usr/local/mfs/
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.4G   16G   8% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   38M   14G   1% /usr/local/mfs

MFS installation on mfs-master

# Install MFS
[root@mfs-master ~]# yum install zlib-devel -y
[root@mfs-master ~]# groupadd -g 1000 mfs
[root@mfs-master ~]# useradd -u 1000 -g mfs -s /sbin/nologin mfs
[root@mfs-master ~]# wget http://moosefs.org/tl_files/mfscode/mfs-1.6.27-5.tar.gz
[root@mfs-master ~]# tar xf mfs-1.6.27-5.tar.gz
[root@mfs-master ~]# cd mfs-1.6.27
[root@mfs-master mfs-1.6.27]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[root@mfs-master mfs-1.6.27]# make && make install

# Configure the mfs-master configuration files
[root@mfs-master mfs]# cp mfsexports.cfg.dist mfsexports.cfg
[root@mfs-master mfs]# cp mfsmaster.cfg.dist mfsmaster.cfg
[root@mfs-master mfs]# egrep -v "^#|^$" /usr/local/mfs/etc/mfs/mfsexports.cfg
*    /    rw,alldirs,mapall=mfs:mfs,password=centos
*    .    rw
[root@mfs-master mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs

# Start mfs
[root@mfs-master mfs]# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... file not found
if it is not fresh installation then you have to restart all active mounts !!!
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

# View the result
[root@mfs-master mfs]# ps aux | grep mfs | grep -v grep
root       ...   0.0  0.0      0     0 ?        S    09:53   0:00 [drbd_r_mfsdata]
root      1349   0.0  0.0      0     0 ?        S    09:53   0:00 [drbd_a_mfsdata]
mfs       6990   0.3 18.8 108628 92996 ?       S<   10:15   0:00 /usr/local/mfs/sbin/mfsmaster start
[root@mfs-master mfs]# netstat -tunlp | grep mfs
tcp        0      0 0.0.0.0:9419      0.0.0.0:*      LISTEN      6990/mfsmaster
tcp        0      0 0.0.0.0:9420      0.0.0.0:*      LISTEN      6990/mfsmaster
tcp        0      0 0.0.0.0:9421      0.0.0.0:*      LISTEN      6990/mfsmaster

# Stop mfs
[root@mfs-master mfs]# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:6990)
waiting for termination ... terminated

# Finally, configure the environment variables
[root@mfs-master mfs]# echo '# add moosefs to the path variable' >> /etc/profile
[root@mfs-master mfs]# echo 'PATH=/usr/local/mfs/sbin/:$PATH' >> /etc/profile
[root@mfs-master mfs]# tail -2 /etc/profile
# add moosefs to the path variable
PATH=/usr/local/mfs/sbin/:$PATH
[root@mfs-master mfs]# source /etc/profile

# Copy the mfsmaster binary to /etc/init.d so heartbeat can start it
[root@mfs-master mfs]# cp /usr/local/mfs/sbin/mfsmaster /etc/init.d/

# Last step: unmount the filesystem
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /usr/local/mfs ext4 15G 41M 14G 1%
[root@mfs-master ~]# umount /usr/local/mfs/
# Because mfs will be started by heartbeat, the DRBD device should also be mounted by heartbeat
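Before handing things over to heartbeat, it is worth checking once that the copy in /etc/init.d responds to start and stop while the DRBD device is temporarily mounted, since that is exactly what heartbeat will invoke later (a sketch of the check, ending back in the unmounted state):

[root@mfs-master ~]# mount /dev/drbd1 /usr/local/mfs
[root@mfs-master ~]# /etc/init.d/mfsmaster start
[root@mfs-master ~]# /etc/init.d/mfsmaster stop
[root@mfs-master ~]# umount /usr/local/mfs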

6. Configure mfs-slave

Because mfs-slave shares the data through DRBD, only a few simple steps are needed before it is able to start mfs.

# Take the DRBD device offline temporarily
[root@mfs-slave ~]# drbdadm down mfsdata
# Mount the /dev/sdb1 partition on /usr/local/mfs
[root@mfs-slave ~]# mkdir -p /usr/local/mfs
[root@mfs-slave ~]# mount /dev/sdb1 /usr/local/mfs
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/sdb1              15G   41M   14G   1% /usr/local/mfs

# Install the basic software
[root@mfs-slave ~]# yum install zlib-devel -y
# Create the user and group
[root@mfs-slave ~]# groupadd -g 1000 mfs
[root@mfs-slave ~]# useradd -u 1000 -g mfs -s /sbin/nologin mfs
# Copy the startup file to /etc/init.d
[root@mfs-slave ~]# cp /usr/local/mfs/sbin/mfsmaster /etc/init.d/

# Test the startup script
[root@mfs-slave ~]# /etc/init.d/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
loading chunks data ... ok
checking filesystem consistency ... ok
connecting files and chunks ... ok
all inodes: 1
directory inodes: 1
file inodes: 0
chunks: 0
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
[root@mfs-slave ~]# ps aux | grep mfsmaster
mfs       2409  0.5 18.8  ...       ?        S<   10:28   0:00 /etc/init.d/mfsmaster start
root      2411  0.0  0.1 103248  876 pts/0   S+   10:28   0:00 grep mfsmaster
[root@mfs-slave ~]# netsat -tunlp | grep mfs
-bash: netsat: command not found
[root@mfs-slave ~]# netstat -tunlp | grep mfs
tcp        0      0 0.0.0.0:9419      0.0.0.0:*      LISTEN      2409/mfsmaster
tcp        0      0 0.0.0.0:9420      0.0.0.0:*      LISTEN      2409/mfsmaster
tcp        0      0 0.0.0.0:9421      0.0.0.0:*      LISTEN      2409/mfsmaster
# Stop the service
[root@mfs-slave ~]# /etc/init.d/mfsmaster stop
sending SIGTERM to lock owner (pid:2409)
waiting for termination ... terminated

# Configure the environment variables
[root@mfs-slave ~]# echo '# add moosefs to the path variable' >> /etc/profile
[root@mfs-slave ~]# echo 'PATH=/usr/local/mfs/sbin/:$PATH' >> /etc/profile
[root@mfs-slave ~]# tail -2 /etc/profile
# add moosefs to the path variable
PATH=/usr/local/mfs/sbin/:$PATH
[root@mfs-slave ~]# source /etc/profile

# Finally, unmount the filesystem and bring DRBD back online
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/sdb1              15G   41M   14G   1% /usr/local/mfs
[root@mfs-slave ~]# umount /usr/local/mfs
[root@mfs-slave ~]# drbdadm up mfsdata
[root@mfs-slave ~]# drbd-overview
  1:mfsdata/0  Connected Secondary/Primary UpToDate/UpToDate

7. Configure Heartbeat to manage the related services

# In the previous section heartbeat already managed the VIP resource; now add the DRBD resources as well
# Modify /etc/ha.d/haresources
[root@mfs-master ~]# egrep -v "^#|^$" /etc/ha.d/haresources
mfs-master IPaddr::192.168.3.35/24/eth0 drbddisk::mfsdata Filesystem::/dev/drbd1::/usr/local/mfs::ext4
[root@mfs-master ~]# scp /etc/ha.d/haresources mfs-slave:/etc/ha.d/haresources
# Restart the heartbeat service on mfs-master and mfs-slave, then check the result
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
# Check mfs-slave
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
# Stop the heartbeat service on mfs-master; mfs-slave takes over the resources normally
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
# Then start the heartbeat service on mfs-master again
[root@mfs-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# It takes the resources back normally
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs

# Finally, put the mfs service itself under heartbeat's management
[root@mfs-master ~]# egrep -v "^$|^#" /etc/ha.d/haresources
mfs-master IPaddr::192.168.3.35/24/eth0 drbddisk::mfsdata Filesystem::/dev/drbd1::/usr/local/mfs::ext4 mfsmaster
[root@mfs-master ~]# scp /etc/ha.d/haresources mfs-slave:/etc/ha.d/haresources
# Restart the heartbeat service on mfs-master
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
[root@mfs-master ~]# netstat -tunlp | grep mfs
tcp        0      0 *:9419      *:*      LISTEN      12888/mfsmaster
tcp        0      0 *:9420      *:*      LISTEN      12888/mfsmaster
tcp        0      0 *:9421      *:*      LISTEN      12888/mfsmaster
# Stop the heartbeat service on mfs-master
[root@mfs-master ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# Check mfs-slave
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
[root@mfs-slave ~]# netstat -tunlp | grep mfs
tcp        0      0 0.0.0.0:9419      0.0.0.0:*      LISTEN      8664/mfsmaster
tcp        0      0 0.0.0.0:9420      0.0.0.0:*      LISTEN      8664/mfsmaster
tcp        0      0 0.0.0.0:9421      0.0.0.0:*      LISTEN      8664/mfsmaster
# Start the heartbeat service on mfs-master once more
[root@mfs-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
[root@mfs-master ~]# netstat -tunlp | grep mfs
tcp        0      0 0.0.0.0:9419      0.0.0.0:*      LISTEN      13845/mfsmaster
tcp        0      0 0.0.0.0:9420      0.0.0.0:*      LISTEN      13845/mfsmaster
tcp        0      0 0.0.0.0:9421      0.0.0.0:*      LISTEN      13845/mfsmaster
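A note on ordering: with haresources, heartbeat starts the resources left to right on a takeover and stops them right to left on a release, so the final line deliberately brings up the VIP, then the DRBD disk, then the filesystem, and only then mfsmaster:

mfs-master IPaddr::192.168.3.35/24/eth0 drbddisk::mfsdata Filesystem::/dev/drbd1::/usr/local/mfs::ext4 mfsmaster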

8. Install mfs-chun (the whole environment runs in VMware, so only one chunkserver is used)

[root@mfs-chun ~]# yum install zlib-devel gcc gcc-c++ make -y
[root@mfs-chun ~]# groupadd -g 1000 mfs
[root@mfs-chun ~]# useradd -u 1000 -g mfs mfs -s /sbin/nologin
[root@mfs-chun ~]# tar xf mfs-1.6.27-5.tar.gz
[root@mfs-chun ~]# cd mfs-1.6.27
[root@mfs-chun mfs-1.6.27]# ./configure --prefix=/usr/local/mfs-1.6.27 --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[root@mfs-chun mfs-1.6.27]# make && make install
[root@mfs-chun mfs-1.6.27]# ln -s /usr/local/mfs-1.6.27 /usr/local/mfs
[root@mfs-chun mfs-1.6.27]# ll -d /usr/local/mfs
lrwxrwxrwx 1 root root 21 Jul 23 09:49 /usr/local/mfs -> /usr/local/mfs-1.6.27

Edit the mfs-chun configuration files

[root@mfs-chun mfs-1.6.27]# cd /usr/local/mfs/etc/mfs/
[root@mfs-chun mfs]# cp mfschunkserver.cfg.dist mfschunkserver.cfg
[root@mfs-chun mfs]# cp mfshdd.cfg.dist mfshdd.cfg
[root@mfs-chun mfs]# egrep -v "^#|^$" mfschunkserver.cfg
MASTER_HOST = 192.168.3.35        # this is the VIP address from the architecture above

# Configure mfshdd.cfg
# Format /dev/sdb and mount it on /mfsdata
[root@mfs-chun mfs]# fdisk /dev/sdb
[root@mfs-chun mfs]# mkfs.ext4 /dev/sdb
[root@mfs-chun mfs]# mkdir /mfsdata
[root@mfs-chun mfs]# chown -R mfs:mfs /mfsdata        # the mfs user needs write permission
[root@mfs-chun mfs]# mount /dev/sdb /mfsdata
[root@mfs-chun mfs]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.5G   16G   9% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             194M   28M  156M  16% /boot
/dev/sdb               99G  188M   94G   1% /mfsdata
# Modify mfshdd.cfg and hand /mfsdata over to mfs-master
[root@mfs-chun mfs]# vim /usr/local/mfs/etc/mfs/mfshdd.cfg
/mfsdata

# Start the mfs-chun service
# First test whether the mfs-master service is reachable
[root@mfs-chun mfs]# nc -w 2 192.168.3.35 -z 9420
Connection to 192.168.3.35 9420 port [tcp/*] succeeded!
[root@mfs-chun mfs]# /usr/local/mfs/sbin/mfschunkserver start
working directory: /usr/local/mfs-1.6.27/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mfsdata/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
[root@mfs-chun mfs]# netstat -lanpt | grep 9420
tcp        0      0 192.168.3.36:51814          192.168.3.35:9420           ESTABLISHED 5948/mfschunkserver

# Check the mfs-master log
[root@mfs-master ~]# tail /var/log/messages
Jul 23 09:45:41 mfs-master mfsmaster[2147]: open files limit: 5000
Jul 23 09:45:41 mfs-master heartbeat: [1674]: info: local HA resource acquisition completed (standby).
Jul 23 09:45:41 mfs-master heartbeat: [1647]: info: Standby resource acquisition done [foreign].
Jul 23 09:45:41 mfs-master heartbeat: [1647]: info: Initial resource acquisition complete (auto_failback)
Jul 23 09:45:41 mfs-master heartbeat: [1647]: info: remote resource transition completed.
Jul 23 09:58:30 mfs-master mfsmaster[2147]: connection with CS(192.168.3.36) has been closed by peer
Jul 23 09:58:30 mfs-master mfsmaster[2147]: chunkserver disconnected - ip: 192.168.3.36, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
Jul 23 09:59:48 mfs-master mfsmaster[2147]: chunkserver register begin (packet version: 5) - ip: 192.168.3.36, port: 9422
Jul 23 09:59:48 mfs-master mfsmaster[2147]: chunkserver register end (packet version: 5) - ip: 192.168.3.36, port: 9422, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
Jul 23 10:00:00 mfs-master mfsmaster[2147]: no meta loggers connected !!!

# Test: stop the heartbeat service on mfs-master
[root@mfs-master ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# Check the connections on mfs-slave
[root@mfs-slave ~]# netstat -anpt | grep mfs
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      3195/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      3195/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      3195/mfsmaster
tcp        0      0 192.168.3.35:9420           192.168.3.36:51815          ESTABLISHED 3195/mfsmaster
# mfs-chun stays connected the whole time
# Start the heartbeat service on mfs-master again
[root@mfs-master ~]# netstat -anpt | grep mfs
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      3123/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      3123/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      3123/mfsmaster
tcp        0      0 192.168.3.35:9420           192.168.3.36:51817          ESTABLISHED 3123/mfsmaster

# Finally, configure the environment variables
[root@mfs-chun ~]# echo '# add moosefs to the path variable' >> /etc/profile
[root@mfs-chun ~]# echo 'PATH=/usr/local/mfs/sbin/:$PATH' >> /etc/profile
[root@mfs-chun ~]# tail -2 /etc/profile
# add moosefs to the path variable
PATH=/usr/local/mfs/sbin/:$PATH
[root@mfs-chun ~]# source /etc/profile
# Configure startup at boot
[root@mfs-chun ~]# echo '# Configure the metalogger service startup' >> /etc/rc.local
[root@mfs-chun ~]# echo '/usr/local/mfs/sbin/mfsmetalogger start' >> /etc/rc.local
[root@mfs-chun ~]# tail -2 /etc/rc.local
# Configure the metalogger service startup
/usr/local/mfs/sbin/mfsmetalogger start
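One thing the steps above leave out is keeping the chunkserver data mount across reboots; if that is wanted, an fstab entry along these lines (an assumption, not part of the original setup) plus starting the chunkserver itself from rc.local would cover it:

[root@mfs-chun ~]# echo '/dev/sdb    /mfsdata    ext4    defaults    0 0' >> /etc/fstab
[root@mfs-chun ~]# echo '/usr/local/mfs/sbin/mfschunkserver start' >> /etc/rc.local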

9. Install mfs-client

The MFS client needs fuse installed in order to use the MFS filesystem.

[root@mfs-client ~]# lsmod | grep fuse        # no output: the fuse module is not loaded yet
[root@mfs-client ~]# wget http://jaist.dl.sourceforge.net/project/fuse/fuse-2.X/2.9.3/fuse-2.9.3.tar.gz
[root@mfs-client ~]# tar xf fuse-2.9.3.tar.gz
[root@mfs-client ~]# cd fuse-2.9.3
[root@mfs-client fuse-2.9.3]# ./configure
[root@mfs-client fuse-2.9.3]# make && make install
# Adjust the environment variables
[root@mfs-client ~]# echo 'export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH' >> /etc/profile
[root@mfs-client ~]# tail -1 /etc/profile
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
# Load the fuse module into the kernel and make it load at boot
[root@mfs-client ~]# lsmod | grep fuse
[root@mfs-client ~]# modprobe fuse
[root@mfs-client ~]# lsmod | grep fuse
fuse                   69253  0
[root@mfs-client ~]# echo 'modprobe fuse' >> /etc/sysconfig/modules/fuse.modules
[root@mfs-client ~]# cat /etc/sysconfig/modules/fuse.modules
modprobe fuse
[root@mfs-client ~]# chmod 755 /etc/sysconfig/modules/fuse.modules
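A quick way to confirm that the freshly built fuse will be picked up when mfsmount is compiled (assumes the PKG_CONFIG_PATH export above is active in the current shell):

[root@mfs-client ~]# source /etc/profile
[root@mfs-client ~]# pkg-config --modversion fuse        # should print 2.9.3
[root@mfs-client ~]# lsmod | grep fuse                   # fuse module is loaded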

The MFS client also needs mfsmount installed.

[root@mfs-client ~]# yum install zlib-devel fuse-devel -y
[root@mfs-client ~]# groupadd -g 1000 mfs
[root@mfs-client ~]# useradd -u 1000 -g mfs mfs -s /sbin/nologin
[root@mfs-client ~]# tar xf mfs-1.6.27-5.tar.gz
[root@mfs-client ~]# cd mfs-1.6.27
[root@mfs-client mfs-1.6.27]# ./configure --prefix=/usr/local/mfs-1.6.27 --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[root@mfs-client mfs-1.6.27]# make && make install
[root@mfs-client mfs-1.6.27]# ln -s /usr/local/mfs-1.6.27 /usr/local/mfs
[root@mfs-client mfs-1.6.27]# ll -d /usr/local/mfs
lrwxrwxrwx 1 root root 21 Jul 23 10:15 /usr/local/mfs -> /usr/local/mfs-1.6.27
[root@mfs-client mfs-1.6.27]# ll /usr/local/mfs/
total 16
drwxr-xr-x 2 root root 4096 Jul 23 10:15 bin
drwxr-xr-x 3 root root 4096 Jul 23 10:15 etc
drwxr-xr-x 2 root root 4096 Jul 23 10:15 sbin
drwxr-xr-x 4 root root 4096 Jul 23 10:15 share

# Create the data directory mount point
[root@mfs-client mfs-1.6.27]# mkdir /mfsdata
[root@mfs-client mfs-1.6.27]# chown -R mfs.mfs /mfsdata
[root@mfs-client mfs-1.6.27]# ll -d /mfsdata
drwxr-xr-x 2 mfs mfs 4096 Jul 23 10:16 /mfsdata

# Before mounting
[root@mfs-client mfs-1.6.27]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.4G   16G   8% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             194M   28M  156M  16% /boot

# Mount the mfs filesystem
# Note: the password is the one set in the mfs-master configuration above
[root@mfs-client mfs-1.6.27]# /usr/local/mfs/bin/mfsmount /mfsdata/ -H 192.168.3.35 -o mfspassword=centos
mfsmaster accepted connection with parameters: read-write,restricted_ip,map_all ; root mapped to mfs:mfs ; users mapped to root:root
[root@mfs-client mfs-1.6.27]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.4G   16G   8% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             194M   28M  156M  16% /boot
192.168.3.35:9421      93G     0   93G   0% /mfsdata        # the mfs filesystem is now mounted

# Configure the environment variables
[root@mfs-client ~]# echo '# add moosefs to the path variable' >> /etc/profile
[root@mfs-client ~]# echo 'PATH=/usr/local/mfs/bin/:$PATH' >> /etc/profile
[root@mfs-client ~]# tail -2 /etc/profile
# add moosefs to the path variable
PATH=/usr/local/mfs/bin/:$PATH
[root@mfs-client ~]# source /etc/profile

# Mount automatically at boot (the target address is the VIP 192.168.3.35)
[root@mfs-client ~]# echo '# Moosefs boot automatically mount' >> /etc/rc.local
[root@mfs-client ~]# echo '/usr/local/mfs/bin/mfsmount /mfsdata/ -H 192.168.3.35 -o mfspassword=centos' >> /etc/rc.local
[root@mfs-client ~]# tail -2 /etc/rc.local
# Moosefs boot automatically mount
/usr/local/mfs/bin/mfsmount /mfsdata/ -H 192.168.3.35 -o mfspassword=centos

Final test: after mfs-master goes down, can mfs-client still read and write the MFS filesystem normally?

# Write a test file mfs-client1.txt into the mfs filesystem
[root@mfs-client ~]# touch /mfsdata/mfs-client1.txt
[root@mfs-client ~]# ll /mfsdata/
total 0
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:22 mfs-client1.txt
# Stop the heartbeat service on mfs-master
[root@mfs-master ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# Check that mfs-slave has taken over the resources
[root@mfs-slave ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.5G   16G   9% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
[root@mfs-slave ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-slave ~]# netstat -lanpt | grep mfs
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      4307/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      4307/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      4307/mfsmaster
tcp        0      0 192.168.3.35:9420           192.168.3.36:51820          ESTABLISHED 4307/mfsmaster
tcp        0      0 192.168.3.35:9421           192.168.3.37:44509          ESTABLISHED 4307/mfsmaster
# Query the mfs filesystem from the client
[root@mfs-client ~]# ll /mfsdata/
total 0
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:22 mfs-client1.txt    # reads work normally
# Create another test file
[root@mfs-client ~]# touch /mfsdata/mfs-client2.txt
[root@mfs-client ~]# ll /mfsdata/
total 0
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:22 mfs-client1.txt
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:24 mfs-client2.txt
# Start the heartbeat service on mfs-master again
[root@mfs-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@mfs-master ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              18G  1.6G   16G  10% /
tmpfs                 242M     0  242M   0% /dev/shm
/dev/sda1             190M   48M  132M  27% /boot
/dev/drbd1             15G   41M   14G   1% /usr/local/mfs
[root@mfs-master ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-master ~]# netstat -anpt | grep mfs
tcp        0      0 0.0.0.0:9419                0.0.0.0:*                   LISTEN      4087/mfsmaster
tcp        0      0 0.0.0.0:9420                0.0.0.0:*                   LISTEN      4087/mfsmaster
tcp        0      0 0.0.0.0:9421                0.0.0.0:*                   LISTEN      4087/mfsmaster
tcp        0      0 192.168.3.35:9420           192.168.3.36:51822          ESTABLISHED 4087/mfsmaster
tcp        0      0 192.168.3.35:9421           192.168.3.37:44524          ESTABLISHED 4087/mfsmaster
# Check the mfs filesystem on mfs-client once more and write another test file
[root@mfs-client ~]# ll /mfsdata/
total 0
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:22 mfs-client1.txt
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:24 mfs-client2.txt
[root@mfs-client ~]# touch /mfsdata/mfs-client3.txt
[root@mfs-client ~]# ll /mfsdata/
total 0
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:22 mfs-client1.txt
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:24 mfs-client2.txt
-rw-r--r-- 1 mfs mfs 0 Jul 23 10:26 mfs-client3.txt
# The whole process above worked normally, with no noticeable interruption

At this point the heartbeat + DRBD + MFS high-availability setup is basically complete.
