heartbeat+httpd HA installation preparations
Description
1. Server information
First, verify the basic functions of HA through heartbeat: two node servers provide the httpd service. Verify that heartbeat can be configured correctly, start the service, and switch between the two nodes so that service is provided normally.
Server configuration environment description
Server 1: ha1.dawu.com (ha1), eth0 192.168.1.101
Server 2: ha2.dawu.com (ha2), eth0 192.168.1.102
VIP: 192.168.1.10
This verification is relatively simple: there is no dedicated heartbeat link, and communication goes through the eth0 network card.
2. Host name
Content to modify on the ha1 host
[root@ha1 ~]# vim /etc/hosts
192.168.1.101 ha1.dawu.com ha1
192.168.1.102 ha2.dawu.com ha2
[root@ha1 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha1.dawu.com
GATEWAY=192.168.1.1
[root@ha1 ~]# uname -n
ha1.dawu.com
Content to modify on the ha2 host
[root@ha2 ~]# vim /etc/hosts
192.168.1.101 ha1.dawu.com ha1
192.168.1.102 ha2.dawu.com ha2
[root@ha2 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha2.dawu.com
GATEWAY=192.168.1.1
[root@ha2 ~]# uname -n
ha2.dawu.com
Verify the host name
Ping ha2.dawu.com from the ha1 node:
[root@ha1 ~]# ping ha2.dawu.com
PING ha2.dawu.com (192.168.1.102) 56(84) bytes of data.
64 bytes from ha2.dawu.com (192.168.1.102): icmp_seq=1 ttl=64 time=1.66 ms
Ping ha1.dawu.com from the ha2 node:
[root@ha2 html]# ping ha1.dawu.com
PING ha1.dawu.com (192.168.1.101) 56(84) bytes of data.
64 bytes from ha1.dawu.com (192.168.1.101): icmp_seq=1 ttl=64 time=0.171 ms
Time server synchronization settings
Two servers do the same settings
[root@ha2 html]# crontab -e
*/5 * * * * /usr/sbin/ntpdate ntp.ubuntu.com &> /dev/null
SSH key authentication as a mutual trust mechanism
[root@ha1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''   # generate the key pair
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e6:b4:3b:4b:a9:47:fb:a3:8d:37:ea:e5:eb:18:ae:f1 [email protected]
(randomart image omitted)
[root@ha1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]   # copy the public key to the ha2 node
The authenticity of host 'ha2.dawu.com (192.168.1.102)' can't be established.
RSA key fingerprint is 81:64:f4:af:ae:fc:6a:d1:1c:a2:e4:49:4e:be:d5:7f.
Are you sure you want to continue connecting (yes/no)? yes   # type yes and press Enter
Warning: Permanently added 'ha2.dawu.com,192.168.1.102' (RSA) to the list of known hosts.
[email protected]'s password:   # enter the password of node ha2
Now try logging into the machine, with "ssh '[email protected]'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
Do the same on the ha2 node server:
[root@ha2 html]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b9:88:f9:99:c4:ba:c1:9a:b1:3e:ab:17:94:ee:4a:96 [email protected]
(randomart image omitted)
[root@ha2 html]# ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
The authenticity of host 'ha1.dawu.com (192.168.1.101)' can't be established.
RSA key fingerprint is fb:0a:c8:27:ff:c1:62:de:d9:45:0f:08:9a:75:d9:58.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ha1.dawu.com,192.168.1.101' (RSA) to the list of known hosts.
[email protected]'s password:
Now try logging into the machine, with "ssh '[email protected]'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
At this point, the SSH mutual trust mechanism has been set up between both nodes.
Check whether the server time of both nodes is synchronized through ssh mutual trust.
[root@ha1 ~]# date; ssh ha2 'date'
Thu Oct 22 11:41:13 CST 2015
Thu Oct 22 11:41:13 CST 2015
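Beyond eyeballing the two date lines, the drift check can be scripted. A minimal sketch, not from the original setup: the `clock_drift_ok` helper and the 2-second threshold are assumptions.

```shell
#!/bin/sh
# Sketch: flag clock drift between two nodes given their epoch timestamps.
clock_drift_ok() {
    t1=$1; t2=$2; max=$3
    diff=$((t1 - t2))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    [ "$diff" -le "$max" ]
}

# On the cluster this might be used as (hypothetical):
#   clock_drift_ok "$(date +%s)" "$(ssh ha2 'date +%s')" 2 || echo "clocks differ"
clock_drift_ok 1445485273 1445485273 2 && echo "clocks in sync"
```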
The time of the node servers on both sides has been synchronized.
The two nodes provide httpd services respectively
The content of the httpd web page on the ha1 node is ha1.dawu.com; verify that the page is accessed normally. The content of the httpd web page on the ha2 node is ha2.dawu.com; verify that the page is accessed normally.
When switching HA roles, check whether the roles can be switched normally.
Turn off the httpd service and disable it from starting at boot, because the httpd service will be controlled by the heartbeat service.
[root@ha2 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@ha2 ~]# chkconfig httpd off
Do the same configuration for the two server nodes and turn off the httpd service.
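With SSH trust in place, the stop-and-disable step for both nodes can be scripted. A dry-run sketch: the `disable_httpd_cmds` helper is hypothetical and only prints the commands rather than running them.

```shell
#!/bin/sh
# Sketch: emit the commands that stop httpd and disable it at boot on each node,
# so that heartbeat alone controls the service. Printing only (dry run).
disable_httpd_cmds() {
    for node in "$@"; do
        echo "ssh $node 'service httpd stop; chkconfig httpd off'"
    done
}

disable_httpd_cmds ha1 ha2
```

To actually run them, the echoed lines could be piped to `sh` once the output looks right.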
Preparations have been completed.
heartbeat HA high availability cluster installation
1. Install the required dependency packages
yum install PyXML net-snmp-libs perl-MailTools gettext gnutls libtool-ltdl
2. Install libnet
libnet is in the EPEL repository. Download the EPEL libnet package; after it is installed, the dependency error "libnet.so.1()(64bit) is needed by heartbeat-2.1.4-12.el6.x86_64" no longer appears.
wget https://dl.fedoraproject.org/pub/epel/6/x86_64/libnet-1.1.6-7.el6.x86_64.rpm
Required installation package
[root@ha2 heartbeat2]# ls
heartbeat-2.1.4-12.el6.x86_64.rpm
heartbeat-debuginfo-2.1.4-12.el6.x86_64.rpm
heartbeat-devel-2.1.4-12.el6.x86_64.rpm
heartbeat-gui-2.1.4-12.el6.x86_64.rpm
heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
heartbeat-pils-2.1.4-12.el6.x86_64.rpm
heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
libnet-1.1.6-7.el6.x86_64.rpm
3. Install heartbeat
The installation is very simple; as long as the dependencies are resolved, there will be no problems.
[root@ha2 ~]# rpm -ivh libnet-1.1.6-7.el6.x86_64.rpm
[root@ha2 ~]# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm
Do the same installation configuration for both node servers.
4. Heartbeat configuration
Three configuration files are required: authkeys, haresources, and ha.cf. They must reside in the /etc/ha.d directory, but they are not there by default; they need to be copied from the /usr/share/doc/heartbeat-2.1.4/ directory.
# Copy these three files to the /etc/ha.d directory
[root@ha1 ~]# cp /usr/share/doc/heartbeat-2.1.4/{authkeys,haresources,ha.cf} /etc/ha.d/
[root@ha1 ~]# ls /etc/ha.d/
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
4.1 Configure the authkeys file: the authentication mechanism and password used for HMAC message signing (the permission of this file must be 600, otherwise heartbeat cannot start)
[root@ha1 ~]# chmod 600 /etc/ha.d/authkeys
[root@ha1 ~]# ll /etc/ha.d/authkeys
-rw-------. 1 root root 645 Oct 22 14:03 /etc/ha.d/authkeys
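The 600-permission requirement can be checked from a script before starting heartbeat. A sketch: the `mode_is_600` helper is an assumption; a temp file stands in here, while on a node the argument would be /etc/ha.d/authkeys.

```shell
#!/bin/sh
# Sketch: verify that a file's mode is exactly 600, as heartbeat requires for authkeys.
mode_is_600() {
    [ "$(stat -c %a "$1")" = "600" ]
}

f=$(mktemp)
chmod 600 "$f"
mode_is_600 "$f" && echo "authkeys permission OK"
rm -f "$f"
```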
4.2 Configure the key used for authkeys authentication
Method 1:
[root@ha1 heartbeat2]# openssl rand -hex 8 >> /etc/ha.d/authkeys   # append the generated key to the authkeys file
Method 2:
[root@ha1 ~]# dd if=/dev/random count=1 bs=512 | md5sum
0+1 records in
0+1 records out
63 bytes (63 B) copied, 0.000106155 s, 593 kB/s
f6fbf1ae8d574e2f915d4a3bd9783f74
[root@ha1 ~]# vim /etc/ha.d/authkeys   # modify to the following configuration, using sha1 for authentication
auth 2
#1 crc
#2 sha1 HI!
#3 md5 Hello!
2 sha1 d1856d8377ad7bdb

4.3 Set the core configuration file ha.cf
[root@ha1 ~]# grep -v '#' /etc/ha.d/ha.cf
logfile /var/log/ha-log
keepalive 1
deadtime 30
warntime 10
udpport 694
mcast eth0 225.0.1.1 694 1 0
auto_failback on
node ha1.dawu.com
node ha2.dawu.com
ping 192.168.1.100
compression bz2
compression_threshold 2

Heartbeat parameter description
logfile /var/log/ha-log
Specifies where heartbeat's ha-log is stored. There are two ways to record heartbeat logs: a dedicated ha-log file, or logging through the rsyslog.conf configuration. Heartbeat uses rsyslog by default; here logging to /var/log/ha-log is enabled for easy querying.

keepalive 1
Specifies the heartbeat interval, here 1 second: a heartbeat is sent on the heartbeat interface at this interval.

deadtime 30
Specifies that the standby node immediately takes over the master node's service resources after receiving no heartbeat from the master for 30 seconds.

warntime 10
Specifies the heartbeat warning delay as 10 seconds. If the standby node receives no heartbeat from the master within 10 seconds, a warning is written to the log, but the service is not switched yet.

udpport 694
Sets the port used for heartbeat communication; 694 is the default port number.

auto_failback on
Defines whether the service switches back automatically when the master node recovers. Normally the master node holds the resources and runs all services; on failure the resources move to the standby node, which takes over the services. With this option set to on, once the master node recovers it automatically reclaims the resources from the standby node. Set to off, the recovered master becomes the standby and the original standby becomes the master.

mcast eth0 225.0.1.1 694 1 0
UDP multicast on interface eth0 to carry the heartbeat, generally used when there is more than one standby node. bcast, ucast and mcast stand for broadcast, unicast and multicast respectively; they are three ways to carry heartbeats, and any one of them can be chosen.

node ha1.dawu.com
Host name of the master node, which can be obtained with the command "uname -n".

node ha2.dawu.com
Host name of the standby node.

ping 192.168.1.100
Selects the ping node. The better the ping node choice, the stronger the HA cluster. A fixed router can be chosen as the ping node, but preferably not a member of the cluster. The ping node is only used to test network connectivity.

compression bz2
Compresses the transmitted data; optional.

compression_threshold 2
Means data smaller than 2 KB will not be compressed.
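The timer parameters can be sanity-checked together. A sketch based on common rules of thumb, which are assumptions rather than statements from this article: warntime should be below deadtime, and deadtime should be at least twice keepalive. The `timers_ok` helper is hypothetical.

```shell
#!/bin/sh
# Sketch: basic sanity rules for ha.cf timers (all values in seconds).
timers_ok() {
    keepalive=$1; warntime=$2; deadtime=$3
    [ "$warntime" -lt "$deadtime" ] && [ "$deadtime" -ge $((2 * keepalive)) ]
}

# The values from the ha.cf above: keepalive 1, warntime 10, deadtime 30.
timers_ok 1 10 30 && echo "timer settings look sane"
```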
Modify the haresources file
[root@ha1 ~]# grep -v "#" /etc/ha.d/haresources
ha1.dawu.com 192.168.1.10/24/eth0 httpd
Description:
ha1.dawu.com: the cluster node.
192.168.1.10/24/eth0: the cluster VIP/netmask/network card.
httpd: defines the service resource httpd. The VIP 192.168.1.10 prefers the httpd service on the ha1.dawu.com node. If that node goes offline, the service transfers to the second node defined in ha.cf; when ha1.dawu.com comes back online, it retakes the cluster service resources.
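The haresources line described above has three whitespace-separated fields, which plain shell parameter expansion can pull apart. A sketch; the `parse_haresources` helper is hypothetical.

```shell
#!/bin/sh
# Sketch: split a haresources line into preferred node, VIP spec, and resource list.
parse_haresources() {
    line=$1
    node=${line%% *}        # first field: preferred node
    rest=${line#* }
    vip=${rest%% *}         # second field: VIP/netmask/interface
    resources=${rest#* }    # remainder: resource agents
    echo "node=$node vip=$vip resources=$resources"
}

parse_haresources "ha1.dawu.com 192.168.1.10/24/eth0 httpd"
# → node=ha1.dawu.com vip=192.168.1.10/24/eth0 resources=httpd
```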
The above are all the configuration operations on the ha1 node. Now copy the three modified files to the ha2 node server.
[root@ha1 ha.d]# scp -p ha.cf haresources authkeys ha2:/etc/ha.d/
ha.cf          100%   10KB  10.3KB/s   00:00
haresources    100% 5946     5.8KB/s   00:00
authkeys       100%  668     0.7KB/s   00:00
After the copy is completed, all configuration work is basically done.
Start the heartbeat service
[root@ha1 ha.d]# service heartbeat start   # start the heartbeat service
Starting High-Availability services:
2015/10/22_14:28:22 INFO:  Resource is stopped
Done.
Check whether the port started successfully.
[root@ha1 ~]# netstat -unlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
udp        0      0 0.0.0.0:59828      0.0.0.0:*                   26435/heartbeat: wr
udp        0      0 225.0.1.1:694      0.0.0.0:*                   26435/heartbeat: wr
Check whether the VIP is enabled normally.
[root@ha1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DF:C0:9A
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fedf:c09a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          (packet counters omitted)
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:DF:C0:9A
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
View log information
[root@ha1 ~]# tail /var/log/ha-log
ip-request-resp[26573]: 2015/10/22_14:28:52 received ip-request-resp 192.168.1.10/24/eth0 OK yes
ResourceManager[26592]: 2015/10/22_14:28:52 info: Acquiring resource group: ha1.dawu.com 192.168.1.10/24/eth0 httpd
IPaddr[26618]: 2015/10/22_14:28:52 INFO:  Resource is stopped
ResourceManager[26592]: 2015/10/22_14:28:52 info: Running /etc/ha.d/resource.d/IPaddr 192.168.1.10/24/eth0 start
IPaddr[26715]: 2015/10/22_14:28:52 INFO: Using calculated netmask for 192.168.1.10: 255.255.255.0
IPaddr[26715]: 2015/10/22_14:28:52 INFO: eval ifconfig eth0:0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255
IPaddr[26686]: 2015/10/22_14:28:52 INFO:  Success
ResourceManager[26592]: 2015/10/22_14:28:52 info: Running /etc/init.d/httpd start
heartbeat[26430]: 2015/10/22_14:29:02 info: Local Resource acquisition completed. (none)
heartbeat[26430]: 2015/10/22_14:29:02 info: local resource transition completed.
The ha1 node server has started successfully.
Start the ha2 node service, via the ha1 node:
[root@ha1 ha.d]# ssh ha2 'service heartbeat start'
Starting High-Availability services:
2015/10/22_14:32:51 INFO:  Resource is stopped
Done.
Started successfully.
Test the heartbeat service
Check whether failover works normally.
1. Log in to the web page and view its content. If ha1.dawu.com is displayed, the service is working and is being provided by the ha1.dawu.com host.
2. Stop the ha1 node service.
To stop heartbeat you can use service heartbeat stop, or use hb_standby to hand the service over to the standby node.
[root@ha1 ha.d]# /usr/lib64/heartbeat/hb_standby
2015/10/22_14:46:18 Going standby [all].
[root@ha1 ha.d]#
Log in to the web page again and check its content: it now shows ha2.dawu.com, indicating that the ha2 node is providing the service. The heartbeat dual-machine installation and configuration is complete; the service runs normally and switches correctly, and the basic functions have been realized.
Heartbeat HA dual-machine hot backup based on NFS
The heartbeat dual-machine hot backup solution based on local files has been installed above: the web files are stored locally on the two hosts and must be kept consistent in real time. This time we use an NFS-based shared storage solution, mounting an NFS file system to implement the HA dual-machine hot backup. This experiment only demonstrates plugging other storage solutions into heartbeat; NFS is generally not used in production environments, but the principle is the same for other storage methods.

1. Preparations
1.1 Add a server to provide NFS shared storage
[root@nfs ~]# uname -n
nfs.dawu.com
IP address: 192.168.1.100

1.2 Install nfs
[root@nfs ~]# yum install nfs*

1.3 Create directories and web files
[root@nfs ~]# mkdir /data/html -pv
mkdir: created directory `/data/html'
[root@nfs data]# vim /data/html/index.html
NFS share!
1.4 Create a share
[root@nfs data]# vim /etc/exports
/data/html 192.168.1.0/24(rw,sync,no_root_squash)
[root@nfs data]# exportfs -arv
exporting 192.168.1.0/24:/data/html

1.5 Enable the nfs service and set it to start at boot
[root@nfs data]# service nfs start
Starting NFS services:                             [  OK  ]
Starting NFS quotas:                               [  OK  ]
Starting NFS mountd:                               [  OK  ]
Starting NFS daemon:                               [  OK  ]
Starting RPC idmapd:                               [  OK  ]
[root@nfs data]# chkconfig nfs on   # set to start at boot

2. Test mounting nfs on the nodes
2.1 Test mounting nfs on the ha1 node
Check whether the nfs-utils package is installed
[root@nfs data]# rpm -qa nfs-utils
nfs-utils-1.2.3-64.el6.x86_64
If not installed:
yum install nfs-utils

[root@ha1 ~]# mount -t nfs 192.168.1.100:/data/html /var/www/html/
[root@ha1 ~]# mount   # view the mount information; shows nfs is mounted
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.101)
[root@ha1 ~]# cat /var/www/html/index.html   # check that the web page content is normal
NFS share!
[root@ha1 ~]# umount /var/www/html/   # unmount the mounted directory
2.2 Test mounting nfs on the ha2 node
[root@ha2 heartbeat2]# mount -t nfs 192.168.1.100:/data/html /var/www/html/
[root@ha2 heartbeat2]# mount   # view the mount information; shows nfs is mounted
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.102)
[root@ha2 heartbeat2]# cat /var/www/html/index.html   # check that the web page content is normal
NFS share!
[root@ha2 heartbeat2]# umount /var/www/html/   # unmount the mounted directory
The ha1 and ha2 nodes have both successfully tested NFS mounting.
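The manual `mount` checks on each node can be turned into a small test. A sketch; the `is_mounted` helper is an assumption, and here it is fed a canned line, while on a node the real `mount` output would be piped in.

```shell
#!/bin/sh
# Sketch: given mount(8) output on stdin, check whether a source is NFS-mounted at a target.
is_mounted() {
    src=$1; dst=$2
    grep -q "^$src on $dst type nfs"
}

sample="192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4)"
echo "$sample" | is_mounted "192.168.1.100:/data/html" "/var/www/html" && echo "mounted"
```

On a live node the call would be `mount | is_mounted 192.168.1.100:/data/html /var/www/html`.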
3. Modify the haresources configuration file
[root@ha1 ha.d]# vim haresources
ha1.dawu.com 192.168.1.10/24/eth0 Filesystem::192.168.1.100:/data/html::/var/www/html::nfs httpd

Brief parameter description: the field separator within a resource is ::, and the resource agent name must match the agent (script) name exactly, case sensitive.
First field, Filesystem: the type of resource agent.
Second field, 192.168.1.100:/data/html: the NFS shared directory on that host.
Third field, /var/www/html: the mount point.
Fourth field, nfs: the file system type.

[root@ha1 ha.d]# scp haresources ha2:/etc/ha.d/   # copy the configuration file to ha2
haresources                                   100%  600   5.9KB/s   00:00

4. Start the heartbeat service on both nodes
[root@ha1 ha.d]# service heartbeat start; ssh ha2 'service heartbeat start'
Starting High-Availability services:
2015/10/22_16:37:22 INFO:  Resource is stopped
Done.
Starting High-Availability services:
2015/10/22_16:37:23 INFO:  Resource is stopped
Done.

5. Test the web page
Log in to http://192.168.1.10/; the page displays normally.
Check the mount information on the ha1 node, which shows the nfs file system is mounted:
[root@ha1 ~]# mount
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.101)
Stop the ha1 node service and check whether the ha2 node takes over normally:
[root@ha1 ~]# /usr/lib64/heartbeat/hb_standby
2015/10/22_16:41:58 Going standby [all].
[root@ha2 heartbeat2]# mount
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.102)
Heartbeat HA based on the nfs system has now been implemented.
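The Filesystem resource specification used in haresources separates the agent name and its arguments with ::. A sketch of splitting such a spec; the `split_resource` helper is hypothetical.

```shell
#!/bin/sh
# Sketch: split a haresources resource spec on "::" to recover the agent name
# and its arguments, one per line.
split_resource() {
    echo "$1" | sed 's/::/\n/g'
}

split_resource "Filesystem::192.168.1.100:/data/html::/var/www/html::nfs"
# → Filesystem
#   192.168.1.100:/data/html
#   /var/www/html
#   nfs
```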
HA mysql+wordpress based on the heartbeat v2 CRM
Description:
Using the LAMP stack, implement HA based on the heartbeat v2 CRM. Deploy wordpress so that any edited article data can still be accessed normally after the nodes switch.
1.1 Stop the heartbeat service
[root@ha1 heartbeat2]# service heartbeat stop; ssh ha2 'service heartbeat stop'
Stopping High-Availability services: Done.
Stopping High-Availability services: Done.

1.2 Create a mysql user on the nfs server
[root@nfs html]# groupadd -g 27 mysql
[root@nfs html]# useradd -u 27 -g mysql -s /sbin/nologin -M mysql
[root@nfs html]# id mysql
uid=27(mysql) gid=27(mysql) groups=27(mysql)
Note: the mysql user's gid and uid created on the nfs server must be consistent with the mysql users on ha1 and ha2.
1.3 Create data files and modify permissions
[root@nfs html]# chown -R mysql.mysql /data/data/
[root@nfs html]# ll /data/data/ -d
drwxr-xr-x. 2 mysql mysql 4096 Oct 22 22:11 /data/data/

1.4 Create the apache user on the nfs server
ha1:
[root@ha1 heartbeat2]# id apache
uid=48(apache) gid=48(apache) groups=48(apache)
ha2:
[root@ha2 ~]# id apache
uid=48(apache) gid=48(apache) groups=48(apache)
nfs:
[root@nfs html]# id apache
uid=48(apache) gid=48(apache) groups=48(apache)
Note: the gid and uid are the same on all three machines.
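The uid/gid consistency requirement for the mysql and apache users can be verified mechanically. A sketch; the `ids_match` helper is hypothetical, and both values are canned here, while on the cluster the second would come from `ssh <node> 'id apache'`.

```shell
#!/bin/sh
# Sketch: compare `id` output for a user across hosts; any mismatch means the
# NFS-served files would map to different owners on different nodes.
ids_match() {
    [ "$1" = "$2" ]
}

a="uid=48(apache) gid=48(apache) groups=48(apache)"
b="uid=48(apache) gid=48(apache) groups=48(apache)"
ids_match "$a" "$b" && echo "apache uid/gid consistent"
```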
1.5 Modify permissions
[root@nfs html]# chown -R apache.apache /data/html/
[root@nfs html]# ll /data/html/ -d
drwxr-xr-x. 3 apache apache 4096 Oct 22 21:18 /data/html/

1.6 Add shared storage
[root@nfs ~]# vim /etc/exports
/data/html 192.168.1.0/24(rw,no_root_squash)
/data/data 192.168.1.0/24(rw,no_root_squash)
[root@nfs ~]# exportfs -avr
exporting 192.168.1.0/24:/data/data
exporting 192.168.1.0/24:/data/html

1.7 Restart the nfs service
[root@nfs ~]# service nfs restart
Shutting down NFS daemon:                          [  OK  ]
Shutting down NFS mountd:                          [  OK  ]
Shutting down NFS services:                        [  OK  ]
Shutting down RPC idmapd:                          [  OK  ]
Starting NFS services:                             [  OK  ]
Starting NFS mountd:                               [  OK  ]
Starting NFS daemon:                               [  OK  ]
Starting RPC idmapd:                               [  OK  ]

1.8 Download wordpress
[root@nfs source]# wget https://cn.wordpress.org/wordpress-4.3.1-zh_CN.tar.gz
--2015-10-22 21:14:29--  https://cn.wordpress.org/wordpress-4.3.1-zh_CN.tar.gz
Resolving cn.wordpress.org... 66.155.40.249, 66.155.40.250
Connecting to cn.wordpress.org|66.155.40.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6922520 (6.6M) [application/octet-stream]
Saving to: "wordpress-4.3.1-zh_CN.tar.gz"
100%[======================================>] 6,922,520    221K/s   in 58s
2015-10-22 21:15:27 (117 KB/s) - "wordpress-4.3.1-zh_CN.tar.gz" saved [6922520/6922520]
1.9 Unzip wordpress
[root@nfs source]# tar -zxvf wordpress-4.3.1-zh_CN.tar.gz -C /data/html/   # extract to the /data/html directory
[root@nfs html]# ls /data/html/
index.html  wordpress

1.10 Install mysql on ha1 and ha2
[root@ha1 heartbeat2]# yum -y install mysql mysql-server mysql-devel

2. Test mounting on the two nodes
2.1 Mount nfs wordpress on the ha1 node
[root@ha1 heartbeat2]# mount -t nfs 192.168.1.100:/data/html /var/www/html/
[root@ha1 heartbeat2]# mount
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.101)
[root@ha1 heartbeat2]# ls /var/www/html/   # the wordpress files are visible as expected
index.html  wordpress

2.2 Mount nfs mysql on the ha1 node
[root@ha1 heartbeat2]# mkdir /mydata/data -p
[root@ha1 heartbeat2]# mount -t nfs 192.168.1.100:/data/data /mydata/data/
[root@ha1 heartbeat2]# mount
192.168.1.100:/data/html on /var/www/html type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.101)
192.168.1.100:/data/data on /mydata/data type nfs (rw,vers=4,addr=192.168.1.100,clientaddr=192.168.1.101)

2.3 Initialize the mysql database
[root@ha1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/mydata/data   # change the database data file path
[root@ha1 heartbeat2]# /usr/bin/mysql_install_db --user=mysql --datadir=/mydata/data/
Installing MySQL system tables...
OK
Filling help tables...
OK
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER!
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h ha1.dawu.com password 'new-password'
Alternatively you can run:
/usr/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl
Please report any problems with the /usr/bin/mysqlbug script!

2.4 Start mysql
[root@ha1 ~]# service mysqld start   # started successfully

2.5 Unmount the mounts
[root@ha1 ~]# umount /mydata/data/
umount.nfs: /mydata/data: device is busy   # the database must be stopped first
[root@ha1 ~]# service mysqld stop
Stopping mysqld:                                   [  OK  ]
[root@ha1 ~]# umount /mydata/data/   # unmounted successfully
[root@ha1 data]# service httpd stop
[root@ha1 data]# umount /var/www/html/

2.6 Test the mount point and database on node ha2
[root@ha2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/mydata/data   # change the database data file path
[root@ha2 ~]# mount -t nfs 192.168.1.100:/data/data /mydata/data
mount.nfs: mount point /mydata/data does not exist   # the /mydata/data directory does not exist
[root@ha2 ~]# mkdir /mydata/data -p   # create the directory
[root@ha2 ~]# mount -t nfs 192.168.1.100:/data/data /mydata/data   # mounted successfully
[root@ha2 ~]# ll -d /mydata/data/   # check the directory permissions
drwxr-xr-x. 4 mysql mysql 4096 Oct 23 03:37 /mydata/data/
[root@ha2 ~]#

2.7 Restart the database
[root@ha2 ~]# service mysqld restart
Stopping mysqld:                                   [  OK  ]
Starting mysqld:                                   [  OK  ]
[root@ha2 ~]# mysql   # connect to the database
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;   # list the databases
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.01 sec)
mysql> create database testdb;   # create the testdb database
Query OK, 1 row affected (0.01 sec)
mysql> quit
Bye

3. Install heartbeat-gui
3.1 Perform the following operations on both node ha1 and node ha2
[root@ha1 ~]# cd /usr/local/src/rpm/heartbeat2/
[root@ha1 heartbeat2]# ls
heartbeat-2.1.4-12.el6.x86_64.rpm            heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
heartbeat-debuginfo-2.1.4-12.el6.x86_64.rpm  heartbeat-pils-2.1.4-12.el6.x86_64.rpm
heartbeat-devel-2.1.4-12.el6.x86_64.rpm      heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
heartbeat-gui-2.1.4-12.el6.x86_64.rpm        libnet-1.1.6-7.el6.x86_64.rpm
[root@ha1 heartbeat2]# yum localinstall heartbeat-gui-2.1.4-12.el6.x86_64.rpm

3.2 Configure the ha.cf file
[root@ha1 heartbeat2]# vim /etc/ha.d/ha.cf
crm on   # add this line
Copy it to ha2:
[root@ha1 heartbeat2]# scp /etc/ha.d/ha.cf 192.168.1.102:/etc/ha.d/
ha.cf                                         100%   10KB  10.3KB/s   00:00

3.3 Start heartbeat
[root@ha1 ~]# service heartbeat start; ssh ha2 'service heartbeat start'
Starting High-Availability services: Done.
Starting High-Availability services: Done.
[root@ha1 ~]# crm_mon
Last updated: Fri Oct 23 22:08:31 2015
Current DC: ha2.dawu.com (ab5b0dc8-c583-4381-8185-11dc12dc5755)
2 Nodes configured.
0 Resources configured.
============
Node: ha2.dawu.com (ab5b0dc8-c583-4381-8185-11dc12dc5755): online
Node: ha1.dawu.com (806ae335-3fbc-4747-b94b-89076a75a74a): online
[root@ha1 ~]# hb_gui
Traceback (most recent call last):
  File "/usr/bin/hb_gui", line 41, in ?
    import gtk, gtk.glade, gobject
  File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 76, in ?
    _init()
  File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 64, in _init
    _gtk.init_check()
RuntimeError: could not open display
Starting hb_gui in Xshell fails with this error; the solution follows.
The specific steps are: File -> Properties -> SSH -> Tunneling -> Forward X11 connections to: Xmanager, then restart Xshell and test again.
If the following warning appears after restarting as above:
The remote SSH server rejected X11 forwarding request.
check the sshd configuration file for anything wrong.
If the X11Forwarding yes parameter is set correctly, fix it as follows:
yum install xorg-x11-xauth
If garbled characters appear:
yum install xorg-x11-font*
After much effort, hb_gui finally works normally.

4. Configure the CRM
To use the CRM resource manager we need to configure a username and password. After heartbeat is installed it creates a hacluster user by default for managing cluster resources. Below we set hacluster's password to hacluster, i.e. the username is hacluster and the password is also hacluster. (Note: the CRM configuration only takes effect on the DC.)
ha1:
[root@ha1 ~]# tail /etc/passwd
saslauth:x:499:76:Saslauthd user:/var/empty/saslauth:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
ntp:x:38:38::/etc/ntp:/sbin/nologin
apache:x:48:48:Apache:/var/www:/sbin/nologin
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
hacluster:x:498:498:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
[root@ha1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: it is based on a (reversed) dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
ha2:
[root@ha2 ~]# tail /etc/passwd
basher:x:505:506::/home/basher:/bin/bash
nologin:x:506:507::/home/nologin:/sbin/nologin
user1:x:507:508::/home/user1:/bin/bash
ntp:x:38:38::/etc/ntp:/sbin/nologin
apache:x:48:48:Apache:/var/www:/sbin/nologin
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
hacluster:x:497:497:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
[root@ha2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: it is based on a (reversed) dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@ha2 ~]#

4.1 CRM graphical interface configuration details
4.1 Log in to the DC node
Note: when configuring resources through the graphical interface, we generally connect to the DC node; after all resources are configured on the DC node, it automatically synchronizes them to the other nodes.
[root@ha1 ~]# hb_gui &
[1] 34103
(1). Click Connect -> select Login
(2). Enter the DC node name or IP address, enter the password, and click OK
(3). The default interface displayed after login
4.2 Resources in the highly available web cluster
VIP
httpd
filesystem
mysql
Note: the highly available web cluster has four resources: the VIP, the httpd service, the mysql service, and the filesystem (NFS, used to store the web files and the mysql database files). Next we configure these four resources.
4.3 Adding resources in the CRM
(1). Add a resource
(2). Add a group resource (note: VIP, httpd, mysql and filesystem all belong to the web high-availability cluster, so they go in one group; here we select group)
(3). Give the group resource an ID
(4). Add the VIP: give it the ID webip and set the IP to 192.168.1.10; you can also see that the VIP belongs to the Web Service group
(5). Add the VIP parameters, such as the subnet mask
(6). Set which interface alias the VIP sits on
(7). The newly added webip is not running yet; right-click it to make it run
(8). Start webip
(9). The started webip is now running on the DC node
(10). Add nfs-mysql, mysql, nfs-wordpress and httpd in the same way
(11). The result after adding them all:
(12). After starting the group, open http://192.168.1.10/wordpress
(13). Create the wordpress database and install wordpress
(14). Add an article in wordpress
(15). Switch nodes
(16). View wordpress; the newly added article still exists
That's it: HA mysql+wordpress based on the heartbeat v2 CRM is complete. Other features will be covered in detail when time permits.