Graphical interface configuration of a Heartbeat high-availability cluster

Modify the ha.cf configuration file and enable the crm function, which hands resource configuration over to CRM management.

[root@node1 ~]# vim /etc/ha.d/ha.cf
crm on

Use the built-in ha_propagate script to synchronize the configuration file to the other node

[root@node1 ~]# /usr/lib/heartbeat/ha_propagate
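If the ha_propagate script is not present at that path on your build, a manual copy of the two Heartbeat configuration files is roughly equivalent (a sketch; the standard /etc/ha.d paths are assumed):

[root@node1 ~]# scp /etc/ha.d/ha.cf /etc/ha.d/authkeys node2:/etc/ha.d/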

Install the heartbeat-gui package

[root@node1 heartbeat2]# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm
[root@node2 heartbeat2]# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm

After the installation is complete, start the heartbeat service

service heartbeat start; ssh node2 'service heartbeat start'

To connect to the graphical interface you need to enter a user name and password for the node. A hacluster user is created automatically during installation; set a password for it and you can connect.

[root@node1 ~]# echo "hacluster" | passwd --stdin hacluster
[root@node2 ~]# echo "hacluster" | passwd --stdin hacluster

Run hb_gui to start the graphical configuration interface (the terminal used here is Xmanager Enterprise; the interface cannot be opened from CRT or PuTTY).

Note: hb_gui displays in English by default; if you want it in Chinese, modify the locale: LANG="zh_CN.GB18030"
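For example, a minimal way to apply that locale only to the hb_gui session (assuming the zh_CN.GB18030 locale is installed on the system):

[root@node1 ~]# LANG="zh_CN.GB18030" hb_gui &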

Opened interface

Add new resources

Add a new resource and select "common resource" as the resource type

Element type description:

Common resource, resource group: define resource types

Location, order, colocation: define resource constraints

Select the detailed settings of the resource

Note that when adding a resource, the parameters marked required and unique must be filled in.
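For reference, the same VIP resource can also be created from the command line with cibadmin. This is only a sketch of the XML the GUI generates; the resource ID webip is assumed, and the ip value 172.16.4.1 is taken from the output shown later in this article:

[root@node1 ~]# cibadmin -C -o resources -X '
<primitive id="webip" class="ocf" provider="heartbeat" type="IPaddr">
  <instance_attributes id="webip_attrs">
    <attributes>
      <nvpair id="webip_ip" name="ip" value="172.16.4.1"/>
    </attributes>
  </instance_attributes>
</primitive>'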

Resource addition completed

Start the resource

Log in to node2 and check that the resource has started

[root@node2 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:F1:DD:B2
          inet addr:172.16.4.101  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fef1:ddb2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:68754 errors:0 dropped:0 overruns:0 frame:0
          collisions:0 txqueuelen:1000
          RX bytes:16523275 (15.7 MiB)  TX bytes:4286115 (4.0 MiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:F1:DD:B2
          inet addr:172.16.4.1  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Add an httpd resource

Add the web resource and start it; you will find that the web resource and the vip resource are on different nodes. In this situation web high availability cannot be achieved.

There are two solutions. One is to define a group resource and add both resources to the same group; the other is to define constraints between the two resources.

Setting up web high availability

Add a group resource to facilitate resource constraints

Set the group name

After creating the group, you need to create a VIP resource


Add the LSB httpd resource

Start the resources. Since the two resources are in the same group, they run on the same node.

Verify the resources

[root@node2 ~]# ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:F1:DD:B2
          inet addr:172.16.4.1  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
[root@node2 ~]# netstat -lnt | grep :80
tcp        0      0 :::80                       :::*                        LISTEN

Resource constraints

Location constraints

Define which node the resource should preferentially run on


Define an expression: if the node name is node1.xmfb.com, then the resource must run on that node.
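A command-line sketch of the same rule for reference (the IDs are assumed; the GUI builds the equivalent XML for you):

[root@node1 ~]# cibadmin -C -o constraints -X '
<rsc_location id="webip_on_node1" rsc="webip">
  <rule id="webip_on_node1_rule" score="INFINITY">
    <expression id="webip_on_node1_expr" attribute="#uname" operation="eq" value="node1.xmfb.com"/>
  </rule>
</rsc_location>'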



Order constraints

Define the order in which resources are started
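A rough command-line sketch of an order constraint (start the vip before the web server). The resource IDs are assumed, and the rsc_order attribute names differ between CRM versions (from/to/type in the Heartbeat 2.x schema, first/then in later Pacemaker), so verify against your version's CIB DTD:

[root@node1 ~]# cibadmin -C -o constraints -X '
<rsc_order id="web_after_vip" from="webserver" action="start" to="webip" type="after"/>'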


Colocation constraints

By default, resources are balanced across the two nodes when started. If you want two resources to start on the same node, there are two ways:

(1) Define the two resources in the same group

(2) Set a colocation constraint (see the sketch below)
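A rough command-line sketch of option (2). As with the order constraint above, the rsc_colocation attribute names vary by CRM version (from/to here follows the Heartbeat 2.x schema) and the resource IDs are assumed:

[root@node1 ~]# cibadmin -C -o constraints -X '
<rsc_colocation id="web_with_vip" from="webserver" to="webip" score="INFINITY"/>'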


Under the constraints section, add a new colocation condition

Now the web and vip resources run together on the same node

Define an NFS-based mariadb cluster

Required resources:

vip

Mariadb

Shared storage (NFS)

NFS Configuration

Configure the MySQL shared storage directory on the NFS server

[root@NFS ~]# groupadd -r -g 306 mysql
[root@NFS ~]# useradd -r -g 306 -u 306 mysql
[root@NFS ~]# id mysql
uid=306(mysql) gid=306(mysql) groups=306(mysql)
[root@NFS ~]# mkdir -p /mydata/data
[root@NFS ~]# chown -R mysql.mysql /mydata/data/
[root@NFS ~]# vim /etc/exports
/mydata/data 172.16.0.0/16(rw,no_root_squash)
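The source does not show activating the export; on RHEL/CentOS 6 something along these lines is typically needed before the nodes can mount it (a sketch):

[root@NFS ~]# service nfs restart
[root@NFS ~]# showmount -e 172.16.4.200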

Node1 Configuration

[root@node1 ~]# mkdir /mydata
[root@node1 ~]# groupadd -g 306 mysql
[root@node1 ~]# useradd -g 306 -u 3306 -s /sbin/nologin -M mysql
[root@node1 ~]# mount -t nfs 172.16.4.200:/mydata/ /mydata/

Node2 Configuration

[root@node2 ~]# mkdir /mydata
[root@node2 ~]# groupadd -g 306 mysql
[root@node2 ~]# useradd -g 306 -u 3306 -s /sbin/nologin -M mysql
[root@node2 ~]# mount -t nfs 172.16.4.200:/mydata/ /mydata/

Install mariadb on node1

Install and initialize

tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
ln -sv mariadb-5.5.43-linux-x86_64/ mysql
cd mysql/
chown -R mysql:mysql .
scripts/mysql_install_db --user=mysql --datadir=/mydata/data/

Provide the main configuration file

[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@node1 mysql]# vim /etc/my.cnf
thread_concurrency = 2
datadir = /mydata/data
innodb_file_per_table = 1

Provide startup script

[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node1 mysql]# chmod +x /etc/rc.d/init.d/mysqld
[root@node1 mysql]# chkconfig --add mysqld
[root@node1 mysql]# chkconfig mysqld off

Modify the PATH environment variable

[root@node1 mysql]# echo "export PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile.d/mysql.sh
[root@node1 mysql]# . /etc/profile.d/mysql.sh

Start MySQL and create a database

[root@node1 mysql]# service mysqld start
Starting MySQL... [ OK ]
[root@node1 mysql]# mysql
MariaDB [(none)]> create database mydb;
Query OK, 1 row affected (0.03 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.07 sec)

Stop MySQL and unmount the NFS share

[root@node1 mysql]# service mysqld stop
[root@node1 mysql]# umount /mydata/

Install mariadb on node2

Installing mariadb on node2 does not require NFS to be mounted

[root@node2 ~]# umount /mydata/

Install mariadb; note that no initialization is required here

tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
ln -sv mariadb-5.5.43-linux-x86_64/ mysql
cd mysql/
chown -R mysql:mysql .

Copy the configuration file and startup script from node1 to node2

[root@node1 mysql]# scp /etc/my.cnf node2:/etc/
[root@node1 mysql]# scp /etc/init.d/mysqld node2:/etc/init.d/mysqld

Add the mariadb startup script on node2

[root@node2 ~]# chkconfig --add mysqld
[root@node2 ~]# chkconfig mysqld off

Add the PATH environment variable

[root@node2 ~]# echo "export PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile.d/mysql.sh
[root@node2 ~]# . /etc/profile.d/mysql.sh

Mount NFS

[root@node2 ~]# mount -t nfs 172.16.4.200:/mydata/ /mydata/

Start mariadb and check whether the mydb database created on node1 is visible

[root@node2 ~]# service mysqld start
[root@node2 ~]# mysql
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)

Stop the mariadb service and unmount the NFS share

[root@node2 ~]# service mysqld stop
[root@node2 ~]# umount /mydata/

Configure the mariadb + NFS cluster

Create a group


Create a VIP resource

Add the NFS filesystem resource


Add mariadb; select mysqld here
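For reference, a hedged command-line equivalent of adding the LSB mysqld resource (the resource ID ha_mysqld is assumed; the GUI generates its own):

[root@node1 ~]# cibadmin -C -o resources -X '
<primitive id="ha_mysqld" class="lsb" type="mysqld"/>'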


Start the HA_Mariadb group resource; the resources automatically run on the node2 node


Verify the mariadb high-availability cluster

View resource startup

[root@node2 ~]# ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:F1:DD:B2
          inet addr:172.16.4.1  Bcast:172.16.4.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
[root@node2 ~]# netstat -lnt | grep 3306
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN
[root@node2 ~]# mount | grep mydata
172.16.4.200:/mydata on /mydata type nfs (rw,vers=4,addr=172.16.4.200,clientaddr=172.16.4.1)

After the node2 node is set to standby, the resources are transferred to the node1 node.
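The same standby switch can also be done from the command line with crm_standby, which ships with Heartbeat 2.x; the node name node2.xmfb.com and the exact option names are assumptions to check against your version:

[root@node2 ~]# crm_standby -U node2.xmfb.com -v on
[root@node2 ~]# crm_standby -U node2.xmfb.com -D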
