Volume types and configuration details of the GlusterFS distributed file system

Blog post outline:
(1) GlusterFS related concepts.
(2) The deployment of various volume types of GlusterFS and the use of client mounts.
(3) GlusterFS maintenance commands.

(1) GlusterFS related concepts:

GlusterFS is an open source distributed file system and the core of Gluster, a scale-out storage solution. It has strong horizontal scaling capability for storing data. GlusterFS is composed mainly of storage servers, clients, and an optional NFS/Samba storage gateway. The most distinctive design feature of the GlusterFS architecture is that it has no metadata server component, which means there is no master/slave relationship between servers: every node can act as a server.

1) The Gluster reference documents are listed below (my configuration is based on a local yum repository; if you need to build the latest version, follow the documentation links directly):

Gluster official website; official documentation for installing Gluster on CentOS 7 / Red Hat.

2) GlusterFS related terms:

  • Brick (storage block): the dedicated partition that a host in the trusted storage pool provides for physical storage.
  • Volume (logical volume): a collection of bricks. A volume is the logical device on which data is stored.
  • FUSE: a kernel module that allows users to create their own file systems without modifying kernel code.
  • Glusterd (background management process): runs on every node in the storage cluster.
  • VFS: the interface that kernel space provides to user space for accessing disks.

3) GlusterFS volume type:

  • Distributed volume: equivalent to a spanned volume in Windows; it only expands disk space and provides no fault tolerance.
  • Striped volume: equivalent to a striped volume in Windows (RAID 0); a file is read from and written to multiple disks, so the larger the file, the higher the read/write efficiency, but there is no fault tolerance.
  • Replicated volume: equivalent to a mirrored volume in Windows (RAID 1); it is fault tolerant and has high read performance, but write performance is reduced because the same file must be written to multiple bricks simultaneously.
  • Distributed striped volume: the number of brick servers is a multiple of the stripe count (the number of bricks a file's data blocks are spread across); it combines the characteristics of distributed and striped volumes.
  • Distributed replicated volume: the number of brick servers is a multiple of the replica count (the number of data copies); it combines the characteristics of distributed and replicated volumes.
  • Striped replicated volume: similar to RAID 10; it combines the characteristics of striped and replicated volumes.
  • Distributed striped replicated volume: a composite of the three basic volume types, usually used for map-reduce applications.

Some of the volume types above may not be fully clear yet, but that is fine. In a production environment, most companies weigh disk utilization and use RAID 5 or RAID 10. For configuring RAID 5 style volumes, please refer to: GlusterFS Dispersed Volume (erasure-coded volume) summary.

4) The characteristics of the main GlusterFS volume types are introduced below (excluding dispersed/RAID 5 volumes):

1. Distributed volume (similar to a Windows spanned volume):

The distributed volume is the default volume type in GlusterFS: when creating a volume, a distributed volume is created unless another type is specified. In this mode, files are not split into blocks; each file is stored whole on a single server node.

Distributed volumes have the following characteristics:

  • Files are distributed across different servers, with no redundancy.
  • It is easy and cheap to expand the size of the volume.
  • A single point of failure can cause data loss.
  • It relies on the underlying storage for data protection.

2. Striped volume (similar to a striped volume in Windows, also known as RAID 0):

Stripe mode is equivalent to RAID 0. In this mode, a file is divided into N blocks according to offset (N is the number of stripe nodes) and stored across the Brick Server nodes in a round-robin fashion. Each node stores its data blocks as ordinary files in the local file system and records the total number of blocks and the sequence number of each block in extended attributes. The stripe count specified at creation time must equal the number of storage servers contained in the bricks of the volume. Performance is particularly good when storing large files, but there is no redundancy.

Striped volumes have the following characteristics:
  • Data is divided into smaller chunks and distributed across the bricks in the server group.
  • The distribution reduces load, and the smaller chunks speed up access.
  • There is no data redundancy.
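As a rough illustration of the extended-attribute bookkeeping mentioned above, the stripe metadata that GlusterFS records for a file can be inspected directly on a brick with getfattr (a hypothetical example; /d5/bigfile stands for any file already written to that brick, and the attr package must be installed):

[root@node1 ~]# getfattr -d -m . -e hex /d5/bigfile          #Dump all extended attributes of the file as stored on the brick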

3. Replicated volume (similar to a mirrored volume in Windows, also known as RAID 1):

In replica mode, one or more copies of the same file are kept, and every node stores the same content and directory structure. Because copies must be kept, disk utilization is low. If the storage space on the nodes is not the same size, the capacity of the smallest node is taken as the total capacity of the volume (the "barrel effect"). A replicated volume provides redundancy: even if one node fails, normal use of the data is not affected.

Replicated volumes have the following characteristics:
  • Every server in the volume stores a complete copy of the data.
  • The number of replicas can be chosen by the user when the volume is created.
  • At least two brick servers are required.
  • It provides redundancy.

4. Distributed replicated volume (also known as RAID 10):

A distributed replicated volume combines the functions of a distributed volume and a replicated volume, and is mainly used when redundancy is required.

(2) Deployment of various volume types of GlusterFS and client mount usage:

My environment here is as follows:

Server related information: four storage nodes, node1 (192.168.1.1), node2 (192.168.1.2), node3 (192.168.1.3) and node4 (192.168.1.4), plus one client host (shown as a table in the original post).

Disk related information: each node provides bricks mounted at /b3, /c4, /d5 and /e6 (shown as a table in the original post; the disk sizes are not important here).

1. Preparation before deployment:
1. Perform the following operations on all nodes: add disks according to the table above, partition them with fdisk, format them with mkfs, create the corresponding mount directories, mount the formatted disks on those directories, and finally add entries to /etc/fstab so they are mounted permanently. For the detailed steps, please refer to my earlier blog post: CentOS 7.3 Create, Mount and Unmount (including automatic mounting) file systems. (My main purpose here is to take notes, so the disk sizes do not reflect a real environment; partition your disks according to your actual situation.)
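For reference, here is a minimal sketch of the disk preparation on one node (assuming the new disk appears as /dev/sdb and is used for the /b3 brick; the device name, file system and sizes are examples, not taken from the original environment):

[root@node1 ~]# fdisk /dev/sdb                                  #Create a single partition /dev/sdb1 interactively (n, p, 1, defaults, w)
[root@node1 ~]# mkfs.xfs /dev/sdb1                              #Format the partition (xfs is the CentOS 7 default; ext4 also works)
[root@node1 ~]# mkdir -p /b3                                    #Create the mount directory
[root@node1 ~]# mount /dev/sdb1 /b3                             #Mount it
[root@node1 ~]# echo "/dev/sdb1 /b3 xfs defaults 0 0" >> /etc/fstab    #Make the mount permanent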

2. Configure the firewall and SELinux yourself; for convenience, I simply disabled them here.
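For example, this is roughly how they can be turned off (adjust to your own security policy; opening only the required ports is the better practice in production):

[root@node1 ~]# systemctl stop firewalld && systemctl disable firewalld                 #Stop and disable the firewall
[root@node1 ~]# setenforce 0                                                            #Switch SELinux to permissive for the current session
[root@node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    #Disable SELinux permanently (takes effect after reboot)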

3. Download the local yum repository I provide and upload it to every node server.

2. Start deployment:

1. The node1 configuration is as follows:

[root@node1 ~]# vim /etc/hosts          #Append the last four lines to add name resolution for the 4 nodes
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4
[root@node1 ~]# mount /dev/cdrom /media          #Mount the yum repository I provided
mount: /dev/sr0 is write-protected, mounting read-only
[root@node1 ~]# rm -rf /etc/yum.repos.d/*        #Delete or move away the original yum configuration files
[root@node1 ~]# yum clean all                    #Clear the yum cache
[root@node1 ~]# vim /etc/yum.repos.d/a.repo      #Edit a yum configuration file and write the following content
[fd]
baseurl=file:///media
gpgcheck=0
#After writing the above three lines, save and exit.
[root@node1 ~]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
#Install the GlusterFS software
[root@node1 ~]# systemctl start glusterd         #Start the service
[root@node1 ~]# systemctl enable glusterd        #Enable it to start automatically at boot
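Optionally, verify that the daemon is running and check the installed version (a quick sanity check; the exact version depends on the repository used):

[root@node1 ~]# systemctl status glusterd        #The service should show active (running)
[root@node1 ~]# glusterfs --version              #Print the installed GlusterFS version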

2. The node2 configuration is as follows:

[root@node2 ~]# scp root@192.168.1.1:/etc/hosts /etc/      #Copy the hosts configuration file from node1
The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
ECDSA key fingerprint is SHA256:BS+lKMN05pYF3F1XeIYU69VnHjzKBiBiMZ1SDKgsxxs.
ECDSA key fingerprint is MD5:ba:0b:a7:47:55:01:6f:41:41:5f:ee:b8:88:bf:7a:60.
Are you sure you want to continue connecting (yes/no)? yes          #Enter "yes"
Warning: Permanently added '192.168.1.1' (ECDSA) to the list of known hosts.
root@192.168.1.1's password:                     #Enter the password of the remote user
hosts                          100%  230   286.9KB/s   00:00
[root@node2 ~]# rm -rf /etc/yum.repos.d/*        #Delete or move away the original yum configuration files
[root@node2 ~]# yum clean all                    #Clear the yum cache
[root@node2 ~]# scp root@192.168.1.1:/etc/yum.repos.d/a.repo /etc/yum.repos.d/
#Copy the yum configuration file from node1
root@192.168.1.1's password:
a.repo                         100%   38    31.1KB/s   00:00
[root@node2 ~]# mount /dev/cdrom /media          #Mount the yum repository I provided
mount: /dev/sr0 is write-protected, mounting read-only
[root@node2 ~]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
#Install the GlusterFS software
[root@node2 ~]# systemctl start glusterd         #Start the service
[root@node2 ~]# systemctl enable glusterd        #Enable it to start automatically at boot

At this point, the configuration of node2 is complete. Repeat the node2 steps on node3 and node4; I will not write them out again, please configure them yourself (one way to push the files from node1 is sketched below).
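If you prefer, the copy-and-install steps for node3 and node4 can also be driven from node1 in a small loop. This is only a sketch: it assumes root SSH access to both nodes, that the installation media is available on each of them, and that you accept the password prompts for each scp/ssh unless SSH keys are set up:

[root@node1 ~]# for n in node3 node4
> do
>   scp /etc/hosts root@$n:/etc/hosts                                       #Push the name resolution
>   ssh root@$n "rm -rf /etc/yum.repos.d/*; yum clean all"                  #Clear the old yum configuration
>   scp /etc/yum.repos.d/a.repo root@$n:/etc/yum.repos.d/                   #Push the local repository file
>   ssh root@$n "mount /dev/cdrom /media && \
>                yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma && \
>                systemctl start glusterd && systemctl enable glusterd"     #Install and start GlusterFS
> done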

3. Add nodes to the trusted pool (all of the following commands can be executed on any node; I run them on node1 here):

[root@node1 ~]# gluster peer probe node1          #Probing node1 itself, so it reports that this is not needed
peer probe: success. Probe on localhost not needed
[root@node1 ~]# gluster peer probe node2          #Add node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3          #Add node3
peer probe: success.
[root@node1 ~]# gluster peer probe node4          #Add node4
[root@node1 ~]# gluster peer status               #View the cluster status
Number of Peers: 3

Hostname: node2
Uuid: d733aa7c-5078-43b2-9e74-6673f3aaa16e
State: Peer in Cluster (Connected)                #If a node shows Disconnected, check the hosts configuration file

Hostname: node3
Uuid: dc64b6c6-ce2d-41d3-b78b-56f46038ab52
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 926b51e9-4599-4fe8- ad2b-11f53a2ffb5a
State: Peer in Cluster (Connected)
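The members of the trusted pool can also be listed in a more compact form, and a node can be removed again with gluster peer detach <hostname> if necessary:

[root@node1 ~]# gluster pool list                 #List all hosts in the trusted storage pool, including the local node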

4. Create the various types of volumes:

(1) Create a distributed volume:

[root@node1 ~]# gluster volume create dis-volume node1:/e6 node2:/e6 force
#Create a distributed volume. "dis-volume" is the volume name; since no type is specified, a distributed volume is created by default.
volume create: dis-volume: success: please start the volume to access data
[root@node1 ~]# gluster volume info dis-volume          #View information about the volume

Volume Name: dis-volume
Type: Distribute
Volume ID: 2552ea18-b8f4-4a28-b411-a5b1bd168009
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/e6
Brick2: node2:/e6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start dis-volume          #Start the volume
volume start: dis-volume: success

(2) Create a striped volume:

[root@node1 ~]# gluster volume create stripe-volume stripe 2 node1:/d5 node2:/d5 force
#Create a striped volume named "stripe-volume" with a stripe count of 2.
#The specified type is stripe, the value is 2, and two brick servers follow, so a striped volume is created.
volume create: stripe-volume: success: please start the volume to access data
[root@node1 ~]# gluster volume info stripe-volume          #View information about the volume

Volume Name: stripe-volume
Type: Stripe                              #The volume type is stripe
Volume ID: c38107e9-9d92-4f37-a345-92568c2c9e9a
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/d5
Brick2: node2:/d5
Options Reconfigured:
transport .address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start stripe-volume          #Start the volume
volume start: stripe-volume: success

(3) Create a replicated volume:

[root@node1 ~]# gluster volume create rep-volume replica 2 node3:/d5 node4:/d5 force
#The specified type is "replica", the value is 2, and two brick servers follow, so the created volume is a replicated volume.
volume create: rep-volume: success: please start the volume to access data
[root@node1 ~]# gluster volume info rep-volume          #View information about the volume

Volume Name: rep-volume
Type: Replicate #The volume type is replication
Volume ID: 03553b49-c5fa-4a5f-8d66-8c229e617696
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node3:/d5
Brick2: node4:/d5
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start rep-volume          #Start the volume
volume start: rep-volume: success

(4) Create a distributed striped volume:

[root@node1 ~]# gluster volume create dis-stripe stripe 2 node1:/b3 node2:/b3 node3:/b3 node4:/b3 force
#The specified type is stripe, the value is 2, and four brick servers follow, so a distributed striped volume is created.
volume create: dis-stripe: success: please start the volume to access data
[root@node1 ~]# gluster volume info dis-stripe          #View information about the volume

Volume Name: dis-stripe
Type: Distributed-Stripe                  #The volume type is distributed + striped
Volume ID: 059ee6e3-317a-4e47-bf92-47d88e3acf3c
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/b3
Brick2: node2:/b3
Brick3: node3:/b3
Brick4: node4:/b3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start dis-stripe          #Start the volume
volume start: dis-stripe: success

(5) Create a distributed replicated volume:

[root@node1 ~]# gluster volume create dis-rep replica 2 node1:/c4 node2:/c4 node3:/c4 node4:/c4 force
#The specified type is replica, the value is 2, and four brick servers follow (twice the replica count), so a distributed replicated volume is created.
volume create: dis-rep: success: please start the volume to access data
[root@node1 ~]# gluster volume info dis-rep          #View information about the volume

Volume Name: dis-rep
Type: Distributed-Replicate #The volume type is distributed+replicate
Volume ID: 9e702694-92c7-4a3a-88d2-dcf9ddad741c
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/c4
Brick2: node2:/c4
Brick3: node3:/c4
Brick4: node4:/c4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node1 ~]# gluster volume start dis-rep          #Start the volume
volume start: dis-rep: success

OK, all the volumes involved have now been created and can be mounted on the client:

5. Deploy the Gluster client:

(1) Deployment and installation:

[root@client ~]# rm -rf /etc/yum.repos.d/*        #Delete or move away the original yum configuration files
[root@client ~]# yum clean all                    #Clear the yum cache
[root@client ~]# scp root@192.168.1.1:/etc/yum.repos.d/a.repo /etc/yum.repos.d/
#Copy the yum configuration file from node1
root@192.168.1.1's password:
a.repo                         100%   38    31.1KB/s   00:00
[root@client ~]# mount /dev/cdrom /media          #Mount the yum repository I provided
mount: /dev/sr0 is write-protected, mounting read-only
[root@client ~]# yum -y install glusterfs glusterfs-fuse        #Install the GlusterFS software required by the client
[root@client ~]# mkdir -p /test/{dis,stripe,rep,dis_and_stripe,dis_and_rep}        #Create the mount directories
[root@client ~]# ls /test                         #Check that the mount directories were created
dis  dis_and_rep  dis_and_stripe  rep  stripe
[root@client ~]# scp root@192.168.1.1:/etc/hosts /etc/
#The client also needs to resolve the node servers, so copy the hosts file from 192.168.1.1
root@192.168.1.1's password:                      #Enter the password of the remote user
hosts                          100%  230     0.2KB/s   00:00

(2) Mount the Gluster file system:

[root@client ~]# mount -t glusterfs node1:dis-volume /test/dis
[root@client ~]# mount -t glusterfs node2:stripe-volume /test/stripe
[root@client ~]# mount -t glusterfs node3:rep-volume /test/rep
[root@client ~]# mount -t glusterfs node4:dis-stripe /test/dis_and_stripe
[root@client ~]# mount -t glusterfs node1:dis-rep /test/dis_and_rep
#If mounting fails, check the hosts file resolution. When mounting, any host that serves the logical storage volume can be specified.
#Because all GlusterFS configuration information is shared among the nodes, this also avoids the problem of the volumes
#becoming unusable just because node1 fails.
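To see how a distributed volume spreads data, you can write a few test files into the mounted directory and then check which brick each file landed on (a rough sketch; the demo file names are hypothetical and dd is only used to generate test data):

[root@client ~]# for i in 1 2 3 4 5; do dd if=/dev/zero of=/test/dis/demo$i.log bs=1M count=40; done    #Create five 40 MB test files
[root@client ~]# ls /test/dis                      #All five files are visible through the mount point
[root@node1 ~]# ls /e6                             #On node1, only part of the files appear on the brick...
[root@node2 ~]# ls /e6                             #...and the rest appear on node2's brick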

(3) Modify the fstab configuration file to automatically mount at boot:

[root@client ~]# vim /etc/fstab          #Append the following lines at the end of the file
node2:stripe-volume /test/stripe glusterfs defaults,_netdev 0 0
node3:rep-volume /test/rep glusterfs defaults,_netdev 0 0
node4:dis-stripe /test /dis_and_stripe glusterfs defaults,_netdev 0 0
node1:dis-rep /test/dis_and_rep glusterfs defaults,_netdev 0 0
node1:dis-volume /test/dis glusterfs defaults,_netdev 0 0

When setting up automatic mounting, the relevant directories must already have been mounted manually once before automatic mounting will work. Although any node can be specified when mounting manually, it is recommended that the node written into /etc/fstab be the same node that was specified during the manual mount (see the quick check below).
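A quick way to check the new fstab entries without rebooting (assuming the volumes are currently unmounted; otherwise unmount them first):

[root@client ~]# umount /test/dis /test/stripe /test/rep /test/dis_and_stripe /test/dis_and_rep    #Unmount the volumes
[root@client ~]# mount -a                          #Re-mount everything listed in /etc/fstab; errors here point to bad entries
[root@client ~]# df -Th | grep test                #Verify that all five volumes are mounted with type fuse.glusterfs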

(3) GlusterFS maintenance commands:

[root@node1 ~]# gluster volume list               #View the list of volumes
dis-rep
dis-stripe
dis-volume
rep-volume
stripe-volume
[root@node1 ~]# gluster volume info               #View information about all volumes
Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: 9e702694-92c7-4a3a-88d2-dcf9ddad741c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/c4
Brick2: node2:/c4
Brick3: node3:/c4
Brick4: node4:/c4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
..............                                    #Part of the output omitted
[root@node1 ~]# gluster volume status             #View the status of the volumes
Status of volume: dis-rep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/c4                             49155     0          Y       11838
Brick node2:/c4                             49155     0          Y       12397
Brick node3:/c4                             49154     0          Y       12707
Brick node4:/c4                             49154     0          Y       12978
Self-heal Daemon on localhost               N/A       N/A        Y       11858
Self-heal Daemon on node4                   N/A       N/A        Y       12998
Self-heal Daemon on node2                   N/A       N/A        Y       12417
Self-heal Daemon on node3                   N/A       N/A        Y       12728
............................

[root@node1 ~]# gluster volume stop dis-stripe          #Stop a volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dis-stripe: success
[root@node1 ~]# gluster volume delete dis-stripe        #Delete a volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: dis-stripe: success
[root@node1 ~]# gluster volume set dis-rep auth.allow 192.168.1.*,10.1.1.*
#Allow only clients from specific network segments to access the volume dis-rep
volume set: success
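Two more maintenance operations that come up frequently are expanding a volume and rebalancing the data afterwards. A rough sketch (assuming a new brick directory /f7 has been prepared on node3 and node4; for a distributed replicated volume, bricks must be added in multiples of the replica count):

[root@node1 ~]# gluster volume add-brick dis-rep node3:/f7 node4:/f7 force     #Add a new pair of bricks to the dis-rep volume
[root@node1 ~]# gluster volume rebalance dis-rep start                         #Spread the existing data onto the new bricks
[root@node1 ~]# gluster volume rebalance dis-rep status                        #Watch the rebalance progress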
