IP | Hostname | Function
10.45.129.113/24 extranet, 172.16.1.10/24 intranet | rac1 | RAC node 1
10.45.129.114/24 extranet, 172.16.1.20/24 intranet | rac2 | RAC node 2
172.16.1.30/24 intranet | iscsi.com (the name format is important!) | iSCSI shared storage
Release version | Red Hat Enterprise Linux Server release 7.5 (Maipo)
Kernel | 4.1.12-112.16.4.el7uek.x86_64
Hard disk | rac1 node: SATA 20G; rac2 node: SATA 20G; iscsi node: SATA 20G + 30G (20G for the system, 30G for shared storage)
Memory | rac1: 2G; rac2: 2G; iscsi: 1G
CPU | dual-core on all nodes
Installation ISO | OracleLinux-R7-U5-Server-x86_64-dvd.iso
Packages used by the iSCSI node
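The original listed the packages in a screenshot. Based on the tools used later in this walkthrough, a hedged guess at the installs:
[root@iscsi ~]# yum install -y targetcli              # server side: provides the targetcli shell used below
[root@rac1 ~]# yum install -y iscsi-initiator-utils   # client side: provides iscsiadm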
Configure the hostname-to-IP mapping in /etc/hosts
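Based on the address plan in the table above, the /etc/hosts entries on every node would look roughly like this (using the intranet addresses):
172.16.1.10   rac1
172.16.1.20   rac2
172.16.1.30   iscsi.com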
Send it to all the other machines:
scp /etc/hosts root@rac2:/etc/
scp /etc/hosts root@iscsi.com:/etc/
Set up passwordless (key-based) login
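ssh-copy-id assumes a key pair already exists on rac1; if it does not, generate one first (press Enter through the prompts):
[root@rac1 ~]# ssh-keygen -t rsa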
Send the key to the other nodes; the user's password must be entered the first time:
[root@rac1 ~]# ssh-copy-id root@rac2
[root@rac1 ~]# ssh-copy-id root@iscsi.com
Use the rac1 node as a springboard so that tasks can be run on all nodes in batch (this is ad hoc and will no longer work after a restart).
Turn off the firewall on all nodes:
[root@rac1 ~]# for a in rac1 rac2 iscsi.com; do ssh $a "systemctl stop firewalld"; done
[root@rac1 ~]# for a in rac1 rac2 iscsi.com; do ssh $a "systemctl disable firewalld"; done
[root@rac1 ~]# for a in rac1 rac2 iscsi.com; do ssh $a 'iptables -F'; done
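The same loop pattern can be used to confirm the result; a quick check, assuming the node names from the table above:
[root@rac1 ~]# for a in rac1 rac2 iscsi.com; do ssh $a "systemctl is-active firewalld"; done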
Configure the time service
Use the rac1 node as the time server, so that all nodes take their time from it.
Note: in /etc/ntp.conf, comment out the original upstream time-server entries and add the configuration below.
server 127.127.1.0
fudge 127.127.1.0 stratum 10
restrict 172.16.1.0 mask 255.255.255.0 nomodify notrap
Configure the NTP client on the other nodes:
server 172.16.1.10
restrict 172.16.1.10 nomodify notrap noquery
Note: 172.16.1.10 is the intranet address of the rac1 node.
[root@rac1 ~]# service ntpd start
(run the same command on each of the other nodes)
Synchronize the time on all nodes.
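One way to check the synchronization from a client, as a sketch: ntpq shows the configured peer, and ntpdate can force a one-shot sync (but only while ntpd is stopped, since both need UDP port 123):
[root@rac2 ~]# ntpq -p                # rac1 (172.16.1.10) should appear as a peer
[root@rac2 ~]# ntpdate 172.16.1.10    # optional one-shot sync; run with ntpd stopped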
Build the iSCSI network storage (performed on the iscsi.com node)
Partition the disk dedicated to shared storage
Note: the sdb disk is dedicated to storing data.
[root@iscsi ~]# fdisk /dev/sdb
In fdisk, enter n, then p, press Enter three times to accept the defaults, p to review the partition table, and w to save it. Partitioning is complete.
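A quick check that the new partition exists before handing it to targetcli:
[root@iscsi ~]# lsblk /dev/sdb        # should now show an sdb1 partition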
Back up the configuration files
[root@iscsi ~]# cd /etc/iscsi/
[root@iscsi iscsi]# cp initiatorname.iscsi{,.bak}
[root@iscsi iscsi]# cp iscsid.conf{,.bak}
Run the targetcli command to enter its interactive CLI (a consolidated sketch of the whole session follows step 9 below):
Create a block backstore for the iSCSI storage.
Create the iSCSI target.
Add additional portals with a different IP/port (optional).
Note: specifying ip_address=x.x.x.x in the create command binds the portal to that IP rather than the default 0.0.0.0.
6. Create an access control list (ACL) for the client machines: obtain each client's iSCSI InitiatorName and map it to the target. Once done, that client can connect to the iSCSI target. (Do this for every node that will use the iSCSI service.)
7. Create a LUN (logical unit number).
8. After creation, verify that the target configuration is correct.
9. Save and exit
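The original showed the individual targetcli steps as screenshots. A consolidated sketch of the session: the backstore name block1 is an assumption, and each client's InitiatorName must be read from its own /etc/iscsi/initiatorname.iscsi:
[root@iscsi iscsi]# targetcli
/> /backstores/block create name=block1 dev=/dev/sdb1
/> /iscsi create iqn.2019-10.com.iscsi:target1
/> /iscsi/iqn.2019-10.com.iscsi:target1/tpg1/portals create ip_address=172.16.1.30 ip_port=3260
/> /iscsi/iqn.2019-10.com.iscsi:target1/tpg1/acls create <rac1-InitiatorName>
/> /iscsi/iqn.2019-10.com.iscsi:target1/tpg1/acls create <rac2-InitiatorName>
/> /iscsi/iqn.2019-10.com.iscsi:target1/tpg1/luns create /backstores/block/block1
/> ls
/> saveconfig
/> exit
Note that on RHEL 7, creating the target may automatically add a default 0.0.0.0:3260 portal; to bind to a specific IP as in step 5's note, that default portal may need to be deleted first (portals delete ip_address=0.0.0.0 ip_port=3260).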
10. Start the target service
[root@iscsi iscsi]# systemctl start target
[root@iscsi iscsi]# systemctl enable target
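A quick check that the target is now listening on the standard iSCSI port:
[root@iscsi iscsi]# ss -tln | grep 3260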
11. Open the firewall port (if needed)
firewall-cmd --add-port=3260/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports
12. On the client nodes, check whether the server's iSCSI target can be discovered
[root@rac1 ~]# iscsiadm -m discovery -t st -p 172.16.1.30
13. After the target is found, log in to it
iscsiadm -m node -T iqn.2019-10.com.iscsi:target1 -p 172.16.1.30 -l
Note 1: the login must be executed locally on each machine; batch login over ssh did not work here, pending verification!
Note 2: -T specifies the target name and -l means log in. In node mode the command logs in to the specified record; in discovery mode it logs in to all discovered targets.
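For reference, the counterpart forms of the command, as a sketch:
iscsiadm -m node -l                                                   # no -T: log in to every discovered record
iscsiadm -m node -T iqn.2019-10.com.iscsi:target1 -p 172.16.1.30 -u   # -u logs out again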
Check that the same disk now appears on every node
To find the device name of the attached iSCSI disk, check as shown below:
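The original showed the output as a figure; the check itself can be done like this:
[root@rac1 ~]# lsblk                                    # the iSCSI disk shows up as a new device, e.g. sdb
[root@rac1 ~]# iscsiadm -m session -P 3 | grep "Attached scsi disk"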
Note: if the disk is mounted on two machines at the same time, the machines are not kept in sync in real time after mounting; files written on node A will not be visible on node B, and vice versa.