Hopefully this is a simple question. We are deploying a server that will be used as a data warehouse. I know the best practice for RAID 5 is six disks per RAID 5 group. However, our plan is to use RAID 10 (for both performance and safety). We have a total of 14 disks (16, actually, but two are reserved for the operating system). Keeping in mind that performance is a very big concern here, which is better: several smaller RAID 1+0 arrays, or one large RAID 10? One big RAID 10 was our initial plan, but I would like to know if anyone has considerations we haven't thought of.
Please note: this system is designed around RAID 10, so losing half of the raw storage capacity is not a problem. Sorry I didn't mention that at first. What we care about more is whether to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0 arrays striped together with LVM. I know the best practice for the higher RAID levels is to never use more than 6 disks in an array.
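For reference, if you did go with several smaller RAID 1+0 logical drives, striping them with LVM would look roughly like this. This is a sketch only: the device names /dev/sdb, /dev/sdc, /dev/sdd and the volume/LV names are placeholders, and the stripe size should be matched to your controller's stripe size.

```shell
# Assumes three smaller RAID 1+0 logical drives exposed by the controller
# as /dev/sdb, /dev/sdc and /dev/sdd (placeholder names -- adjust for your system).
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate dwvg /dev/sdb /dev/sdc /dev/sdd

# -i 3 stripes the logical volume across all three physical volumes;
# -I 256 sets a 256 KiB stripe size.
lvcreate -i 3 -I 256 -l 100%FREE -n dwlv dwvg
```

Note that this adds an LVM striping layer on top of the controller's own striping, which is one of the trade-offs the answers below weigh against a single large array.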
See this discussion for details on the disk layout of a RAID 1+0 setup on an HP ProLiant server:
6 Disk Raid 1+0
A Smart Array controller logical drive configured as RAID 1+0 is a stripe across mirrored pairs. Depending on how the drive cages are populated and which controller you are using, the disks may be paired across controller channels.
For example, in a 4-disk setup:
Logical Drive: 1
Size: 558.7 GB
Fault Tolerance: RAID 1+0
Logical Drive Label: AB3E858350123456789ABCDE6EEF
Mirror Group 0:
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
Mirror Group 1:
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 300 GB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 300 GB, OK)
Here, physicaldrive 1I:1:1 is paired with physicaldrive 1I:1:3, and physicaldrive 1I:1:2 is paired with physicaldrive 1I:1:4.
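A layout like the one above can be created and inspected from the command line with HP's hpacucli tool. This is a sketch: the controller slot number and the bay list are assumptions that depend on your hardware.

```shell
# Create a RAID 1+0 logical drive from four specific bays
# (slot=0 and the drive list are placeholders for your hardware).
hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0

# Show the resulting logical drive, including its mirror groups.
hpacucli ctrl slot=0 ld 1 show
```

On newer servers the same syntax is available through the ssacli successor tool.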
With this many disks, there is no disadvantage to leaving them all in one logical drive. You get the benefit of more (MOAR!) spindles for sequential workloads and increased random-workload capacity. I would recommend adjusting the controller cache to favor writes (for lower latency), and possibly making some choices at the operating-system level around filesystem (XFS!), the I/O elevator (deadline), and block-device tuning.
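The OS-level tuning mentioned above might look something like the following sketch. The device name /dev/sdb and mount point /data are placeholders, and the elevator switch applies to kernels that still offer the legacy deadline scheduler.

```shell
# Format the logical drive with XFS (placeholder device name).
mkfs.xfs -f /dev/sdb

# Switch the block device's I/O elevator to deadline.
echo deadline > /sys/block/sdb/queue/scheduler

# Raise read-ahead for large sequential scans (value is in 512-byte sectors).
blockdev --setra 8192 /dev/sdb

# Mount without atime updates; nobarrier is only sensible if the controller
# has a battery- or flash-backed write cache.
mount -o noatime,nobarrier /dev/sdb /data
```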
Which operating system distribution will this run on?