MDADM RAID5 size

I currently have 9x 1TB disks in RAID5, which should give me 8TB of storage space. However, that is not what I see at all. This is after migrating from RAID6 to RAID5 and running the necessary commands to resize the filesystem.
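(For reference, a RAID6-to-RAID5 migration plus filesystem resize typically looks something like the sketch below. This is only an illustration assuming an ext2/3/4 filesystem on /dev/md0, not the exact commands that were run here.)

mdadm --grow /dev/md0 --level=raid5 --raid-devices=9   # convert the array to RAID5 in place, keeping all 9 members
resize2fs /dev/md0                                     # after the reshape completes, grow the ext filesystem to fill the array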

mdadm --detail /dev/md0

/dev/md0:
Version: 1.2
Creation Time: Sun Apr 8 18:20:33 2012
Raid Level: raid5
Array Size: 7804669952 (7443.11 GiB 7991.98 GB)
Used Dev Size: 975583744 (930.39 GiB 999.00 GB)
Raid Devices: 9
Total Devices: 9
Persistence: Superblock is persistent

Update Time: Tue Dec 10 10:15:08 2013
State: clean
Active Devices: 9
Working Devices: 9
Failed Devices: 0
Spare Devices: 0

Layout: left-symmetric
Chunk Size: 512K

Name: ares:0 (local to host ares)
UUID: 97b392d0:28dc5cc5:29ca9911:24cefb6b
Events: 995494
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
5 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
9 8 113 5 active sync /dev/sdh1
11 8 97 6 active sync /dev/sdg1
6 8 145 7 active sync /dev/sdj1
10 8 129 8 active sync /dev/sdi1

df -h

Filesystem Size Used Avail Use% Mounted on 
/dev/md0 7.2T 2.1T 4.8T 31% /mnt/raid

Is this normal and what should I expect, or am I doing something wrong?

This is the age-old binary vs. decimal kilo/mega/giga/tera problem.

Note this line:

Array Size: 7804669952 (7443.11 GiB 7991.98 GB)

So while your array size is 7991.98 GB using decimal GB (almost exactly 8 x 1TB), in binary GiB it is 7443.11 GiB. Divide by 2^10 again to get 7.27 TiB, then lose roughly 1.5% to filesystem overhead, and that brings us to about 7.16 TiB, or 7.2 with rounding, which is exactly what df reports.
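You can check this arithmetic directly from the numbers mdadm prints: the Array Size figure is in 1 KiB units, which is consistent with the GiB/GB values shown in parentheses. A quick sketch using awk, with the values taken from the output above:

awk 'BEGIN {
    kib = 7804669952                                 # Array Size from mdadm --detail, in KiB
    printf "decimal GB:  %.2f\n", kib * 1024 / 1e9   # 7991.98 GB, i.e. almost exactly 8 "marketing" TB
    printf "binary GiB:  %.2f\n", kib / 2^20         # 7443.11 GiB
    printf "binary TiB:  %.2f\n", kib / 2^30         # about 7.27 TiB
    printf "minus ~1.5%% FS overhead: %.2f TiB\n", kib / 2^30 * 0.985   # about 7.16 TiB, which df rounds to 7.2T
}'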

For a more detailed breakdown of a similar array, including where the “1.5%” figure comes from, see my answer here.
