RAID1 – mdadm mirror – not performing parallel reads as expected?

We have a three-way RAID 1 mirror driven by mdadm. I think I’ve read that mdadm should take multiple simultaneous read requests and distribute them to different drives in the mirror (parallelized reads) to improve read performance, but when we test and watch the output of iostat -xm 1, it shows only /dev/sda being used, even though that device's I/O is being saturated by reads from 5 different md devices.

Am I misunderstanding something? Does mdadm need to be configured differently? Does our version (CentOS 6.7) not support this? I am not sure why it behaves like this.

Benchmark setup – run the following commands at the same time:

dd if=/dev/md2 bs=1048576 of=/dev/null count=25000
dd if=/dev/md3 bs=1048576 of=/dev/null count=25000
dd if=/dev/md4 bs=1048576 of=/dev/null count=25000
dd if=/dev/md5 bs=1048576 of=/dev/null count=25000
dd if=/dev/md6 bs=1048576 of=/dev/null count=25000
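
For reference, one way to launch all five of those reads concurrently from a single shell is a small loop; this is just a convenience equivalent to running the commands above in parallel:

for md in md2 md3 md4 md5 md6; do
    dd if=/dev/$md bs=1048576 of=/dev/null count=25000 &   # background each read
done
wait   # wait for all five streams to finish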

While those are running, observe the output of iostat -xm 1 (sample output included below; the mirror consists of sda, sdb and sdc):

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 100669.00 0.00 10710.00 0.00 435.01 0.00 83.18 33.28 3.11 0.09 100.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md1 0.00 0.00 19872.00 0.00 77.62 0.00 8.00 0.00 0.00 0.00 0.00
md2 0.00 0.00 18272.00 0.00 71.38 0.00 8.00 0.00 0.00 0.00 0.00
md5 0.00 0.00 18272.00 0.00 71.38 0.00 8.00 0.00 0.00 0.00 0.00
md7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md6 0.00 0.00 18240.00 0.00 71.25 0.00 8.00 0.00 0.00 0.00 0.00
md4 0.00 0.00 18208.00 0.00 71.12 0.00 8.00 0.00 0.00 0.00 0.00
md3 0.00 0.00 18528.00 0.00 72.38 0.00 8.00 0.00 0.00 0.00 0.00
md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Re-run the test, but change it so that all five reads target the same MD device (e.g. /dev/md2), and you should see the reads being distributed across the mirror members; a sketch of that re-run is below.
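
A sketch of that re-run, assuming /dev/md2 is large enough to hold five non-overlapping 25 GB regions; the skip= offsets keep each stream on a distinct part of the device so they are genuinely independent reads:

for i in 0 1 2 3 4; do
    # each stream reads its own 25 GB region of the same array
    dd if=/dev/md2 bs=1048576 of=/dev/null count=25000 skip=$((i * 25000)) &
done
wait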

A single read operation only reads from one drive in the mirror. It starts from the first disk assigned to the mirror, which in this case looks like /dev/sda. Since you have configured 5 MD devices and are performing a single read from each of them, they all end up hitting /dev/sda.
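
To confirm which members each array uses (and which disk is listed first), the standard tools are /proc/mdstat and mdadm --detail; /dev/md2 here is just an example:

cat /proc/mdstat                 # overview of all arrays and their member disks
mdadm --detail /dev/md2          # per-array view, including member order and state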

I suggest not configuring multiple MD devices; just use a single array that spans the entire SSD.
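
A minimal sketch of building such an array with mdadm; the array name and member devices are placeholders that must match your hardware, and creating an array destroys existing data on the listed disks:

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc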

Alternatively, change your test method so that it forces work onto several different drives. Take a look at bonnie++; it works well for this kind of benchmarking.
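
If you do try bonnie++, a minimal invocation looks something like the following; the test directory and file size are placeholders, and the file size should be well above your RAM so the page cache doesn't mask disk performance:

bonnie++ -d /mnt/test -s 16g -u nobody   # -d: test directory, -s: total file size, -u: user to run as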

