Why is RAID 10 slower than RAID 1?

We set up a new Dell 2950 with a PERC 6/e and 14 external 73GB 15K SAS drives. With the drives configured as hardware RAID 10 (striped across 7 mirrored pairs), our Oracle 11g database jobs take 3 hours to run. The database is about 26GB. The same job running on just two drives in RAID 1 takes only 1 hour. The operating system is Windows 2008 R2.


Before we change the RAID level on the production box (which means quite a long downtime), does anyone know why we are seeing this strange result, and whether there is a better way to fix it?

Additional information

The PERC 6/e is running the latest firmware, and the cache battery checks out OK.

Finally, the real story

After talking with the DBA, I blushed. It turns out the "RAID 1" setup is actually 7 separate RAID 1 volumes, each with two drives. Data tables and indexes are allocated across the volumes to minimize contention. Apparently a good DBA can get more performance out of 14 drives than a RAID 10 controller can, regardless of file access patterns. Some SANs claim to migrate files intelligently to improve performance, but if there's ever a bake-off, my money is on our DBA!
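A toy sketch of why this layout can win (my own illustration, not from the thread): on a shared stripe set, every spindle services every concurrent stream, so two sequential workloads interleave into random-looking access; on dedicated mirror pairs, each pair sees one purely sequential stream. Counting head repositionings makes the difference visible:

```python
def seeks(requests):
    """Count head repositionings for a sequence of block addresses:
    any access that is not contiguous with the previous one forces a seek."""
    moves = 0
    pos = None
    for block in requests:
        if pos is not None and block != pos + 1:
            moves += 1
        pos = block
    return moves

# Two sequential streams, e.g. a table scan and an index scan.
table = list(range(0, 100))
index = list(range(1000, 1100))

# Dedicated volumes: each mirror pair serves one sequential stream.
dedicated = seeks(table) + seeks(index)

# Shared array: the streams interleave on every spindle.
interleaved = [b for pair in zip(table, index) for b in pair]
shared = seeks(interleaved)

print(dedicated, shared)  # dedicated stays at 0; shared seeks on nearly every request
```

This ignores controller caching, elevator scheduling, and queue depth, so it is only a cartoon of the contention argument, but it matches the observed result: isolating workloads onto their own spindles can beat naive striping.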

I think user71281's point is that your RAID controller (or driver) is misbehaving. With a correctly working RAID controller (or driver), a RAID 10 setup should never be slower than a simple RAID 1.

Either your RAID solution lets you set up a very inefficient RAID 10 array, or you have found a bug.
Does performance improve with an 8th pair? Or when you reduce the setup to 4 pairs? That last option might mean you have to upgrade to 146GB disks.

But first I would check for a firmware update, and then check how much RAM is on the RAID card. It hasn't disabled its caching because of a dead BBU (battery backup unit), has it?

