HP P410 RAID Samsung 830 SSD Debian 6.0 – What performance is expected?

I am renting two dedicated servers from a hosting company. Here are the specifications:

server1:
HP ProLiant DL165 G7
2x AMD Opteron 6164 HE 12-Core
40 GB RAM
HP Smart Array P410 RAID controller
2x Samsung 830 256 GB SSD

server2:
HP ProLiant DL120 G7
Intel Xeon E3-1270
16 GB RAM
HP Smart Array P410 RAID controller
2x Samsung 830 128 GB SSD

The installation procedures on the two servers are the same:

> Debian 6.0.
> No swap.
> The file system is ext3, with no special mount options (only rw). I'm fairly sure the partitions are aligned correctly.
> The noop scheduler is used.
> RAID 1.
> The RAID controller has a BBU.
> Drive Write Cache is enabled in the RAID controller.
> The read/write cache ratio on both RAID controllers is 25%/75%.
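For reference, the controller settings above can be inspected and adjusted from the OS with HP's `hpacucli` tool. This is only a sketch: the slot number (0 here) must match your system, and exact option names can vary between hpacucli versions.

```shell
# Show controller, cache, and battery (BBU) status:
hpacucli ctrl all show config detail

# Set the read/write cache ratio to 25%/75% on the controller in slot 0:
hpacucli ctrl slot=0 modify cacheratio=25/75

# Enable the physical drive write cache (only sensible with a working BBU):
hpacucli ctrl slot=0 modify drivewritecache=enable
```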

For now I am focusing on sequential read/write and trying to get the most out of the disks in these servers. These are the speeds I currently see:

Writes:
server1:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.05089 s, 213 MB/s

server2:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09768 s, 262 MB/s

Reads:
server1:~# echo 3 > /proc/sys/vm/drop_caches
server1:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.14051 s, 259 MB/s

server2:~# echo 3 > /proc/sys/vm/drop_caches
server2:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.33901 s, 322 MB/s

First of all, can anyone explain the huge difference between these servers?

Second, should I expect more? Reading about the Samsung 830 SSD, I have seen write speeds of over 300 MB/s and read speeds of over 500 MB/s using the same benchmark method (dd), but with no RAID controller involved. Is the RAID penalty really that high, or is this a configuration problem?

Update:

I have now run some tests with iozone instead of dd, and the results are much more meaningful. There is not much difference between the two servers (server1 is now slightly faster), and I'm getting close to the rated speeds of these drives. So I guess I shouldn't have trusted what dd told me!

I will start with noop, with nr_requests and read_ahead_kb at their default values (128 and 128). Setting read_ahead_kb higher seems to have a big impact on random read performance on server2. I hope to revisit this once the servers have been in production for a while and I understand the usage patterns better.
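The scheduler and queue settings mentioned above live in sysfs and can be checked or reset per block device. A sketch, assuming the array is presented as /dev/sda (adjust the device name to match your system):

```shell
# Show which I/O scheduler is active (the one in brackets):
cat /sys/block/sda/queue/scheduler

# Select the noop elevator:
echo noop > /sys/block/sda/queue/scheduler

# Set read-ahead and queue depth to the defaults mentioned above:
echo 128 > /sys/block/sda/queue/read_ahead_kb
echo 128 > /sys/block/sda/queue/nr_requests
```

Note that these settings do not survive a reboot; to make them persistent they need to go into an init script or udev rule.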

There is a lot to address here.

If you want more performance, in order from greatest to least impact:

> Add another pair of disks and expand to RAID 1+0. This will provide the greatest benefit.
> Tune the file system (noatime, journal mode, disabling write barriers, etc.) and/or move to a higher-performance file system such as XFS or even ext4.
> Go back to the deadline elevator. It performs better than the noop scheduler under real workloads.
> Upgrade the firmware of the HP Smart Array P410 controller (and the server).
> Consider some more advanced tuning techniques.
> Improve your benchmarking technique. dd is not a suitable way to measure I/O performance. Try purpose-built tools such as iozone or bonnie++, and tailor them to your expected read/write patterns.
> For pure sequential reads/writes, regular SAS drives are not a bad choice either...
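As a sketch of the benchmarking point, an iozone run that covers sequential write/rewrite, sequential read, and random read/write with 1 MB records. The file size and path here are illustrative; in practice -s should exceed installed RAM so the page cache cannot mask the disks:

```shell
# -e includes flush (fsync) in the timing, -I uses O_DIRECT to bypass
# the page cache, -i selects the tests (0 = write/rewrite, 1 = read,
# 2 = random read/write), -r is the record size, -s the file size:
iozone -e -I -i 0 -i 1 -i 2 -r 1m -s 4g -f /data/iozone.tmp
```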

As far as compatibility is concerned, I often use non-HP disks with HP RAID controllers and servers. Sometimes things don't work, but if your SSDs are detected, report proper temperatures, and show no errors in the HP Array Configuration Utility, then you are fine.

You are running the HP Management Agents on these servers, aren't you?

Edit:

I ran the same test on one of my systems, with the same controller and four SATA SSDs, with XFS tuning, elevator changes, etc.


[root@Kitteh /data/tmp]# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.1985 s, 336 MB/s

[root@Kitteh /data/tmp]# echo 3 > /proc/sys/vm/drop_caches
[root@Kitteh /data/tmp]# ll
total 1048576
-rw-r--r-- 1 root root 1073741824 Sep 24 14:01 tempfile

[root@Kitteh /data/tmp]# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.60432 s, 669 MB/s
