Although I have browsed some similar questions here, I think every situation is different and may require a completely different solution.
What I have now:
> Linux software RAID5 on 4x 4TB enterprise-class hard drives
> LVM with several volumes on top
> The most important volume: 10TB storage, XFS
> Debian Wheezy with default parameters for all settings
> The volume is mounted with the options 'noatime,nodiratime,allocsize=2m'
> About 8GB of RAM is free and used for caching; the quad-core Intel CPU with HT is, I would guess, mostly idle
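For context, the stack described above can be verified with standard tools. A minimal sketch, assuming the volume is mounted at /volume (an assumption based on the paths in this question):

cat /proc/mdstat                # software RAID state and member disks
lvs                             # LVM logical volumes sitting on top of the array
xfs_info /volume                # XFS geometry (AG count, stripe unit/width) of the 10TB volume
grep ' /volume ' /proc/mounts   # effective mount options (noatime, nodiratime, allocsize=2m)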
The volume mainly stores about 10 million files (up to 20M in the future), mostly between 100K and 2M. Here is a more precise distribution: each line gives a file-size bucket (upper bound in bytes, powers of two) followed by the number of files in that bucket:
4 6162
8 32
32 55
64 11577
128 7700
256 7610
512 555
1024 5876
2048 1841
4096 12251
8192 4981
16384 8255
32768 20068
65536 35464
131072 591115
262144 3411530
524288 4818746
1048576 413779
2097152 20333
4194304 72
8388608 43
16777216 21
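For reference, a distribution like this can be regenerated with GNU find and awk. A sketch that counts files into power-of-two byte buckets (the path is the one used throughout this question):

find /volume/data/customer -type f -printf '%s\n' |
  awk '{ b = 4; while (b < $1) b *= 2; hist[b]++ }   # smallest power-of-two bucket that fits the file
       END { for (b in hist) print b, hist[b] }' | sort -n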
Most of these files are stored at the 7th directory level on the volume, like:
/volume/data/customer/year/month/day/variant/file
These folders usually contain ~1K files, sometimes fewer, and in rare cases up to 5-10K.
I/O is not that heavy, but when I push it a bit harder I run into hangs. For example:
> The application performing most of the I/O is NGINX, used for both reads and writes
> There are some random reads of 1-2MB/s TOTAL
> I have some folders where data is continuously written at 1-2MB/s TOTAL, and all files older than 1h should be periodically deleted from them
The following cron, run once an hour, pauses the entire server for a few seconds and can even break the service (the writing of new data) when the I/O it generates causes timeouts:
find /volume/data/customer/ -type f -iname "*.ext" -mmin +60 -delete
find /volume/data/customer -type d -empty -delete
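While these commands run, the stall shows up as saturated disks and a growing dirty-page backlog. One way to watch it, assuming the sysstat package is installed for iostat (an assumption; any similar tool works):

iostat -x 1                               # %util and await on the md/dm devices during the find run
vmstat 1                                  # the 'wa' column shows time spent waiting on I/O
grep -E 'Dirty|Writeback' /proc/meminfo   # dirty cache still waiting to reach the disks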
When writing files in the size range above, I have also observed that write speed is slow (a few MB/s). When writing larger files, the write runs until the write cache fills up (obviously), then the speed drops and the server starts to hang.
Now I am looking for a way to optimize my storage performance, because I am sure the defaults are not optimal and many things could be improved.
Although LVM is not that useful to me, I will not drop it unless that brings significant gains: removing LVM would mean reinstalling the whole server, which, while possible, I would rather avoid.
I have read a lot about XFS vs. ReiserFS vs. Ext4, but I am quite confused.
My other server has a much smaller 2TB RAID1 volume with exactly the same settings, and under a fairly heavy workload it performs flawlessly.
Any ideas?
How can I debug/experiment?
Thank you.
First of all, XFS is the right choice for this kind of scenario: with XFS it is almost impossible to run out of inodes.
To improve delete performance, please try the following:
> use the deadline I/O scheduler instead of the (default) cfq
> use logbsize=256k,allocsize=64k as mount options (in addition to nodiratime,noatime); see the sketch below
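A sketch of how both changes could be applied; the sdX device names and the fstab device/mountpoint are assumptions, so adjust them to the disks actually backing the array:

echo deadline > /sys/block/sda/queue/scheduler   # per member disk; repeat for sdb, sdc, sdd (not persistent across reboots)
# hypothetical /etc/fstab line for the volume with the suggested options:
/dev/mapper/vg-data  /volume  xfs  noatime,nodiratime,logbsize=256k,allocsize=64k  0  0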
To reduce the impact of the deletions on other system activity, try running the cleanup script with ionice -c 2 -n 7.
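For example, the hourly find commands from the question, simply wrapped in ionice (same commands as above, only the I/O priority changes):

ionice -c 2 -n 7 find /volume/data/customer/ -type f -iname "*.ext" -mmin +60 -delete
ionice -c 2 -n 7 find /volume/data/customer -type d -empty -delete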
Report your results!