[HOWTO] How to increase raid rebuild speed
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
Well, a brand-new NAS with no data? In my opinion it's a crazy idea not to start with the intended storage layout, rather than using Expand Capacity (Online RAID Capacity Expansion) at this stage.
In any RAID operation, all storage blocks are touched - be it a resync, reshape, or similar - it does not matter whether the RAID storage blocks are occupied or empty. The only exception is a resync with bitmaps enabled, where only changed blocks are dealt with.
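Whether a given array actually has a write-intent bitmap can be read straight out of /proc/mdstat. A minimal sketch (the parsing is an assumption based on the usual mdstat layout, and the helper name is made up here):

```shell
# Sketch: list md arrays that have a write-intent bitmap, by scanning an
# mdstat-style file. Pass /proc/mdstat on the NAS; a saved copy works too.
list_bitmapped_arrays() {
    # Remember the last "mdX :" device line; print it when a bitmap line follows.
    awk '/^md[0-9]+ :/ { dev = $1 } /bitmap:/ { print dev }' "$1"
}

# Dry-run demo against a small sample in a temp file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
md0 : active raid6 sda3[0] sdb3[1]
      23432697216 blocks super 1.0 level 6, 64k chunk
md9 : active raid1 sda1[0] sdb1[1]
      530048 blocks [8/8] [UUUUUUUU]
      bitmap: 1/65 pages [4KB], 4KB chunk
EOF
list_bitmapped_arrays "$tmp"   # prints: md9
rm -f "$tmp"
```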
- Don
- Guru
- Posts: 12289
- Joined: Thu Jan 03, 2008 4:56 am
- Location: Long Island, New York
Re: [HOWTO] How to increase raid rebuild speed
Goal - it's possible you have a bad drive. This will increase rebuild times because of the increased error recovery.
Use the forum search feature before posting.
Use RAID and external backups. RAID will protect you from disk failure, keep your system running, and data accessible while the disk is replaced, and the RAID rebuilt. Backups will allow you to recover data that is lost or corrupted, or from system failure. One does not replace the other.
NAS: TVS-882BR | F/W: 5.0.1.2346 | 40GB | 2 x 1TB M.2 SATA RAID 1 (System/VMs) | 3 x 1TB M.2 NMVe QM2-4P-384A RAID 5 (cache) | 5 x 14TB Exos HDD RAID 6 (Data) | 1 x Blu-ray
NAS: TVS-h674 | F/W: 5.0.1.2376 | 16GB | 3 x 18TB RAID 5
Apps: DNSMasq, PLEX, iDrive, QVPN, QLMS, MP3fs, HBS3, Entware, DLstation, VS, +
-
- Experience counts
- Posts: 1560
- Joined: Mon Feb 07, 2011 5:40 am
- Location: Bratislava, Slovakia
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
gPaq wrote: Although this tip is very much appreciated, and all the commands work on my newly purchased TS-469 Pro, making the changes on "speed_limit_min" makes not a lick of difference, unfortunately. I increased this speed gradually from the default 5000 to 50000, up to 150000, and basically nothing happens. The processor utilization does not go up (hovers around 16% before and after changes), and after 11 hours of re-striping I'm at 17%, which puts my ETA at about 4 more days to completion. That's not reasonable anymore... Is there anything that puts a damper on this configuration in newer models?

These options just tune the priority of RAID resync I/O. You can tell the kernel how much disk I/O it should reserve for resyncing versus other processes. They won't make your NAS go faster than it can - it is limited by CPU and disk speeds, which you cannot tune in software.
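For reference, the knobs being discussed are the standard Linux md sysctls. A minimal sketch (the values are examples only, not recommendations, and writing the real paths requires root on the NAS; the helper name is made up here):

```shell
# Sketch: set or read an md tunable and print the resulting value.
# /proc/sys/dev/raid/speed_limit_{min,max} hold KB/s per device.
read_or_set() {
    # $1 = tunable path, $2 = optional new value; prints the resulting value.
    if [ -n "$2" ]; then
        echo "$2" > "$1"
    fi
    cat "$1"
}

# On the NAS, as root (example values):
#   read_or_set /proc/sys/dev/raid/speed_limit_min 50000
#   read_or_set /proc/sys/dev/raid/speed_limit_max 200000
```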
experience with administration of UN*X (mostly linux) and applications on internet servers since 1994...
-
- New here
- Posts: 9
- Joined: Fri Mar 15, 2013 10:53 pm
Re: [HOWTO] How to increase raid rebuild speed
Well, for giggles I decided to try this out on my new TS869, which has been plagued with issues right out of the box.
8 * 4TB Hitachi 7200RPM drives, all as one big RAID6 volume. The synchronization of this takes 40+ hours
I've made subtle changes, with no significant results. So I decided to go crazy and see what happened.
Max = 800000
Min = 200000
[/proc/sys/dev/raid] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
md0 : active raid6 sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
23432697216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
[======>..............] resync = 31.0% (1211472968/3905449536) finish=1668.9min speed=26902K/sec
md13 : active raid1 sdh4[3](S) sdg4[4](S) sdf4[5](S) sde4[6](S) sdd4[7](S) sdc4[2] sdb4[1] sda4[0]
458880 blocks [3/3] [UUU]
bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 1/65 pages [4KB], 4KB chunk
unused devices: <none>
CPU usage is hanging around 42%, temps are normal too. I imagine I'm waiting on disks.
If you are interested in my particular TS869 issue, here's that thread:
http://forum.qnap.com/viewtopic.php?f=45&t=72988
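The progress line in mdstat output like the above can be summarized with a small helper; a sketch assuming the usual "resync = X% (...) finish=Ymin speed=ZK/sec" line format (the function name is made up here):

```shell
# Sketch: pull percent done, ETA and speed out of an mdstat-style file.
resync_summary() {
    grep -Eo '(resync|recovery) = [0-9.]+%|finish=[0-9.]+min|speed=[0-9]+K/sec' "$1"
}

# Dry-run demo against a sample line in a temp file.
tmp=$(mktemp)
echo '      [======>..............]  resync = 31.0% (1211472968/3905449536) finish=1668.9min speed=26902K/sec' > "$tmp"
resync_summary "$tmp"
rm -f "$tmp"
```

On the NAS you would pass /proc/mdstat instead of the sample file.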
-
- Experience counts
- Posts: 1560
- Joined: Mon Feb 07, 2011 5:40 am
- Location: Bratislava, Slovakia
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
Have you tried tuning /sys/block/md0/md/stripe_cache_size to see if it helps? I think 26902K/sec is quite low for such disks.
experience with administration of UN*X (mostly linux) and applications on internet servers since 1994...
-
- New here
- Posts: 9
- Joined: Fri Mar 15, 2013 10:53 pm
Re: [HOWTO] How to increase raid rebuild speed
It's at the default of 4096. Any suggestions of a value to use?
-
- Experience counts
- Posts: 1560
- Joined: Mon Feb 07, 2011 5:40 am
- Location: Bratislava, Slovakia
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
Above in this thread I mentioned that increasing it to 16384 improved the build speed. You can try 8192, 16384, and 32768 to see if they help (wait some time after each change to let it take effect). Just be careful - it eats system memory, so when it stops looking better, leave it or go back to the last smaller value.
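The stepping procedure above can be sketched as follows. This is only a sketch: the path is the standard md sysfs knob, the candidate values come from the post, writing needs root on the NAS, and the helper name is made up. Reading the value back matters because the kernel may clamp or ignore an invalid write:

```shell
# Sketch: write a candidate stripe_cache_size and read back what the kernel
# actually kept.
try_cache_size() {
    # $1 = path to stripe_cache_size, $2 = candidate value
    echo "$2" > "$1" 2>/dev/null
    cat "$1"
}

# On the NAS, step through values and watch the resync speed between steps:
#   for n in 8192 16384 32768; do
#       try_cache_size /sys/block/md0/md/stripe_cache_size "$n"
#       sleep 60 && grep -E 'speed=' /proc/mdstat
#   done
```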
experience with administration of UN*X (mostly linux) and applications on internet servers since 1994...
-
- New here
- Posts: 9
- Joined: Fri Mar 15, 2013 10:53 pm
Re: [HOWTO] How to increase raid rebuild speed
After another roughly 40 hours, the sync finished. I've enabled the bitmap, hopefully it's "good" this time.
-
- New here
- Posts: 6
- Joined: Sun Jan 17, 2010 9:46 am
Re: [HOWTO] How to increase raid rebuild speed
TS-809 Pro running FW 3.8.2. Upgraded hard drives this past week and now performing a RAID6 reshape from 4 drives to 5 drives (all Seagate ST4000DM000s). Started the process last night, and it was only 20% when I woke up (according to mdstat). Found this thread and tried a few different values for speed_limit_min, speed_limit_max, and stripe_cache_size.
CPU core 1 was between 19% and 25% for all values tested (running md0_raid6). CPU core 2 was between 4% and 7% (running md0_reshape). The CPU has headroom, and I've stopped all non-essential services.
The best speed I got was 23.8K/sec on the reshape, and that was with the default values (min: 50000, max: 100000, stripe_cache_size: 4096). Increasing the min had very little effect. Increasing stripe_cache_size used more memory but dropped the reshape speed as follows:
stripe_cache_size: 8192 --> speed: 22.2K/sec
stripe_cache_size: 16384 --> speed: 20K/sec
So, I've gone back to the defaults for now and will wait out the remaining ~37 hours.
Also, when I tried to lower stripe_cache_size, I found I had to set 4096 with the echo command several times before it would drop back down to 4096.
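If a single echo does not take effect, as reported here, the write can simply be re-issued until the readback matches. A sketch (the retry count, sleep, and function name are arbitrary choices, not anything QNAP-specific):

```shell
# Sketch: keep writing a tunable until the readback matches the desired value.
set_until_stuck() {
    # $1 = tunable path, $2 = desired value; returns 1 if it never sticks.
    for _ in 1 2 3 4 5; do
        echo "$2" > "$1" 2>/dev/null
        if [ "$(cat "$1")" = "$2" ]; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# e.g. on the NAS, as root:
#   set_until_stuck /sys/block/md0/md/stripe_cache_size 4096
```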
- doktornotor
- Ask me anything
- Posts: 7472
- Joined: Tue Apr 24, 2012 5:44 am
Re: [HOWTO] How to increase raid rebuild speed
AUSTraveler wrote: Upgraded hard drives this past week and now performing a RAID6 reshape from 4 drives to 5 drives (all Seagate ST4000DM000s).

No comment.
I'm gone from this forum till QNAP stop wasting volunteers' time. Get help from QNAP helpdesk instead.
Warning: offensive signature and materials damaging QNAP reputation follow:
QNAP's FW security issues
QNAP's hardware compatibility list madness
QNAP's new logo competition
Dear QNAP, kindly fire your clueless incompetent forum "admin" And while at it, don't forget the webmaster!
-
- New here
- Posts: 6
- Joined: Sun Jan 17, 2010 9:46 am
Re: [HOWTO] How to increase raid rebuild speed
doktornotor wrote:
AUSTraveler wrote: Upgraded hard drives this past week and now performing a RAID6 reshape from 4 drives to 5 drives (all Seagate ST4000DM000s).
No comment.

Then why post?
-
- Starting out
- Posts: 17
- Joined: Mon Jan 28, 2008 8:02 pm
Re: [HOWTO] How to increase raid rebuild speed
There's one more option for rebuilding RAID5/RAID6 devices:
# cat /sys/block/md0/md/stripe_cache_size
4096
increasing this could help, last time I have tried 16384 (4x more).

What is the actual command to use to increase this value? My TS-409 is set to 256.
Changing the min speed to 50000 hasn't changed anything.
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
musashi77 wrote: What is the actual command to use to increase this value? my TS-409 is set to 256.

Most discussions here are related to newer NAS models. Be aware that things are changing once more in the newer firmware releases for the models still under maintenance.
Lacking a TS-409 here ... is this the output where you see the 256?
# cat /sys/block/md0/md/stripe_cache_size
256
To change, you can echo a number in:
# echo 512 > /sys/block/md0/md/stripe_cache_size
# cat /sys/block/md0/md/stripe_cache_size
...
musashi77 wrote: Changing the min speed to value 50000 hasnt changed anything

You are aware that your TS-409 is not a racing machine?
-
- Starting out
- Posts: 17
- Joined: Mon Jan 28, 2008 8:02 pm
Re: [HOWTO] How to increase raid rebuild speed
Correct, the output is 256.
# cat /sys/block/md0/md/stripe_cache_size
256
However, the following command does nothing to change it:
# echo 512 > /sys/block/md0/md/stripe_cache_size
# cat /sys/block/md0/md/stripe_cache_size
256
I'm aware that the 409 isn't going to break any land speed records, and had anticipated it would take some time; however, it initially looked like it would take more than 2 weeks, which is really pushing the limits of my (and the girlfriend's) patience. Tweaking speed_limit_min to 50000-80000 has doubled the speed and effectively halved the time. Still not great, though: speed=5514K/sec.
EDIT: I mentioned earlier that changing speed_limit_min to 50000 didn't change anything; however, it did increase the speed from around 2000K/sec up to 5-6000K/sec. I'll take whatever I can get at this stage.
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: [HOWTO] How to increase raid rebuild speed
That's about all that is possible...