Just wanted to share my experiences. Have a TS-509 Pro that had 750GB x5 drives that I expanded with 2TB WD20EARS drives. After replacing each drive sequentially, I am now waiting for the RAID to complete expansion. I found this thread because the expansion process was S-L-O-W!
Using Don's excellent tip, I found the following:
speed_limit_min was at its default of 5000
speed_limit_max was at its default of 200000
Under default conditions, I was getting "spikes" in CPU usage rather than a constant level of CPU activity. Peaks hit perhaps 30% followed by valleys of near-0% usage. The CPU usage monitor looked like a saw-tooth pattern because of this.
I used the echo command posted by Don to set speed_limit_min to 50000. After that, I got a constant level of CPU activity of about 50%. I tried increasing the MIN even higher, to 75000, 100000, or even 150000, without any further increase in CPU usage. Makes me think I hit the bottleneck imposed by the drives themselves, since these are "Green" drives that I am using. Still, I'm happy with the increase in rebuild speed, yay! I wound up setting the MIN to 750000.
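For anyone landing here without Don's post handy, the commands would have been roughly the following (a sketch, assuming the standard Linux md sysctl paths under /proc/sys/dev/raid/, which is where these tunables live on the TS-509's firmware; run as root):

```shell
# Check the current md resync throttle values
# (defaults here were 5000 and 200000, in KB/s per device)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the guaranteed minimum resync speed so md stops
# throttling the rebuild down during normal I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Watch the effect on the rebuild
cat /proc/mdstat
```

Note these values reset to the defaults on reboot, so they'd need to be reapplied if the rebuild survives a restart.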
One question though: in the web interface, the RAID Management screen shows that my current rebuild is at 32%. However, the cat /proc/mdstat command shows:
resync = 64.7% finish = 224.8min speed = 50974K/sec
Any idea why there is a big discrepancy between what mdstat reports and what the RAID Management web interface shows?