Discussion on setting up QNAP NAS products.
-
jad albert
- New here
- Posts: 2
- Joined: Thu Oct 20, 2016 10:53 pm
Post
by jad albert » Thu Oct 20, 2016 11:21 pm
P3R wrote:Ifti wrote:Same here - adjusted mine to 200000 and my CPU doesn't go over 2%, so I can only assume it's not working!!
My assumption would instead be that your very fast CPU can easily manage this task already without changing this setting and that your disks are the performance bottlenecks.
I have a TVS-871T (i7 model)
This thread was started in a time when the fastest NAS CPUs had something like 1/15th or 1/20th of the processing capacity your CPU has.
[~] # egrep speed /proc/mdstat
[=>...................] resync = 9.1% (534795552/5850567168) finish=639.6min speed=138511K/sec
Looks decent to me. What's wrong with it?
Thanks for sharing!
Last edited by jad albert on Fri Oct 21, 2016 8:05 pm, edited 1 time in total.
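For reference, the setting being adjusted in the quoted posts is the standard Linux md throttle pair; a minimal sketch for checking and raising it, assuming QTS exposes the usual /proc paths (values in KB/s):
# current floor and ceiling for background resync/rebuild bandwidth
cat /proc/sys/dev/raid/speed_limit_min    # Linux default is usually 1000
cat /proc/sys/dev/raid/speed_limit_max    # Linux default is usually 200000
# raise the floor so the resync is allowed to use more bandwidth
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# watch the reported progress and speed
grep -A 2 '^md1' /proc/mdstat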
-
Trexx
- Ask me anything
- Posts: 5321
- Joined: Sat Oct 01, 2011 7:50 am
- Location: Minnesota
Post
by Trexx » Fri Oct 21, 2016 12:27 am
Just a note: for migrations from RAID 1 to RAID 5, the first 50% of the rebuild tends to be slow - not sure exactly what's going on behind the scenes - but once it hits 50%+ your throughput should increase significantly. In my case, it was about 3-4x what it was between 0-50%.
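To actually see that jump happen, a simple logging loop works; just a sketch in plain shell, assuming the data array is md1 as in the outputs elsewhere in this thread:
# log the reported rebuild/reshape speed every 10 minutes
while true; do
    date
    grep -A 2 '^md1' /proc/mdstat | grep -E 'resync|reshape'
    sleep 600
done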
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Mon Dec 05, 2016 8:48 am
Hi everyone, I'm doing a Raid 5 to 6 migration on a TS-853 Pro on new WD 4TB drives (7 in total).
Here's the status:
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid6 sdf3[6] sdg3[0] sdd3[5] sdc3[4] sdb3[3] sda3[2] sdh3[1]
19485318080 blocks super 1.0 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
[====>................] reshape = 21.0% (820684800/3897063616) finish=16520.9min speed=3103K/sec
No matter what settings I use, such as
echo 800000 >/proc/sys/dev/raid/speed_limit_max
echo 400000 >/proc/sys/dev/raid/speed_limit_min
and a cache size of up to 32k, the speed of 3103K/sec just won't change, and the CPU utilization is next to nothing.
After 4 days, I still have about 10 days to go!!!!!
HELP! Is there ANY other way to increase the speed of this migration???
Yes, I have the bitmap on, but is it safe to turn it off during the migration?
Help!
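The "cache size" mentioned above is presumably the per-array stripe cache; for reference, a sketch of the usual paths, assuming the data array is md1 as in the mdstat output:
# stripe cache for RAID 5/6, in entries; memory used is roughly
# entries x 4 KB x number of member disks
echo 32768 > /sys/block/md1/md/stripe_cache_size
# confirm the throttle and cache values actually took
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
cat /sys/block/md1/md/stripe_cache_size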
-
Spider99
- Experience counts
- Posts: 1950
- Joined: Fri Oct 21, 2011 11:14 pm
- Location: UK
Post
by Spider99 » Mon Dec 05, 2016 11:23 am
Is the missing disk affecting the rebuild/migration?
Tim
TS-853A(16GB): - 4.3.4.0483 - Static volume - Raid5 - 8 x 4TB HGST Deskstar NAS
Windows Server + StableBit Drivepool and Scanner ~115 TB Backup Server
TS-412 & TS-459 Pro II: Retired
Clients: 3 x Windows 10 Pro(64bit)
-
P3R
- Guru
- Posts: 12568
- Joined: Sat Dec 29, 2007 1:39 am
- Location: Stockholm, Sweden (UTC+01:00)
Post
by P3R » Mon Dec 05, 2016 3:11 pm
avvidme wrote:The speed of 3103k just won't change, and the CPU utilization is next to nothing.
I wouldn't worry about the low CPU utilization. Today's CPUs are much faster and are unlikely to be the bottleneck. It's usually the disks that are the limitation.
Do you have anything else active that accesses the disks? Active clients in the network and/or active services/apps on the NAS? Virtualization, torrent downloading or anything else?
avvidme wrote:After 4 days, I still have about 10 days to go!!!!!
That prediction isn't necessarily accurate.
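One rough way to rule out competing I/O from the shell is to compare two snapshots of the kernel's per-disk counters; a sketch, using nothing beyond /proc:
# field 6 is sectors read, field 10 is sectors written; anything climbing fast
# besides the reshape's own traffic points at other activity on the disks
grep -E ' sd[a-h] ' /proc/diskstats
sleep 10
echo "--- 10 seconds later ---"
grep -E ' sd[a-h] ' /proc/diskstats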
RAID has never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!
A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.
All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Mon Dec 05, 2016 9:35 pm
Thanks for your reply;
P3R wrote:Do you have anything else active that accesses the disks? Active clients in the network and/or active services/apps on the NAS? Virtualization, torrent downloading or anything else?
After 4 days, I still have about 10 days to go!!!!!
That prediction isn't necessarily accurate.
Nothing, I shut off all applications before I started the migration. Bare bones services running.
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Mon Dec 05, 2016 9:35 pm
Spider99 wrote:Is the missing disk affecting the rebuild/migration?
No missing disk, it's an online spare.
J
-
P3R
- Guru
- Posts: 12568
- Joined: Sat Dec 29, 2007 1:39 am
- Location: Stockholm, Sweden (UTC+01:00)
Post
by P3R » Mon Dec 05, 2016 10:18 pm
avvidme wrote:No missing disk, it's an online spare.
In the manual for the older cat1 models you can find this:
Note: A hot spare drive must be removed from the disk volume before executing the following action:
• Online RAID capacity expansion
• Online RAID level migration
• Adding a hard drive member to a RAID 5, RAID 6 or RAID 10 volume
I would have expected the same to apply to cat2 as well, but I don't know.

RAID has never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!
A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.
All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Mon Dec 05, 2016 11:23 pm
P3R wrote:avvidme wrote:No missing disk, it's an online spare.
In the manual for the older cat1 models you can find this:
Note: A hot spare drive must be removed from the disk volume before executing the following action:
• Online RAID capacity expansion
• Online RAID level migration
• Adding a hard drive member to a RAID 5, RAID 6 or RAID 10 volume
I would have expected the same to apply to cat2 as well, but I don't know.

Hi, sorry I should've been more specific - the drive is physically in the slot, but it's NOT configured for anything yet. It'll become an online spare once the RAID 6 migration is done.
I'm just blown away that I'm only getting 3100k from an 853. I even set the min/max to very high numbers (800000, etc) and no effect.
Just stumped here.
-
P3R
- Guru
- Posts: 12568
- Joined: Sat Dec 29, 2007 1:39 am
- Location: Stockholm, Sweden (UTC+01:00)
Post
by P3R » Tue Dec 06, 2016 12:01 am
avvidme wrote:I even set the min/max to very high numbers (800000, etc) and no effect.
If a car doesn't have the performance to reach the speed limit, a raised limit won't make it go faster. The question is why your 853 moves at Trabant speed.
A bad disk slowing things down maybe? You could check the SMART data on all disks to see if that gives you any hint.
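For reference, the same check can be done from the shell; a minimal sketch, assuming smartctl is available on the NAS and the RAID members show up as /dev/sda through /dev/sdh:
# the attributes that most often explain a single slow member
for d in /dev/sd[a-h]; do
    echo "== $d =="
    smartctl -A "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
done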
RAID has never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!
A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.
All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Tue Dec 06, 2016 12:23 am
P3R wrote: A bad disk slowing things down maybe? You could check the SMART data on all disks to see if that gives you any hint.
SMART status looks perfect across all of them. They are the WD Red 4TB WD40EFRX drives, SATA3. Normal NAS performance (read/write) has been just fine under RAID 5.
But I decided to migrate to RAID 6 just for better data protection. It's been 5 days running non-stop so far, and I'm starting to believe it'll be another 9 days as estimated.
I guess there are no other speed adjustments outside of min/max/cache?
Thanks again
J
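On the question above about other adjustments: besides the two global sysctls and the stripe cache, md also keeps per-array limits in sysfs; a sketch, assuming md1 and that QTS exposes the standard files (values in KB/s, or the word "system" to fall back to the global limits):
cat /sys/block/md1/md/sync_speed             # the rate md itself currently reports
echo 400000 > /sys/block/md1/md/sync_speed_min
echo 800000 > /sys/block/md1/md/sync_speed_max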
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Tue Dec 06, 2016 12:59 am
Unfortunately, I have the bitmap on right now, which I'm sure isn't helping. Is it possible to turn it off WHILE the migration is running?
It doesn't seem to be an option in the GUI right now, since a number of menu items aren't available during migration. If it's safe to do, is there a command-line option?
Thanks again
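Purely for reference on the bitmap question: the usual mdadm way to check for an internal write-intent bitmap, and the standard removal command, are below. Whether the QTS mdadm build accepts the change in the middle of a reshape is unknown, so treat it as something to inspect rather than to run blind:
# does md1 carry an internal bitmap?
mdadm --detail /dev/md1 | grep -i bitmap
# the standard removal command (NOT verified as safe during an active reshape)
# mdadm --grow /dev/md1 --bitmap=none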
-
Spider99
- Experience counts
- Posts: 1950
- Joined: Fri Oct 21, 2011 11:14 pm
- Location: UK
Post
by Spider99 » Tue Dec 06, 2016 8:45 am
You are casting about in the dark - the bitmap helps in a RAID rebuild.
You have a bad disk or some other config issue that's slowing the reshape - if your array was "full" before the start, that will have a detrimental effect.
When I built my 8-disk RAID 5 on my 853 - new, no data - it took a day and a half to sync, even with speeding it up. You are reshaping, which is a far more complicated and significant change and hence a slower process - actually, I have a hunch the speed-up options have little or no effect on a reshape.
I also suspect that once it gets past 50% and/or past the existing data it will speed up a lot.
Tim
TS-853A(16GB): - 4.3.4.0483 - Static volume - Raid5 - 8 x 4TB HGST Deskstar NAS
Windows Server + StableBit Drivepool and Scanner ~115 TB Backup Server
TS-412 & TS-459 Pro II: Retired
Clients: 3 x Windows 10 Pro(64bit)
-
chodaboy19
- Know my way around
- Posts: 196
- Joined: Thu Nov 26, 2009 4:28 am
Post
by chodaboy19 » Mon Dec 19, 2016 5:06 am
Code:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[11] sdc3[10] sdb3[9] sda3[8] sdf3[7] sde3[6]
39020358080 blocks super 1.0 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[============>........] resync = 60.7% (4741593912/7804071616) finish=602.8min speed=84663K/sec
md256 : active raid1 sdd2[7](S) sdc2[6] sdb2[2] sda2[3](S) sdf2[4](S) sde2[5](S)
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdd4[29] sdc4[28] sdb4[27] sda4[26] sdf4[25] sde4[24]
458880 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sdd1[29] sdc1[28] sdb1[27] sda1[26] sdf1[25] sde1[24]
530112 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
Upgrading from 6 x 4TB to 6 x 8TB, I set the min to 85000 and it's working nicely!
TS-670 v4.3.6 (20191212)
i7-3770S
16GB DDR3 1600MHz 1.35v (F3-1600C9D-16GRSL)
6 x 8TB WD (WD80EFZX)
1 x QM2-2P-384
2 x ADATA XPG SX8200 Pro 1.0TB
-
avvidme
- Know my way around
- Posts: 185
- Joined: Fri Jan 16, 2009 10:36 am
Post
by avvidme » Tue Dec 20, 2016 11:25 pm
chodaboy19 wrote:Upgrading from 6 x 4TB to 6 x 8TB, I set the min to 85000 and it's working nicely!
I wasn't just expanding the RAID; I was also converting from RAID 5 to RAID 6.
J