ES1640DC v2 Scrub Massive Performance Hit
-
- Starting out
- Posts: 27
- Joined: Sat Aug 24, 2019 2:37 am
ES1640DC v2 Scrub Massive Performance Hit
When the ES1640DC v2 I'm using as storage for our production vSphere hosts (16 drives in the equivalent of a RAID 10 array) runs a scrub, the performance hit is so severe that all the VMs sitting on it go offline. I've thought about reconfiguring it as two RAIDZ2 vdevs striped together (i.e., RAID 60), but I don't know if that would help at all. Is a massive performance hit during data scrubs a known issue?
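For context: on a stock FreeBSD-based ZFS of this era, scrub and resilver aggressiveness is governed by a handful of sysctls. Whether QES exposes a shell or these exact tunable names is an assumption, not something confirmed in this thread; treat this as a sketch of what the knobs normally look like, and check with support before changing anything.

```shell
# Ticks of delay inserted between scan IOs while the pool is busy
# (higher = gentler scrub/resilver, 0 = run at full speed):
sysctl vfs.zfs.scrub_delay
sysctl vfs.zfs.resilver_delay

# Seconds of inactivity before the pool counts as "idle" and the
# scan is allowed to run unthrottled:
sysctl vfs.zfs.scan_idle

# Example: make a running scrub noticeably gentler
# (the value 16 is illustrative, not a recommendation):
sysctl vfs.zfs.scrub_delay=16

# Watch scrub/resilver progress and the estimated completion time:
zpool status -v
```

The complaints later in this thread suggest that on QES, adjusting the equivalent ratio had no effect, which is what makes this look like a bug rather than a missing feature.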
-
- New here
- Posts: 4
- Joined: Mon Feb 08, 2021 2:27 pm
Re: ES1640DC v2 Scrub Massive Performance Hit
Happens with resilvering too. I reported it a few months ago and they still haven't fixed it. I'll never buy QNAP QES again.
It means that once a HDD crashes and you have to replace it and rebuild your RAID volume, you can't serve anything for a few days, because resilvering takes 100% of the IO and changing the ratio is simply ignored.
Super lame, thanks QNAP!
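To see why a rebuild can monopolize the array for days, here is a quick back-of-envelope estimate (the drive size and throughput figures are hypothetical, not measurements from this unit):

```python
def resilver_hours(drive_tb: float, mb_per_s: float) -> float:
    """Hours to rewrite one drive's worth of data at a given throughput."""
    total_mb = drive_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
    return total_mb / mb_per_s / 3600

# A 10 TB drive resilvered at a healthy 150 MB/s sequential rate:
print(round(resilver_hours(10, 150), 1))   # ~18.5 hours

# The same drive if the resilver only gets a 20 MB/s slice because
# production VMs are competing for the spindles:
print(round(resilver_hours(10, 20), 1))    # ~138.9 hours, nearly 6 days
```

Either the resilver or the clients have to wait; the complaint here is that QES gives the scan everything and offers no working way to shift the balance.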
-
- Starting out
- Posts: 27
- Joined: Sat Aug 24, 2019 2:37 am
Re: ES1640DC v2 Scrub Massive Performance Hit
koshiro wrote: ↑Fri Jul 02, 2021 8:42 pm
Happens with resilvering too, and it has been reported few months ago by me, and they still didn’t fix it. I never buy QNAP QES again.
It means once your HDD crash and you have to replace it and rebuild your raid volume, you can’t serve anything for few days because resilvering IO takes 100% of the IO and changing ratio is just getting ignored.
Super lame thanks QNAP!

Well, crap. For now I will just disable monthly scrubs. Nice how they modified the default behavior for both scrubs and resilvers...
-
- New here
- Posts: 4
- Joined: Mon Feb 08, 2021 2:27 pm
Re: ES1640DC v2 Scrub Massive Performance Hit
I think it's a bug, because support tried to change the ratio and nothing happened. But that was almost half a year ago and they still haven't fixed it. And I'm sure they'll introduce a new bug in whatever firmware fixes this one.
-
- Starting out
- Posts: 27
- Joined: Sat Aug 24, 2019 2:37 am
Re: ES1640DC v2 Scrub Massive Performance Hit
My ticket number is Q-202107-47579.
Latest from support "It looks like your issue is confirmed as related to the other known Scrubbing issue we have, and the team is still working on a solution for it. I'm still trying to find out more information, like an ETA for it, so I'll update you when I hear more."
-
- Starting out
- Posts: 15
- Joined: Fri Mar 15, 2019 9:22 pm
Re: ES1640DC v2 Scrub Massive Performance Hit
Facing the same issue on the ES1686dc. During a scrub, performance gets so poor that VMs are unable to use their disks residing on VMFS datastores connected to the QNAP via iSCSI. Support has suggested that we run the scrub monthly, which clearly indicates to me that they have no understanding of how a virtualized environment uses storage. As though it would be acceptable to have VMs crash once a month!
They also offered to put in a "feature request" to be able to control the rate of scrubbing, and thereby the amount of performance degradation, as though using this device for one of its main intended purposes is an afterthought and a 'feature' they'll consider adding.
-
- Starting out
- Posts: 15
- Joined: Fri Mar 15, 2019 9:22 pm
Re: ES1640DC v2 Scrub Massive Performance Hit
Looks like the 9/16 update to QES added throttling for RAID rebuild performance. Still no word on the scrubbing issue, though.
-
- Guru
- Posts: 13190
- Joined: Sat Dec 29, 2007 1:39 am
- Location: Stockholm, Sweden (UTC+01:00)
Re: ES1640DC v2 Scrub Massive Performance Hit
Doesn't it affect scrubbing as well even if not specifically spelled out? It does in QTS and I wouldn't have expected it to be different with ZFS.
RAID has never been a replacement for backups. Without backups on a different system (preferably at another site), you will eventually lose data!
A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without a backup.
All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!
-
- Starting out
- Posts: 15
- Joined: Fri Mar 15, 2019 9:22 pm
Re: ES1640DC v2 Scrub Massive Performance Hit
Doesn't it affect scrubbing as well even if not specifically spelled out?

Who knows? Support hasn't updated my ticket on the issue in two months! And how could I tell, aside from waiting to see if my VMs fail?
-
- Guru
- Posts: 13190
- Joined: Sat Dec 29, 2007 1:39 am
- Location: Stockholm, Sweden (UTC+01:00)
Re: ES1640DC v2 Scrub Massive Performance Hit
That's really bad!

And how could I tell, aside from waiting to see if my VMs fail?

I guess you can't. My comment was meant as encouragement: I would expect it to affect scrubbing as well, but I'm afraid I don't know.