The patch notes and download link for the latest QuTS hero update still work, but the release is unlisted for some reason. Maybe that was a mistake, or maybe there was a problem with the release. If you can't wait until it's relisted, here are the links. As a disclaimer: you may want to update your backups first, and be aware that if there are issues, downgrading to h4.5.3 or earlier isn't supported. https://www.qnap.com/en/release-notes/q ... 1/20210825 https://download.qnap.com/Storage/QuTSh ... 4.1771.zip
mikael123456 wrote: ↑Wed Sep 01, 2021 11:04 am
. . . I still see some writes to md9 and md13, the "span all drives raid1" devices. . . .
As I understand it, the main issue with h4.5.3/h4.5.2 comes from ZFS subsystem activity unrelated to md9 and md13. If you don't have the ZFS issue, failing your spinning disks out of md9/md13 should allow them to spin down when idle. Note that the next time you reboot, md9 and md13 will be rebuilt across all the drives and will include them again. I've been doing this on h4.5.1 with two NVMe drives in RAID 1 as my system pool.
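If anyone wants to try the same thing, the mdadm steps are roughly the sketch below; the device name and partition numbers are placeholders, so check /proc/mdstat for the actual members on your system.
# see which member partitions belong to md9 and md13
cat /proc/mdstat
# mark a spinning disk's members as failed and remove them
# (/dev/sdc1 and /dev/sdc4 are placeholders for that disk's md9/md13 partitions)
mdadm /dev/md9 --fail /dev/sdc1 --remove /dev/sdc1
mdadm /dev/md13 --fail /dev/sdc4 --remove /dev/sdc4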
I have a TVS-871 and my disks just started doing this as well. I run a Plex server that I serve out to the internet via port forwarding, but I have had this Plex server for years and it never did this. I disabled everything I possibly could while keeping what I need running, got rid of Multimedia Console, etc.
My disks sound like they are in an endless loop of writing every second. This is annoying because the NAS is in my office, but I'm also concerned about the added wear and tear it must be putting on the disks. They even churn like this when no one is accessing the Plex server or my network share.
I am running RAID 6 with 8 disks of 6TB each. It never did this before.
I'm running firmware 5.0.0.1828. My NAS has 4GB of RAM and I have more than half of it free (according to Resource Monitor).
Is there a better way to drill down and see what is causing the disks to write like this constantly?
Hello there.
Has anyone resolved this issue?
OK, on my RAID 1 I have continuous HDD activity all day long, as if the drives are working constantly, even with my computer turned off and nothing communicating with the NAS.
Would really like to lower their noise a bit.
Thank you.
Sent from my SM-G970F using Tapatalk
QNAP TS-251+ with 2x Seagate IronWolf 4Tb (ST4000VN008)
I am having this problem running the latest QuTS 5 firmware. I have used "iotop" and "zpool iostat zpool1 1" to try to troubleshoot.
iotop shows basically no writes to the filesystem.
"zpool iostat zpool1 1" shows about 10MB of writes to the disks every 5-10 seconds.
I noticed this and started looking into it because my SSDs are wearing a lot faster than I expected. I calculated it's writing about 86GB a day (roughly 1MB/s sustained, or 1TB every 12 days) to a set of SSDs that are storing only 50GB of data.
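For anyone who wants to reproduce the measurement, invocations along these lines should work (the pool name is a placeholder from this thread, and I can't promise every flag is supported by the ZFS build in QuTS hero):
# show only processes actually doing I/O, with accumulated totals per process
iotop -o -a
# per-second read/write bandwidth for the pool (Ctrl-C to stop)
zpool iostat zpool1 1
# or sample a single 60-second interval; -y skips the since-boot summary
zpool iostat -y zpool1 60 1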
Definitely something is up with the firmware implementation; it seems to be related to the ZFS side of things.
I have opened a ticket in relation to this; hopefully we can get this solved.
After a back-and-forth with QNAP support, including them remotely accessing my NAS to diagnose it, they assured me this is normal behaviour (it is not).
The IO activity was significant and continued to wear my SSDs at an unusually high rate; another 1% of life was used up within one week (500GB Evo Plus NVMe with only 50GB allocated; the rest is over-provisioned).
The problem seems to be caused by any program doing many small writes (databases); this includes Qmail and any Docker containers using databases.
I have now solved this problem by raising the ZFS txg timeout from 5 seconds to 120 seconds, adding the following line to my startup script.
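On standard OpenZFS the knob in question is the zfs_txg_timeout module parameter, so the line looks something like this (assuming the QuTS hero ZFS build exposes the usual sysfs path; check that it exists on your unit before relying on it):
# flush ZFS transaction groups every 120s instead of the default 5s
# (path assumes the standard OpenZFS module parameter layout)
echo 120 > /sys/module/zfs/parameters/zfs_txg_timeout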
There is a slight increase in risk to data in making this change (broadly, up to 120 seconds of recent buffered writes can be lost on a crash or power failure instead of about 5), and you should read up on and understand this before doing it yourself, but the daily writes to my system volume have been reduced from ~86GB to ~5GB.
This is really only a stop-gap solution, as there is some sort of underlying problem with write amplification occurring on small transactions: a write of less than 1KB is being turned into a 100KB-500KB write.
My understanding of ZFS is that the block size (ZFS recordsize) is only the maximum size of a logical block and that it automatically sizes records as needed; I tested this by reducing the recordsize to 4KB, which made no difference to the constant write rate. So the only amplification should be related to the SSDs' physical block size, probably something like 8KB.
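A recordsize test along these lines would do it (the dataset name here is a placeholder, and recordsize only applies to blocks written after the change, so existing data needs to be rewritten to see any effect):
# placeholder dataset name - list yours with: zfs list
zfs get recordsize zpool1/mydataset
zfs set recordsize=4K zpool1/mydataset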
pandaMinor wrote: ↑Wed Aug 03, 2022 12:15 pm
I assume QNAP has abandoned fixing this?
My SSDs have also worn down 1% in the one week I have owned this device, which is not filling me with love.
I will try your suggestion (I am on the latest, 5.0.0.2069) if this is not going to be fixed soon.
Open a support ticket with them. The more they have to deal with it through support, the more likely they are to address the problem. I provided my fix to them as part of the support ticket.