TS-230 IronWolf heads seeking constantly

Discussion about the hard drive spin down (standby) feature of the NAS.
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

Has anyone had IronWolfs in a RAID 1 constantly head seeking? When I first bought it and hadn't put much data on it, it would sit there quietly and spin down after a bit.

Now, after pushing 1 TB of data onto it, including a million (literally) odd small files, it's been head seeking for weeks. QNAP doesn't show any activity. Write speed is still OK at around 100 MB/s. Latest firmware. SMART and IHM say OK.

I was wondering if it was indexing, as that is a ton of files, but how can I check that properly? Resource Monitor's storage view shows no activity other than 2 to 4 write IOPS and 12-15 KB/s in disk activity.

It was mentioned here https://forums.whirlpool.net.au/thread/35q65j83 that there may be excessive writes to the log.

Here is a recording of what it sounds like https://www.dropbox.com/s/s4a3uhxetp30t ... 0.mp3?dl=0
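If it is indexing or some other service, one way to check is to summarize which processes are generating the block I/O in blkdevMonitor-style output. A small sketch (the sample log lines below are fabricated to mimic the format of the script's real output — on the NAS you'd feed in the actual log instead):

```shell
# Count block I/Os per process from blkdevMonitor-style kernel output.
# Sample lines are illustrative only; replace with the script's real output.
cat > /tmp/blkdev.log <<'EOF'
<7>[1204773.461693] rsyslogd(17615): READ block 326016 on md256 (8 sectors)
<7>[1204773.461763] rsyslogd(17615): READ block 326024 on md256 (8 sectors)
<7>[1204803.021253] jbd2/dm-11-8(3746): WRITE block 2143897320 on dm-11 (8 sectors)
<7>[1204811.301827] kjournald(1884): WRITE block 953912 on md9 (8 sectors)
EOF
# Extract "process(pid)" tokens and count occurrences, busiest first.
grep -oE '[A-Za-z0-9_/.-]+\([0-9]+\)' /tmp/blkdev.log | sort | uniq -c | sort -rn
```

The process at the top of the list is the one keeping the heads busy.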
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

Here is the output of blkdevMonitor_20151225.sh:

Code: Select all

============= 0/100 test, Thu May 14 15:22:05 WST 2020 ===============
<7>[1204773.461693] rsyslogd(17615): READ block 326016 on md256 (8 sectors)
<7>[1204773.461763] rsyslogd(17615): READ block 326024 on md256 (8 sectors)
<7>[1204773.461778] rsyslogd(17615): READ block 326032 on md256 (8 sectors)
<7>[1204773.461789] rsyslogd(17615): READ block 326040 on md256 (8 sectors)

============= 1/100 test, Thu May 14 15:22:10 WST 2020 ===============
<7>[1204792.181747] rsyslogd(17615): READ block 328288 on md256 (8 sectors)

============= 2/100 test, Thu May 14 15:22:29 WST 2020 ===============
<7>[1204803.021253] jbd2/dm-11-8(3746): WRITE block 2143897320 on dm-11 (8 sectors)
<7>[1204801.251746] rsyslogd(17615): READ block 403456 on md256 (8 sectors)
<7>[1204801.251769] rsyslogd(17615): READ block 403464 on md256 (8 sectors)
<7>[1204801.251780] rsyslogd(17615): READ block 403472 on md256 (8 sectors)
<7>[1204801.251790] rsyslogd(17615): READ block 403480 on md256 (8 sectors)
<7>[1204801.501723] rsyslogd(17615): READ block 327776 on md256 (8 sectors)
<7>[1204802.981745] rsyslogd(17615): READ block 325384 on md256 (8 sectors)
<7>[1204803.025187] rsyslogd(17615): READ block 325376 on md256 (8 sectors)
<7>[1204803.065265] rsyslogd(17615): READ block 325360 on md256 (8 sectors)
<7>[1204803.065315] rsyslogd(17615): READ block 325368 on md256 (8 sectors)

============= 3/100 test, Thu May 14 15:22:40 WST 2020 ===============
<7>[1204811.301827] kjournald(1884): WRITE block 953912 on md9 (8 sectors)
<7>[1204811.371688] kjournald(1884): WRITE block 32672 on md9 (8 sectors)
<7>[1204807.291685] rsyslogd(17615): READ block 339144 on md256 (8 sectors)
<7>[1204810.041723] rsyslogd(17615): READ block 283040 on md256 (8 sectors)
<7>[1204810.041761] rsyslogd(17615): READ block 283056 on md256 (8 sectors)
<7>[1204810.041777] rsyslogd(17615): READ block 283064 on md256 (8 sectors)
<7>[1204810.201802] rsyslogd(17615): READ block 659648 on md256 (8 sectors)
<7>[1204810.201881] rsyslogd(17615): READ block 659656 on md256 (8 sectors)
<7>[1204811.301724] rsyslogd(17615): READ block 282376 on md256 (8 sectors)
<7>[1204811.371680] rsyslogd(17615): READ block 330776 on md256 (8 sectors)
<7>[1204811.371954] rsyslogd(17615): READ block 339888 on md256 (8 sectors)
<7>[1204811.681647] rsyslogd(17615): READ block 94552 on md256 (8 sectors)
<7>[1204812.421904] rsyslogd(17615): READ block 335864 on md256 (8 sectors)

============= 4/100 test, Thu May 14 15:22:50 WST 2020 ===============
<7>[1204816.001766] rsyslogd(17615): READ block 333200 on md256 (8 sectors)

============= 5/100 test, Thu May 14 15:22:52 WST 2020 ===============
<7>[1204821.464960] rsyslogd(18721): WRITE block 953944 on md9 (8 sectors)
<7>[1204821.465691] kjournald(1884): WRITE block 32728 on md9 (8 sectors)
<7>[1204821.465709] kjournald(1884): WRITE block 32736 on md9 (8 sectors)
<7>[1204821.465722] kjournald(1884): WRITE block 32744 on md9 (8 sectors)
<7>[1204821.465737] kjournald(1884): WRITE block 32752 on md9 (8 sectors)
<7>[1204821.465753] kjournald(1884): WRITE block 32760 on md9 (8 sectors)
<7>[1204821.451630] rsyslogd(17615): READ block 473424 on md256 (8 sectors)

============= 6/100 test, Thu May 14 15:22:57 WST 2020 ===============
<7>[1204851.732261] kjournald(1884): WRITE block 954000 on md9 (8 sectors)
Last edited by yorky2010 on Sun May 17, 2020 3:10 pm, edited 1 time in total.
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

Here are the top processes:

Code: Select all

Mem: 1244404K used, 333652K free, 24940K shrd, 354776K buff, 159456K cached
CPU: 13.1% usr 3.9% sys 0.2% nic 82.1% idle 0.3% io 0.0% irq 0.0% sirq
Load average: 1.08 0.63 0.48 2/1062 9333
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
2882 1 admin S 1521m 98.6 2 0.5 /sbin/hal_daemon -f
7038 1 admin S 42180 2.6 3 0.4 /mnt/ext/opt/Python/bin/python /mnt/ext/opt/netmgr/api/core/asd.pyc
28194 1 admin S N 1062m 68.8 2 0.3 python /share/CE_CACHEDEV1_DATA/.qpkg/HybridBackup/CloudConnector3/python/bin/sync config.db b584f67a-8a2e-11ea-81f8-245ebe3c67c1 start
27891 1 admin S N 1062m 68.9 2 0.3 python /share/CE_CACHEDEV1_DATA/.qpkg/HybridBackup/CloudConnector3/python/bin/sync config.db 983e8614-8bbc-11ea-a620-245ebe3c67c1 start
26664 1 admin S 1295m 83.9 3 0.2 {cc3-fastcgi} python /share/CE_CACHEDEV1_DATA/.qpkg/HybridBackup/CloudConnector3/python/bin/cc3-fastcgi -w /share/CE_CACHEDEV1_DATA/.qpkg/HybridBackup/CloudConnector3 start
28058 1 admin S 274m 17.7 1 0.2 /usr/local/bin/tutk_agent -d 4
26778 1 admin S 333m 21.6 3 0.2 python /share/CE_CACHEDEV1_DATA/.qpkg/HybridBackup/CloudConnector3/python/bin/rra 726b32c2-8c1e-11ea-a620-245ebe3c67c1 start
8726 8402 admin S 105m 6.8 1 0.1 {gwd} python /usr/local/network/nmd/nmd.pyc
28662 1 admin S 5384 0.3 1 0.1 {qdesk_soldier} /bin/sh /sbin/qdesk_soldier
7960 7918 admin R 3428 0.2 0 0.1 top
30710 16854 admin S < 2792m 181.0 3 0.0 /usr/local/apache/bin/apache_proxy -k start -f /etc/apache-sys-proxy.conf
16390 16388 admin S 322m 20.9 3 0.0 /usr/local/medialibrary/bin/mytranscodesvr -debug -db /share/CE_CACHEDEV1_DATA/
26802 26778 admin S 273m 17.7 3 0.0 /usr/bin/RTRR -C:/etc/qsync/qsynchbs3.conf -J:Job0
4211 1 admin S 220m 14.3 2 0.0 /usr/sbin/rtk_transcoding_daemon
8870 1 admin S 170m 11.0 1 0.0 /mnt/ext/opt/Python/bin/python /mnt/ext/opt/netmgr/api/core/ip_monitor.pyc
8411 8402 admin S 104m 6.7 1 0.0 {ncaas} python /usr/local/network/nmd/nmd.pyc
18805 1 admin S 76440 4.8 0 0.0 /sbin/rfsd -i -f /etc/rfsd.conf
7027 1 admin S 29320 1.8 0 0.0 /mnt/ext/opt/netmgr/util/redis/redis-server *:0
16715 1 admin S 19172 1.2 1 0.0 /usr/local/sbin/_thttpd_ -p 58080 -nor -nos -u admin -d /home/httpd -c **.* -h 127.0.0.1 -i /var/lock/._thttpd_.pid
7864 29279 admin S 18624 1.1 0 0.0 sshd: admin pts/1
3 2 admin SW 0 0.0 0 0.0 [ksoftirqd/0]
20 2 admin SW 0 0.0 3 0.0 [ksoftirqd/3]
12 2 admin SW 0 0.0 1 0.0 [ksoftirqd/1]
3042 2 admin SW 0 0.0 0 0.0 [md1_raid1]
1752 2 admin SW< 0 0.0 2 0.0 [kworker/2:1H]
7236 2 admin SW 0 0.0 2 0.0 [kworker/2:2]
20084 17752 admin S 1384m 89.8 0 0.0 /usr/local/apache/bin/apache_proxys -k start -f /etc/apache-sys-proxy-ssl.conf
11660 1 admin S 1384m 89.7 2 0.0 /usr/local/sbin/ncd
15133 14369 httpdusr S 1384m 89.7 2 0.0 /usr/local/apache/bin/apache -k start -c PidFile /var/lock/apache.pid -f /etc/config/apache/apache.conf
10434 1 admin S 852m 55.2 2 0.0 /usr/local/mariadb/bin/mysqld --defaults-file=/usr/local/mariadb/my-mariadb.cnf --basedir=/usr/local/mariadb --datadir=/share/CE_CACHEDEV1_DATA/.system/data --plugin-dir=/usr/local/mariadb/li
18726 1 admin S 491m 31.8 0 0.0 /sbin/upnpd eth0 eth0
11601 1 admin S 484m 31.4 2 0.0 /usr/local/sbin/ncdb --defaults-file=/mnt/ext/opt/NotificationCenter/etc/nc-mariadb.conf
6987 1 admin S 388m 25.1 0 0.0 /mnt/ext/opt/Python/bin/python ./manage.pyc runfcgi method=threaded socket=/tmp/netmgr.sock pidfile=/tmp/netmgr.pid
5468 1 admin S 293m 19.0 2 0.0 /sbin/cs_qdaemon
26796 1 admin S 177m 11.5 1 0.0 tunnelagent
8412 8402 admin S 176m 11.4 1 0.0 {qserviced} python /usr/local/network/nmd/nmd.pyc
16768 1 admin S 168m 10.9 0 0.0 {php-fpm-proxy} php-fpm: master process (/etc/php-fpm-sys-proxy.conf)
14355 1 admin S 168m 10.9 2 0.0 php-fpm: master process (/etc/config/apache/php-fpm.conf)
14356 14355 httpdusr S 168m 10.9 2 0.0 php-fpm: pool www
14357 14355 httpdusr S 168m 10.9 3 0.0 php-fpm: pool www
16769 16768 admin S 168m 10.9 0 0.0 {php-fpm-proxy} php-fpm: pool www
16770 16768 admin S 168m 10.9 0 0.0 {php-fpm-proxy} php-fpm: pool www
28558 28194 admin S N 163m 10.6 3 0.0 qmonitor -client:CloudBackupSync -pid:28194 -filter:0x6403 -m -reg:/share/CE_CACHEDEV1_DATA/Box.com
28189 27891 admin S N 163m 10.6 1 0.0 qmonitor -client:CloudBackupSync -pid:27891 -filter:0x6403 -m -reg:/share/CE_CACHEDEV1_DATA/Dropbox
18810 1 admin S 162m 10.5 3 0.0 /usr/local/bin/rfsd_qmonitor -f:/tmp/rfsd_qmonitor.conf
29556 1 admin S 160m 10.4 3 0.0 /mnt/ext/opt/Python/bin/python2 /sbin/wsd.py
4282 1 admin S 155m 10.0 2 0.0 /sbin/lvmetad
8735 8731 admin S 149m 9.6 2 0.0 /usr/local/bin/rates_monitor_start
20749 1 admin S 147m 9.5 0 0.0 /usr/sbin/rsyslogd -f /etc/rsyslog_only_klog.conf -c4 -M /usr/local/lib/rsyslog/
18882 1 admin S 106m 6.9 1 0.0 /sbin/bcclient
19559 1 admin S 104m 6.8 1 0.0 /sbin/qShield
8728 8402 admin S 104m 6.8 3 0.0 {ressd} python /usr/local/network/nmd/nmd.pyc
19841 1 admin S 98m 6.4 0 0.0 /usr/bin/qsnapman
19546 1 admin S 99248 6.2 1 0.0 qLogEngined: Write log is enabled...
19544 1 admin S 99100 6.2 1 0.0 qNoticeEngined: Write notice is enabled...
26364 1 admin S 96416 6.1 0 0.0 /usr/bin/RTRR_MANAGER
11640 1 admin S 92420 5.8 1 0.0 /usr/local/sbin/ncloud
23308 16244 admin S 46524 2.9 3 0.0 /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
8441 16244 admin S 45772 2.9 1 0.0 /usr/local/samba/sbin/smbd -l /var/log -D -s /etc/config/smb.conf
blackbeast
Getting the hang of things
Posts: 50
Joined: Mon Jun 22, 2009 1:39 pm

Re: TS-230 IronWolf heads seeking constantly

Post by blackbeast »

Hi Yorky

Could you please tell us what version of firmware you have installed?

Also, exactly what model of IronWolf drives do you have? You should be able to find the model number in the disk settings section.

From your first log output, it appears that rsyslogd is responsible for a lot of the activity. Have you tried disabling syslog?
Control Panel -> Syslog Logs -> Syslog Client Management -> Enable Syslog [x] <--- Deselect
NAS: TS-653B | FW 4.4.3.1354 | 6 x 6Tb WD60-ERFX (24TB RAID6) | 8Gb RAM
NAS: TS-453Be | FW 4.4.3.1354 | 4 x 8Tb WD80-EFRX (24TB RAID5) | 4Gb RAM
NAS: TS-559 Pro | FW 4.2.6 | 5 x 3Tb WD30-EFRX (12TB RAID5) | 1Gb RAM
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

Firmware is now 4.4.2.1302.

Both are ST4000VN008-2DR166

Do you mean Control Panel > Syslog Server? I have disabled "Enable syslog server".
blackbeast
Getting the hang of things
Posts: 50
Joined: Mon Jun 22, 2009 1:39 pm

Re: TS-230 IronWolf heads seeking constantly

Post by blackbeast »

Have you seen this thread item? Might be worth a try:

viewtopic.php?f=55&t=130439#p600880
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

I'm not sure what to do with the info in that one :(
blackbeast
Getting the hang of things
Posts: 50
Joined: Mon Jun 22, 2009 1:39 pm

Re: TS-230 IronWolf heads seeking constantly

Post by blackbeast »

The idea is to turn off power management on the drive. SSH into the NAS and try the following on your drives:

Code: Select all

# hdparm -B 255 /dev/sda
# hdparm -B 255 /dev/sdb
If drive APM is the problem, the seeking should stop immediately.
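To confirm the setting took, `hdparm -B` with no value reports the drive's current APM level ("off" means APM is disabled). A minimal parse of the expected report (the sample text below is hypothetical; run the real command over SSH on the NAS):

```shell
# Parse hdparm's APM report. The sample output is hypothetical; on the
# NAS you would capture it with:  hdparm -B /dev/sda
sample='/dev/sda:
 APM_level      = off'
level=$(echo "$sample" | awk -F= '/APM_level/ {gsub(/ /, "", $2); print $2}')
echo "APM is: $level"
```

`hdparm -C /dev/sda` similarly reports whether the drive is active/idle or in standby, which is handy for checking spin down.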
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

OK, so I did that. Both said "setting Advanced Power Management level to disabled", and the constant seeking noise stopped, BUT it still makes a few head movement sounds every few seconds — just less than before. Also, before, the 1 and 2 lights on the front flashed every few seconds; they still do, but with a more rapid flash. I'll see if they spin down at all like they did when I first got it, before putting data on them. EDIT: Actually, I'm guessing that won't happen, as there are three HBS 3 cloud syncs set up (although even after stopping these it still does a little head movement every few seconds).
blackbeast
Getting the hang of things
Posts: 50
Joined: Mon Jun 22, 2009 1:39 pm

Re: TS-230 IronWolf heads seeking constantly

Post by blackbeast »

The changes are not permanent; they will reset if you power cycle the unit. It was just a test to see if that's the problem. If you want to make the changes permanent you need to alter your autostart.sh (which can be involved: working out which model you have and where the changes need to go).

The issue may be to do with the drives. I don't have a lot of experience with the Ironwolf drives so maybe someone with more experience could comment on that.
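For reference, the autostart.sh fragment itself would be small. A sketch only — the hdparm path and device names are assumptions that vary by model, and enabling autostart.sh on your unit is a separate step (here the fragment is written to /tmp and syntax-checked, which is safe to try anywhere):

```shell
# Sketch of an autostart.sh fragment that reapplies the APM setting at
# boot. Device names and the hdparm path are assumptions; verify both
# on your own unit before using.
cat > /tmp/autostart_apm.sh <<'EOF'
#!/bin/sh
# Disable APM head parking on both RAID 1 members
/sbin/hdparm -B 255 /dev/sda
/sbin/hdparm -B 255 /dev/sdb
EOF
chmod +x /tmp/autostart_apm.sh
sh -n /tmp/autostart_apm.sh && echo "fragment syntax OK"
```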
Herve85
First post
Posts: 1
Joined: Fri Jun 05, 2020 8:32 am

Re: TS-230 IronWolf heads seeking constantly

Post by Herve85 »

Hi yorky2010,

I have exactly the same issue with my TS-230 and IronWolf 4 TB drives...
Did you find a solution to stop this annoying noise?

I only have a few GB of data, so I can perform some tests or even completely reset my NAS if needed.
yorky2010
Starting out
Posts: 20
Joined: Sat May 16, 2020 8:30 am

Re: TS-230 IronWolf heads seeking constantly

Post by yorky2010 »

Nah, I haven't done anything further, unfortunately.
manto235
New here
Posts: 5
Joined: Sat Aug 08, 2020 3:15 pm

Re: TS-230 IronWolf heads seeking constantly

Post by manto235 »

Hi,

I'm having exactly the same situation as both of you (identical NAS and disks).
I could stop the constant activity with hdparm (-B 255).

However, I'm still having trouble getting the disks to spin down after inactivity.
If I set the disk standby timeout to 5 minutes in the QTS control panel, the disks spin down but wake up just after.
I stopped all the services that I could.

If I run the blkdevMonitor script, the disks never spin down (it seems a user connected via the GUI or even by SSH prevents the system from going into standby).

Here is my blkdevMonitor log: https://pastebin.com/c2D6QZ8P
As you can see, there is sometimes no log for 20 minutes...
TS-230 | FW 4.4.3.1400 | 2 x 4 TB Seagate IronWolf (ST4000VN008) - RAID 1
system11
New here
Posts: 5
Joined: Fri Jan 28, 2011 5:02 am

Re: TS-230 IronWolf heads seeking constantly

Post by system11 »

I have the same issue. I had some WD Red drives; one failed, so I decided to upgrade capacity and chose IronWolf ones. The noise is maddening: a constant low-level chattering every time the disk is idle. The funny thing is, if you hammer it with a RAID resync they're more or less silent. I asked Seagate about this and their response was to offer me an RMA, which seems pointless when it's clearly related to the IHM features scanning the drive when 'offline'. Fortunately I've only had them for a week, so they're going back to Amazon.
pjee
Starting out
Posts: 12
Joined: Fri Aug 26, 2011 3:44 pm

Re: TS-230 IronWolf heads seeking constantly

Post by pjee »

+1 here

I wonder if it is related to the legacy firmware. I have a TS-219P+ also with the same IronWolf drives, and the noise is driving me crazy — and I can tell you it has not ALWAYS been like this. Some things changed in the firmware over the last few years. Besides the drives not powering down anymore, with the 512 MB of RAM and the latest (October) firmware clamscan also can't start up (it overflows RAM and then gets killed), so the malware scanner isn't working.
If you add it all up, one would suspect the machine has a virus or malware, but I checked this, and even the (very helpful!) QNAP helpdesk checked, and the system is clean.
I think it's just early signs of 'end of life', almost literally.

Anyone resolved the issue meanwhile?

Piet