TS-230 random spin up

Discussion about hard drive spin down (standby) feature of NAS.
Post Reply
Alex223
New here
Posts: 7
Joined: Tue Jul 28, 2020 11:14 pm

TS-230 random spin up

Post by Alex223 » Sun Nov 29, 2020 10:16 pm

Hello Community,

I've noticed my NAS HDD spins up and down constantly. I terminated all apps that aren't really necessary, to stop them reading from and writing to the HDD, but I can't figure out what is causing this.
Currently there is only one hard disk installed in this 2-bay NAS, and it is running firmware 4.4.3.1439.

I downloaded and ran the blkdevMonitor script and let it run for a while; these are the results:

===== Welcome to use blkdevMonitor_v2 on Sat Nov 28 17:06:19 CET 2020 =====
Stop klogd.sh daemon... Done
Turn off/on VM block_dump & Clean dmesg
Countdown: 3 2 1
Start...
============= 0/100 test, Sat Nov 28 17:06:29 CET 2020 ===============
<7>[331035.024553] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[331035.024553] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 1/100 test, Sat Nov 28 17:15:52 CET 2020 ===============
<7>[331256.953368] kjournald(1867): WRITE block 1043144 on md9 (8 sectors)
<7>[331256.953432] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[331256.953432] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 2/100 test, Sat Nov 28 17:19:33 CET 2020 ===============
<7>[332377.016421] logrotate(32318): WRITE block 916072 on md9 (8 sectors)

============= 3/100 test, Sat Nov 28 17:38:14 CET 2020 ===============
<7>[332516.303313] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[332516.303313] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 4/100 test, Sat Nov 28 17:40:35 CET 2020 ===============
<7>[332637.183636] kjournald(1867): WRITE block 916432 on md9 (8 sectors)
<7>[332637.183690] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[332637.183690] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 5/100 test, Sat Nov 28 17:42:34 CET 2020 ===============
<7>[333214.863794] kjournald(1867): WRITE block 916792 on md9 (8 sectors)
<7>[333214.863843] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[333214.863843] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 6/100 test, Sat Nov 28 17:52:11 CET 2020 ===============
<7>[333860.933536] kjournald(1867): WRITE block 917200 on md9 (8 sectors)
<7>[333860.933581] md9_raid1(1854): WRITE block 10602

============= 7/100 test, Sat Nov 28 18:02:57 CET 2020 ===============
<7>[333991.897763] rsyslogd(13077): WRITE block 917312 on md9 (8 sectors)
<7>[333991.897801] kjournald(1867): WRITE block 22704 on md9 (8 sectors)
<7>[333991.897821] kjournald(1867): WRITE block 22712 on md9 (8 sectors)
<7>[333991.898137] kjournald(1867): WRITE block 22720 on md9 (8 sectors)

============= 8/100 test, Sat Nov 28 18:05:13 CET 2020 ===============
<7>[334333.019873] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[334333.019873] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 9/100 test, Sat Nov 28 18:10:55 CET 2020 ===============
<7>[334403.134828] kjournald(1867): WRITE block 25296 on md9 (8 sectors)
<7>[334403.134861] kjournald(1867): WRITE block 25304 on md9 (8 sectors)
<7>[334403.134918] rsyslogd(13077): WRITE block 919376 on md9 (8 sectors)
<7>[334403.135081] kjournald(1867): WRITE block 25312 on md9 (8 sectors)

============= 10/100 test, Sat Nov 28 18:12:00 CET 2020 ===============
<7>[334789.383364] rsyslogd(13077): dirtied inode 21448 (kmsg) on md9
<7>[334789.383382] rsyslogd(13077): dirtied inode 21448 (kmsg) on md9

============= 11/100 test, Sat Nov 28 18:18:27 CET 2020 ===============
<7>[334899.843357] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[334899.843357] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 12/100 test, Sat Nov 28 18:20:21 CET 2020 ===============
<7>[335189.013344] kjournald(1867): WRITE block 919904 on md9 (8 sectors)
<7>[335189.013404] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[335189.013404] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 13/100 test, Sat Nov 28 18:25:07 CET 2020 ===============
<7>[335237.843308] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[335237.849364] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[335237.843308] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[335237.849364] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 14/100 test, Sat Nov 28 18:25:59 CET 2020 ===============
<7>[335363.013380] kjournald(1867): WRITE block 920024 on md9 (8 sectors)
<7>[335363.013431] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[335363.020420] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[335363.013431] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[335363.020420] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 15/100 test, Sat Nov 28 18:28:01 CET 2020 ===============
<7>[335764.224535] rsyslogd(13077): WRITE block 920320 on md9 (8 sectors)
<7>[335764.224547] kjournald(1867): WRITE block 34160 on md9 (8 sectors)
<7>[335764.224571] kjournald(1867): WRITE block 34168 on md9 (8 sectors)

============= 16/100 test, Sat Nov 28 18:34:41 CET 2020 ===============
<7>[335850.838443] rsyslogd(13077): WRITE block 920384 on md9 (8 sectors)
<7>[335850.838784] kjournald(1867): WRITE block 34736 on md9 (8 sectors)
<7>[335850.838811] kjournald(1867): WRITE block 34744 on md9 (8 sectors)

============= 17/100 test, Sat Nov 28 18:36:13 CET 2020 ===============
<7>[336099.193294] md9_raid1(1854): WRITE block 10

============= 18/100 test, Sat Nov 28 18:40:16 CET 2020 ===============
<7>[336168.233590] kjournald(1867): WRITE block 920624 on md9 (8 sectors)
<7>[336168.233646] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[336168.233646] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 19/100 test, Sat Nov 28 18:41:25 CET 2020 ===============
<7>[336253.033636] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[336253.042052] kworker/u8:2(30106): WRITE block 8 on md9 (8 sectors)
<7>[336253.042084] kworker/u8:2(30106): WRITE block 786704 on md9 (8 sectors)
<7>[336253.033636] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 20/100 test, Sat Nov 28 18:42:55 CET 2020 ===============
<7>[336260.933396] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[336260.933396] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 21/100 test, Sat Nov 28 18:42:57 CET 2020 ===============
<7>[336423.643306] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[336423.653103] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[336423.643306] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[336423.653103] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 22/100 test, Sat Nov 28 18:45:45 CET 2020 ===============
<7>[336726.643451] kjournald(1867): WRITE block 923512 on md9 (8 sectors)
<7>[336726.643490] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[336726.643490] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 23/100 test, Sat Nov 28 18:50:43 CET 2020 ===============
<7>[336738.001416] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[336738.001416] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 24/100 test, Sat Nov 28 18:50:55 CET 2020 ===============
<7>[336924.004064] rsyslogd(13077): WRITE block 923656 on md9 (8 sectors)
<7>[336924.004262] kjournald(1867): WRITE block 9144 on md9 (8 sectors)
<7>[336924.004282] kjournald(1867): WRITE block 9152 on md9 (8 sectors)
<7>[336924.004297] kjournald(1867): WRITE block 9160 on md9 (8 sectors)
<7>[336924.004314] kjournald(1867): WRITE block 9168 on md9 (8 sectors)
<7>[336924.004330] kjournald(1867): WRITE block 9176 on md9 (8 sectors)
<7>[336924.004582] kjournald(1867): WRITE block 9184 on md9 (8 sectors)

============= 25/100 test, Sat Nov 28 18:54:11 CET 2020 ===============
<7>[337364.214709] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[337364.214709] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 26/100 test, Sat Nov 28 19:01:23 CET 2020 ===============
<7>[337740.273383] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[337740.277677] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[337740.273383] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[337740.277677] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 27/100 test, Sat Nov 28 19:07:40 CET 2020 ===============
<7>[337864.498715] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[337864.498715] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 28/100 test, Sat Nov 28 19:09:43 CET 2020 ===============
<7>[338387.973813] rsyslogd(13077): WRITE block 902544 on md9 (8 sectors)
<7>[338387.973993] kjournald(1867): WRITE block 18896 on md9 (8 sectors)
<7>[338387.974011] kjournald(1867): WRITE block 18904 on md9 (8 sectors)
<7>[338387.974206] kjournald(1867): WRITE block 18912 on md9 (8 sectors)

============= 29/100 test, Sat Nov 28 19:18:30 CET 2020 ===============
<7>[338686.893619] kjournald(1867): WRITE block 916368 on md9 (8 sectors)
<7>[338686.893677] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[338686.893677] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 30/100 test, Sat Nov 28 19:23:26 CET 2020 ===============
<7>[338952.943520] kjournald(1867): WRITE block 916568 on md9 (8 sectors)
<7>[338952.943566] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[338952.943566] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 31/100 test, Sat Nov 28 19:27:55 CET 2020 ===============
<7>[338979.883683] kjournald(1867): WRITE block 916584 on md9 (8 sectors)
<7>[338979.883736] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[338979.883736] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 32/100 test, Sat Nov 28 19:28:27 CET 2020 ===============
<7>[339070.168565] kworker/u8:0(21116): WRITE block 916656 on md9 (8 sectors)

============= 33/100 test, Sat Nov 28 19:29:48 CET 2020 ===============
<7>[339208.972354] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[339208.972354] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 34/100 test, Sat Nov 28 19:32:22 CET 2020 ===============
<7>[339364.303347] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[339364.309038] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[339364.303347] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[339364.309038] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 35/100 test, Sat Nov 28 19:34:43 CET 2020 ===============
<7>[339882.393225] rsyslogd(13077): dirtied inode 21448 (kmsg) on md9
<7>[339882.393313] rsyslogd(13077): dirtied inode 21448 (kmsg) on md9
<7>[339882.393326] rsyslogd(13077): dirtied inode 21448 (kmsg) on md9
<7>[339882.384089] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[339882.392550] kworker/u8:1(23044): WRITE block 786704 on md9 (8 sectors)
<7>[339882.392571] kworker/u8:1(23044): WRITE block 902536 on md9 (8 sectors)
<7>[339882.392590] kworker/u8:1(23044): WRITE block 8 on md9 (8 sectors)
<7>[339882.392666] rsyslogd(13077): WRITE block 917304 on md9 (8 sectors)
<7>[339882.393265] kjournald(1867): WRITE block 28920 on md9 (8 sectors)
<7>[339882.393284] kjournald(1867): WRITE block 28928 on md9 (8 sectors)
<7>[339882.393474] kjournald(1867): WRITE block 28936 on md9 (8 sectors)
<7>[339882.384089] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 36/100 test, Sat Nov 28 19:43:20 CET 2020 ===============
<7>[340064.953318] kjournald(1867): WRITE block 919192 on md9 (8 sectors)
<7>[340064.953375] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[340064.953375] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 37/100 test, Sat Nov 28 19:46:22 CET 2020 ===============
<7>[340138.067758] kworker/u8:2(32054): WRITE block 8 on md9 (8 sectors)
<7>[340138.067790] kworker/u8:2(32054): WRITE block 786704 on md9 (8 sectors)

============= 38/100 test, Sat Nov 28 19:47:42 CET 2020 ===============
<7>[340199.253333] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[340199.253333] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 39/100 test, Sat Nov 28 19:48:37 CET 2020 ===============
<7>[340231.014668] md
<7>[340231.023454] kjournald(1867): WRITE block 31256 on md9 (8 sectors)
<7>[340231.023653] kjournald(1867): WRITE block 31264 on md9 (8 sectors)

============= 40/100 test, Sat Nov 28 19:49:11 CET 2020 ===============
<7>[340247.814132] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[340247.814132] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 41/100 test, Sat Nov 28 19:49:25 CET 2020 ===============
<7>[340609.223341] kworker/u8:2(32054): WRITE block 919592 on md9 (8 sectors)
<7>[340609.223467] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[340609.227566] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[340609.223467] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[340609.227566] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 42/100 test, Sat Nov 28 19:55:26 CET 2020 ===============
<7>[340850.373321] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[340850.373321] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 43/100 test, Sat Nov 28 19:59:27 CET 2020 ===============
<7>[341898.853349] rsyslogd(13077): WRITE block 920440 on md9 (8 sectors)
<7>[341898.853557] kjournald(1867): WRITE block 9136 on md9 (8 sectors)
<7>[341898.853574] kjournald(1867): WRITE block 9144 on md9 (8 sectors)
<7>[341898.853776] kjournald(1867): WRITE block 9152 on md9 (8 sectors)

============= 44/100 test, Sat Nov 28 20:17:00 CET 2020 ===============
<7>[342036.565328] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[342036.565328] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 45/100 test, Sat Nov 28 20:19:19 CET 2020 ===============
<7>[342269.274583] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[342269.274583] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 46/100 test, Sat Nov 28 20:23:10 CET 2020 ===============
<7>[342689.478187] rsyslogd(13077): WRITE block 922008 on md9 (8 sectors)
<7>[342689.478387] kjournald(1867): WRITE block 14024 on md9 (8 sectors)
<7>[342689.478405] kjournald(1867): WRITE block 14032 on md9 (8 sectors)
<7>[342689.478608] kjournald(1867): WRITE block 14040 on md9 (8 sectors)

============= 47/100 test, Sat Nov 28 20:30:06 CET 2020 ===============
<7>[343035.463309] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[343035.466877] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[343035.463309] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[343035.466877] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 48/100 test, Sat Nov 28 20:35:58 CET 2020 ===============
<7>[343360.964917] kworker/u8:0(22675): WRITE block 8 on md9 (8 sectors)
<7>[343360.964947] kworker/u8:0(22675): WRITE block 786704 on md9 (8 sectors)
<7>[343360.965066] kworker/u8:0(22675): WRITE block 923792 on md9 (8 sectors)

============= 49/100 test, Sat Nov 28 20:41:22 CET 2020 ===============
<7>[344072.967147] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[344072.967147] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 50/100 test, Sat Nov 28 20:53:19 CET 2020 ===============
<7>[344439.063304] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[344439.063304] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 51/100 test, Sat Nov 28 20:59:20 CET 2020 ===============
<7>[344488.162796] setcfg(23664): dirtied inode 13502 (vg1_snapshot_reservation.lo~) on md9
<7>[344488.163041] setcfg(23664): dirtied inode 13510 (?) on md9

============= 52/100 test, Sat Nov 28 21:00:05 CET 2020 ===============
<7>[344727.073576] kjournald(1867): WRITE block 923688 on md9 (8 sectors)
<7>[344727.073646] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[344727.073646] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 53/100 test, Sat Nov 28 21:04:04 CET 2020 ===============
<7>[344834.973306] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9
<7>[344834.973317] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9

============= 54/100 test, Sat Nov 28 21:05:58 CET 2020 ===============
<7>[344960.973836] kjournald(1867): WRITE block 923856 on md9 (8 sectors)
<7>[344960.973883] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[344960.980612] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[344960.973883] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[344960.980612] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 55/100 test, Sat Nov 28 21:08:01 CET 2020 ===============
<7>[345020.303311] kworker/u8:2(28726): WRITE block 923504 on md9 (8 sectors)
<7>[345020.303359] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[345020.303359] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 56/100 test, Sat Nov 28 21:08:57 CET 2020 ===============
<7>[345045.393323] kworker/u8:1(22505): WRITE block 786704 on md9 (8 sectors)
<7>[345045.393385] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[345045.393385] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 57/100 test, Sat Nov 28 21:09:22 CET 2020 ===============
<7>[345415.013262] kjournald(1867): WRITE block 924144 on md9 (8 sectors)
<7>[345415.013302] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[345415.013302] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 58/100 test, Sat Nov 28 21:15:40 CET 2020 ===============
<7>[345497.013268] kjournald(1867): WRITE block 924200 on md9 (8 sectors)
<7>[345497.013308] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[345497.021812] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[345497.013308] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[345497.021812] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 59/100 test, Sat Nov 28 21:16:55 CET 2020 ===============
<7>[345870.134138] kjournald(1867): WRITE block 34848 on md9 (8 sectors)

============= 60/100 test, Sat Nov 28 21:23:07 CET 2020 ===============
<7>[345973.265233] Plex Script Hos(15821): dirtied inode 591673 (xmlrpclib.py) on dm-9
<7>[345973.265651] Plex Script Hos(15821): READ block 38155672 on dm-9 (72 sectors)

============= 61/100 test, Sat Nov 28 21:24:50 CET 2020 ===============
<7>[345975.166587] Plex Script Hos(15821): dirtied inode 590010 (movie_models.pym) on dm-9
<7>[345975.166221] Plex Script Hos(15821): READ block 38016168 on dm-9 (8 sectors)

============= 62/100 test, Sat Nov 28 21:24:52 CET 2020 ===============
<7>[346122.993314] kworker/u8:0(17556): WRITE block 923504 on md9 (8 sectors)
<7>[346122.993360] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[346122.993360] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 63/100 test, Sat Nov 28 21:27:20 CET 2020 ===============
<7>[346285.022074] setcfg(22460): dirtied inode 13545 (CACHEDEV1_DATA.log) on md9

============= 64/100 test, Sat Nov 28 21:30:02 CET 2020 ===============
<7>[346528.813889] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[346528.813889] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 65/100 test, Sat Nov 28 21:34:22 CET 2020 ===============
<7>[346652.943283] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[346652.943283] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 66/100 test, Sat Nov 28 21:36:15 CET 2020 ===============
<7>[346900.183288] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[346900.183288] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 67/100 test, Sat Nov 28 21:40:25 CET 2020 ===============
<7>[347155.010480] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[347155.010480] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 68/100 test, Sat Nov 28 21:44:32 CET 2020 ===============
<7>[347636.663397] kworker/u8:1(4500): WRITE block 786704 on md9 (8 sectors)
<7>[347636.663449] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[347636.663449] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 69/100 test, Sat Nov 28 21:52:33 CET 2020 ===============
<7>[347787.1
<7>[347797.133295] kworker/u8:0(17556): WRITE block 1000368 on md9 (8 sectors)

============= 70/100 test, Sat Nov 28 21:55:14 CET 2020 ===============
<7>[348012.766531] kworker/u8:2(5438): WRITE block 786704 on md9 (8 sectors)

============= 71/100 test, Sat Nov 28 21:58:58 CET 2020 ===============
<7>[348072.193484] kjournald(1867): WRITE block 1000576 on md9 (8 sectors)
<7>[348072.193524] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[348072.193524] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 72/100 test, Sat Nov 28 21:59:49 CET 2020 ===============
<7>[348086.514589] setcfg(30928): dirtied inode 13510 (vg1.lo~) on md9

============= 73/100 test, Sat Nov 28 22:00:05 CET 2020 ===============
<7>[348392.054292] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[348392.062703] kjournald(1867): WRITE block 1000824 on md9 (8 sectors)
<7>[348392.062941] rsyslogd(13077): WRITE block 1000824 on md9 (8 sectors)
<7>[348392.063124] kjournald(1867): WRITE block 19256 on md9 (8 sectors)
<7>[348392.063141] kjournald(1867): WRITE block 19264 on md9 (8 sectors)
<7>[348392.063155] kjournald(1867): WRITE block 19272 on md9 (8 sectors)
<7>[348392.063170] kjournald(1867): WRITE block 19280 on md9 (8 sectors)
<7>[348392.063186] kjournald(1867): WRITE block 19288 on md9 (8 sectors)
<7>[348392.063427] kjournald(1867): WRITE block 19296 on md9 (8 sectors)
<7>[348392.054292] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 74/100 test, Sat Nov 28 22:05:09 CET 2020 ===============
<7>[348532.383636] kjournald(1867): WRITE block 1000920 on md9 (8 sectors)
<7>[348532.383705] md9_raid1(1854): WRITE block 1060216 on sda1 (1
<7>[348532.383705] md9_raid1(1854): WRITE block 1060216 on sda1 (1

============= 75/100 test, Sat Nov 28 22:07:29 CET 2020 ===============
<7>[348870.533325] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[348870.533325] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 76/100 test, Sat Nov 28 22:13:08 CET 2020 ===============
<7>[349421.946647] rsyslogd(13077): WRITE block 1002264 on md9 (8 sectors)
<7>[349421.946871] kjournald(1867): WRITE block 26008 on md9 (8 sectors)
<7>[349421.946890] kjournald(1867): WRITE block 26016 on md9 (8 sectors)
<7>[349421.946906] kjournald(1867): WRITE block 26024 on md9 (8 sectors)
<7>[349421.946924] kjournald(1867): WRITE block 26032 on md9 (8 sectors)
<7>[349421.946941] kjournald(1867): WRITE block 26040 on md9 (8 sectors)
<7>[349421.947216] kjournald(1867): WRITE block 26048 on md9 (8 sectors)

============= 77/100 test, Sat Nov 28 22:22:19 CET 2020 ===============
<7>[349438.431393] kjournald(1867): WRITE block 26248 on md9 (8 sectors)

============= 78/100 test, Sat Nov 28 22:22:35 CET 2020 ===============
<7>[349823.143485] kjournald(1867): WRITE block 913240 on md9 (8 sectors)
<7>[349823.143530] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[349823.150189] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[349823.143530] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[349823.150189] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 79/100 test, Sat Nov 28 22:29:09 CET 2020 ===============
<7>[350027.983362] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[350027.983362] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 80/100 test, Sat Nov 28 22:32:25 CET 2020 ===============
<7>[351782.193546] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9
<7>[351782.193576] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9

============= 81/100 test, Sat Nov 28 23:01:39 CET 2020 ===============
<7>[351877.393498] kjournald(1867): WRITE block 917232 on md9 (8 sectors)
<7>[351877.393547] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[351877.393547] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 82/100 test, Sat Nov 28 23:03:14 CET 2020 ===============
<7>[351985.064325] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[351985.072722] kjournald(1867): WRITE block 917368 on md9 (8 sectors)
<7>[351985.072965] rsyslogd(13077): WRITE block 917368 on md9 (8 sectors)
<7>[351985.073155] kjournald(1867): WRITE block 10040 on md9 (8 sectors)
<7>[351985.073173] kjournald(1867): WRITE block 10048 on md9 (8 sectors)
<7>[351985.073185] kjournald(1867): WRITE block 10056 on md9 (8 sectors)
<7>[351985.073201] kjournald(1867): WRITE block 10064 on md9 (8 sectors)
<7>[351985.073250] kjournald(1867): WRITE block 10072 on md9 (8 sectors)
<7>[351985.073481] kjournald(1867): WRITE block 10080 on md9 (8 sectors)
<7>[351985.064325] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 83/100 test, Sat Nov 28 23:05:02 CET 2020 ===============
<7>[352305.387360] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[352305.387360] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 84/100 test, Sat Nov 28 23:10:27 CET 2020 ===============
<7>[352328.733382] kworker/u8:0(32667): WRITE block 786704 on md9 (8 sectors)
<7>[352328.733505] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[352328.736600] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[352328.733505] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[352328.736600] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 85/100 test, Sat Nov 28 23:10:45 CET 2020 ===============
<7>[352779.693316] kworker/u8:2(9757): WRITE block 919736 on md9 (8 sectors)
<7>[352779.693361] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[352779.693361] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 86/100 test, Sat Nov 28 23:18:22 CET 2020 ===============
<7>[353479.624664] rsyslogd(13077): WRITE block 920256 on md9 (8 sectors)
<7>[353479.624871] kjournald(1867): WRITE block 20040 on md9 (8 sectors)
<7>[353479.624890] kjournald(1867): WRITE block 20048 on md9 (8 sectors)
<7>[353479.625092] kjournald(1867): WRITE block 20056 on md9 (8 sectors)

============= 87/100 test, Sat Nov 28 23:30:01 CET 2020 ===============
<7>[353533.933577] kjournald(1867): WRITE block 920304 on md9 (8 sectors)
<7>[353533.933639] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353533.933639] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 88/100 test, Sat Nov 28 23:30:56 CET 2020 ===============
<7>[353614.983287] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353614.986625] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[353614.983287] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353614.986625] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 89/100 test, Sat Nov 28 23:32:12 CET 2020 ===============
<7>[353997.463254] kworker/u8:1(10053): WRITE block 8 on md9 (8 sectors)
<7>[353997.463293] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353997.463293] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 90/100 test, Sat Nov 28 23:38:34 CET 2020 ===============
<7>[354127.803273] kworker/u8:1(10053): WRITE block 921864 on md9 (8 sectors)

============= 91/100 test, Sat Nov 28 23:40:49 CET 2020 ===============
<7>[354283.383318] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[354283.383318] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 92/100 test, Sat Nov 28 23:43:20 CET 2020 ===============
<7>[354421.001942] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[354421.001942] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

============= 93/100 test, Sat Nov 28 23:45:40 CET 2020 ===============
<7>[354699.143464] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9
<7>[354699.143492] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9

============= 94/100 test, Sat Nov 28 23:50:19 CET 2020 ===============
<7>[354997.416701] rsyslogd(13077): WRITE block 923864 on md9 (8 sectors)
<7>[354997.416720] kjournald(1867): WRITE block 30232 on md9 (8 sectors)
<7>[354997.416740] kjournald(1867): WRITE block 30240 on md9 (8 sectors)
<7>[354997.416958] kjournald(1867): WRITE block 30248 on md9 (8 sectors)

============= 95/100 test, Sat Nov 28 23:55:16 CET 2020 ===============
<7>[355063.500466] kjournald(1867): WRITE block 30720 on md9 (8 sectors)

============= 96/100 test, Sat Nov 28 23:56:25 CET 2020 ===============
<7>[355338.913463] kjournald(1867): WRITE block 913144 on md9 (8 sectors)
<7>[355338.913510] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[355338.913510] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 97/100 test, Sun Nov 29 00:01:03 CET 2020 ===============
<7>[356488.717434] kworker/u8:0(4609): WRITE block 916600 on md9 (8 sectors)

============= 98/100 test, Sun Nov 29 00:20:06 CET 2020 ===============
<7>[356719.193397] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9

============= 99/100 test, Sun Nov 29 00:23:57 CET 2020 ===============
<7>[357006.025543] rsyslogd(13077): WRITE block 916960 on md9 (8 sectors)
<7>[357006.025804] kjournald(1867): WRITE block 11016 on md9 (8 sectors)
<7>[357006.025828] kjournald(1867): WRITE block 11024 on md9 (8 sectors)

Turn off block_dump
Start klogd.sh daemon
-------------
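For anyone who wants to reproduce this without the script: as far as I can tell from its output, blkdevMonitor's core mechanism is the kernel's block_dump switch. A rough sketch of the idea, assuming a root shell on the NAS (names and timings approximate, not the actual script):

```shell
#!/bin/sh
# Rough sketch of what blkdevMonitor_v2 appears to do internally:
# the kernel's block_dump switch makes every block READ/WRITE and
# dirtied-inode event appear in the kernel ring buffer.
echo 1 > /proc/sys/vm/block_dump      # enable block I/O logging (root)
dmesg -c > /dev/null                  # clear the ring buffer first
sleep 60                              # let the NAS idle for a while
dmesg | grep -E 'READ|WRITE|dirtied'  # which process touched which block
echo 0 > /proc/sys/vm/block_dump      # turn logging back off
```

The real script also stops klogd so the logger's own writes don't pollute the results, as the "Stop klogd.sh daemon" line above shows.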

Thanks for any help. :)

Mousetick
Been there, done that
Posts: 946
Joined: Thu Aug 24, 2017 10:28 pm

Re: TS-230 random spin up

Post by Mousetick » Mon Nov 30, 2020 1:13 am

The only thing that jumps out is the rsyslogd process, which writes to disk every few minutes. Normally it's used to save kernel messages to a log file.

You can watch the log file being updated in real time with this command over SSH:

Code: Select all

# tail -f /mnt/HDA_ROOT/.logs/kmsg
Watch it for a while, and press Ctrl+C when you've had enough.

Normally there should be no new kernel messages once the system is up and running, unless certain operations are performed or failures are occurring that relate to disks and filesystems, the network, memory, or hardware in general.
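If you later want to line those messages up against blkdevMonitor's per-test timestamps, one possible variation (plain POSIX shell, same path as above) prefixes each new line with the wall-clock time:

```shell
# Prefix each new kmsg line with the current wall-clock time so it
# can be compared against blkdevMonitor's test timestamps later.
tail -f /mnt/HDA_ROOT/.logs/kmsg | while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done
```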

Alex223
New here
Posts: 7
Joined: Tue Jul 28, 2020 11:14 pm

Re: TS-230 random spin up

Post by Alex223 » Thu Dec 03, 2020 2:42 am

Hi Mousetick,

Thanks for your reply.

Yesterday I rebooted the NAS and let it run for a couple of hours with that command watching the log file, but nothing unusual showed up for hours.
I just ran the command again and it displayed the notification messages below, but their timestamps don't match the occasional "rsyslogd" entries that write to the HDD:

2020-12-02 03:00:49 +01:00 <4> [30564.987947] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:00:50 +01:00 <4> [30565.524726] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:00:54 +01:00 <4> [30569.975723] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:00:54 +01:00 <4> [30570.030997] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:01:19 +01:00 <4> [30595.115622] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:01:26 +01:00 <4> [30601.464862] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 03:01:26 +01:00 <4> [30601.531261] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 06:35:32 +01:00 <4> [43448.325723] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 12:35:32 +01:00 <4> [65048.374237] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended
2020-12-02 18:35:32 +01:00 <4> [86648.433273] EXT2-fs (mmcblk0p5): warning: maximal mount count reached, running e2fsck is recommended

Regards,

Mousetick
Been there, done that
Posts: 946
Joined: Thu Aug 24, 2017 10:28 pm

Re: Ts-230 random spin up

Post by Mousetick » Thu Dec 03, 2020 3:18 am

I don't know how you ran your test to be able to compare times, but the blkdevMonitor logs you posted initially showed many lines like this:
<7>[354699.143492] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9

kmsg is the file you have been watching in the second test, which is stored on the HDA_ROOT "volume" (md9). So in my mind there are no other candidates.

If you want to compare times, you need to run 'tail -f' and blkdevMonitor simultaneously, in parallel. Is this what you did, and what did blkdevMonitor show then?
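For reference, one way to run both at once from a single SSH session is to background the script and follow the log in the foreground. This is a sketch; the script path and log destination below are assumptions, adjust them to wherever you saved the script:

```shell
# Run the monitor script in the background, capturing its output to a
# file, then follow kmsg in the foreground from the same session.
sh /share/Public/blkdevMonitor_v2.sh > /share/Public/blkdev.log 2>&1 &
tail -f /mnt/HDA_ROOT/.logs/kmsg
```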

Alex223
New here
Posts: 7
Joined: Tue Jul 28, 2020 11:14 pm

Re: Ts-230 random spin up

Post by Alex223 » Sat Dec 19, 2020 11:37 pm

Hi Mousetick,

I ran both commands simultaneously as you suggested, and these are the results that showed up in the SSH window.

Thanks
8)
You do not have the required permissions to view the files attached to this post.

Mousetick
Been there, done that
Posts: 946
Joined: Thu Aug 24, 2017 10:28 pm

Re: Ts-230 random spin up

Post by Mousetick » Sun Dec 20, 2020 5:02 am

Ok. Well, never mind, that's not helping.

The issue is that the kernel logging was changed in recent versions of QTS firmware, but the blkdevMonitor script hasn't been updated by QNAP to account for the new kernel logging mechanism. blkdevMonitor is outdated.

What's happening is that blkdevMonitor produces kernel messages that are then logged to the log file on disk, which produces new kernel messages, and so on and so forth. So blkdevMonitor is self-defeating; it's almost useless.

What you can try next, if you want, is to run blkdevMonitor again, making absolutely sure the NAS is not being used for about 7 to 8 hours while the script is running - like during the night while you're sleeping. Then look at the results, ignoring all the WRITE messages attributed to rsyslogd, which are a red herring, to maybe find a real culprit.

Notice in the last results you posted, at the 9/100 test: the process 'smbd' wrote to a file or directory named 'Desktop' on device dm-9. This means you had a Windows client connected to the NAS while the test was running. Two remarks about this:
- For the disk activity monitoring test to be meaningful, you need to ensure that nothing external to the NAS is using the NAS.
- Nothing in this case really must be nothing: neither you actively using the NAS via the web or Windows shares or whatever, nor any other device or computer that might be accessing the NAS on its own. With one exception: the SSH connection (with PuTTY) to run the blkdevMonitor script, and nothing else.

For example, if you map a NAS share to a drive letter in Windows, that counts as using the NAS - even if you're not doing anything with the mapped drive. Windows Explorer will access the share at random times and will wake up the disk if it's sleeping.

You will never be able to let your NAS disk sleep unless you can guarantee that nothing outside of it, human or machine, is using it.

Alex223
New here
Posts: 7
Joined: Tue Jul 28, 2020 11:14 pm

Re: Ts-230 random spin up

Post by Alex223 » Sun Dec 20, 2020 6:56 am

Hi again,

I understand, so basically the diagnostic script blkdevMonitor itself causes the "rsyslogd" write messages, which means it writes data to the hard disk and prevents it from sleeping.

If I filter the rsyslogd entries from my first post, the only results that show up repeatedly are:

============= 87/100 test, Sat Nov 28 23:30:01 CET 2020 ===============
<7>[353533.933577] kjournald(1867): WRITE block 920304 on md9 (8 sectors)
<7>[353533.933639] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353533.933639] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 89/100 test, Sat Nov 28 23:32:12 CET 2020 ===============
<7>[353997.463254] kworker/u8:1(10053): WRITE block 8 on md9 (8 sectors)
<7>[353997.463293] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)
<7>[353997.463293] md9_raid1(1854): WRITE block 1060216 on sda1 (1 sectors)

============= 72/100 test, Sat Nov 28 21:59:49 CET 2020 ===============
<7>[348086.514589] setcfg(30928): dirtied inode 13510 (vg1.lo~) on md9

============= 70/100 test, Sat Nov 28 21:55:14 CET 2020 ===============
<<<<<<<<<<<<<<<<<<7>[348012.766531] kworker/u8:2(5438): WRITE block 786704 on md9 (8 sectors)

============= 65/100 test, Sat Nov 28 21:34:22 CET 2020 ===============
<7>[346652.943283] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)
<7>[346652.943283] md9_raid1(1854): WRITE block 1060232 on sda1 (1 sectors)

Are these log messages that "need" to be stored on the hard disk, and/or are these tools/apps themselves writing something to the HDD?

I recently updated the firmware to the new version 4.5.1495

Thanks :geek:

Mousetick
Been there, done that
Posts: 946
Joined: Thu Aug 24, 2017 10:28 pm

Re: Ts-230 random spin up

Post by Mousetick » Sun Dec 20, 2020 6:27 pm

Alex223 wrote:
Sun Dec 20, 2020 6:56 am
I understand, so basically the diagnostic script blkdevMonitor itself causes the "rsyslogd" write messages, which means it writes data to the hard disk and prevents it from sleeping.
Correct.
If I filter the rsyslogd entries from my first post, the only results that show up repeatedly are:
[...]
Are these log messages that "need" to be stored on the hard disk, and/or are these tools/apps themselves writing something to the HDD?
This is mostly "noise" generated as a byproduct of rsyslogd writing to disk in the first place. These are messages from the Linux kernel handling the updates to the filesystem and the underlying RAID device, as a result of rsyslogd's saving logs to disk.

If we ignore rsyslogd, kjournald, kworker, and md9_raid1, this is what's left of the test results in your OP:

Code: Select all

============= 0/100 test, Sat Nov 28 17:06:29 CET 2020 ===============
============= 1/100 test, Sat Nov 28 17:15:52 CET 2020 ===============
============= 2/100 test, Sat Nov 28 17:19:33 CET 2020 ===============
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<7>[332377.016421] logrotate(32318): WRITE block 916072 on md9 (8 sectors)
============= 3/100 test, Sat Nov 28 17:38:14 CET 2020 ===============
============= 4/100 test, Sat Nov 28 17:40:35 CET 2020 ===============
============= 5/100 test, Sat Nov 28 17:42:34 CET 2020 ===============
============= 6/100 test, Sat Nov 28 17:52:11 CET 2020 ===============
============= 7/100 test, Sat Nov 28 18:02:57 CET 2020 ===============
============= 8/100 test, Sat Nov 28 18:05:13 CET 2020 ===============
============= 9/100 test, Sat Nov 28 18:10:55 CET 2020 ===============
============= 10/100 test, Sat Nov 28 18:12:00 CET 2020 ===============
============= 11/100 test, Sat Nov 28 18:18:27 CET 2020 ===============
============= 12/100 test, Sat Nov 28 18:20:21 CET 2020 ===============
============= 13/100 test, Sat Nov 28 18:25:07 CET 2020 ===============
============= 14/100 test, Sat Nov 28 18:25:59 CET 2020 ===============
============= 15/100 test, Sat Nov 28 18:28:01 CET 2020 ===============
============= 16/100 test, Sat Nov 28 18:34:41 CET 2020 ===============
============= 17/100 test, Sat Nov 28 18:36:13 CET 2020 ===============
============= 18/100 test, Sat Nov 28 18:40:16 CET 2020 ===============
============= 19/100 test, Sat Nov 28 18:41:25 CET 2020 ===============
============= 20/100 test, Sat Nov 28 18:42:55 CET 2020 ===============
============= 21/100 test, Sat Nov 28 18:42:57 CET 2020 ===============
============= 22/100 test, Sat Nov 28 18:45:45 CET 2020 ===============
============= 23/100 test, Sat Nov 28 18:50:43 CET 2020 ===============
============= 24/100 test, Sat Nov 28 18:50:55 CET 2020 ===============
============= 25/100 test, Sat Nov 28 18:54:11 CET 2020 ===============
============= 26/100 test, Sat Nov 28 19:01:23 CET 2020 ===============
============= 27/100 test, Sat Nov 28 19:07:40 CET 2020 ===============
============= 28/100 test, Sat Nov 28 19:09:43 CET 2020 ===============
============= 29/100 test, Sat Nov 28 19:18:30 CET 2020 ===============
============= 30/100 test, Sat Nov 28 19:23:26 CET 2020 ===============
============= 31/100 test, Sat Nov 28 19:27:55 CET 2020 ===============
============= 32/100 test, Sat Nov 28 19:28:27 CET 2020 ===============
============= 33/100 test, Sat Nov 28 19:29:48 CET 2020 ===============
============= 34/100 test, Sat Nov 28 19:32:22 CET 2020 ===============
============= 35/100 test, Sat Nov 28 19:34:43 CET 2020 ===============
============= 36/100 test, Sat Nov 28 19:43:20 CET 2020 ===============
============= 37/100 test, Sat Nov 28 19:46:22 CET 2020 ===============
============= 38/100 test, Sat Nov 28 19:47:42 CET 2020 ===============
============= 39/100 test, Sat Nov 28 19:48:37 CET 2020 ===============
============= 40/100 test, Sat Nov 28 19:49:11 CET 2020 ===============
============= 41/100 test, Sat Nov 28 19:49:25 CET 2020 ===============
============= 42/100 test, Sat Nov 28 19:55:26 CET 2020 ===============
============= 43/100 test, Sat Nov 28 19:59:27 CET 2020 ===============
============= 44/100 test, Sat Nov 28 20:17:00 CET 2020 ===============
============= 45/100 test, Sat Nov 28 20:19:19 CET 2020 ===============
============= 46/100 test, Sat Nov 28 20:23:10 CET 2020 ===============
============= 47/100 test, Sat Nov 28 20:30:06 CET 2020 ===============
============= 48/100 test, Sat Nov 28 20:35:58 CET 2020 ===============
============= 49/100 test, Sat Nov 28 20:41:22 CET 2020 ===============
============= 50/100 test, Sat Nov 28 20:53:19 CET 2020 ===============
============= 51/100 test, Sat Nov 28 20:59:20 CET 2020 ===============
<7>[344488.162796] setcfg(23664): dirtied inode 13502 (vg1_snapshot_reservation.lo~) on md9
<7>[344488.163041] setcfg(23664): dirtied inode 13510 (?) on md9
============= 52/100 test, Sat Nov 28 21:00:05 CET 2020 ===============
============= 53/100 test, Sat Nov 28 21:04:04 CET 2020 ===============
============= 54/100 test, Sat Nov 28 21:05:58 CET 2020 ===============
============= 55/100 test, Sat Nov 28 21:08:01 CET 2020 ===============
============= 56/100 test, Sat Nov 28 21:08:57 CET 2020 ===============
============= 57/100 test, Sat Nov 28 21:09:22 CET 2020 ===============
============= 58/100 test, Sat Nov 28 21:15:40 CET 2020 ===============
============= 59/100 test, Sat Nov 28 21:16:55 CET 2020 ===============
============= 60/100 test, Sat Nov 28 21:23:07 CET 2020 ===============
<7>[345973.265233] Plex Script Hos(15821): dirtied inode 591673 (xmlrpclib.py) on dm-9
<7>[345973.265651] Plex Script Hos(15821): READ block 38155672 on dm-9 (72 sectors)
============= 61/100 test, Sat Nov 28 21:24:50 CET 2020 ===============
<7>[345975.166587] Plex Script Hos(15821): dirtied inode 590010 (movie_models.pym) on dm-9
<7>[345975.166221] Plex Script Hos(15821): READ block 38016168 on dm-9 (8 sectors)
============= 62/100 test, Sat Nov 28 21:24:52 CET 2020 ===============
============= 63/100 test, Sat Nov 28 21:27:20 CET 2020 ===============
<7>[346285.022074] setcfg(22460): dirtied inode 13545 (CACHEDEV1_DATA.log) on md9
============= 64/100 test, Sat Nov 28 21:30:02 CET 2020 ===============
============= 65/100 test, Sat Nov 28 21:34:22 CET 2020 ===============
============= 66/100 test, Sat Nov 28 21:36:15 CET 2020 ===============
============= 67/100 test, Sat Nov 28 21:40:25 CET 2020 ===============
============= 68/100 test, Sat Nov 28 21:44:32 CET 2020 ===============
============= 69/100 test, Sat Nov 28 21:52:33 CET 2020 ===============
============= 70/100 test, Sat Nov 28 21:55:14 CET 2020 ===============
============= 71/100 test, Sat Nov 28 21:58:58 CET 2020 ===============
============= 72/100 test, Sat Nov 28 21:59:49 CET 2020 ===============
<7>[348086.514589] setcfg(30928): dirtied inode 13510 (vg1.lo~) on md9
============= 73/100 test, Sat Nov 28 22:00:05 CET 2020 ===============
============= 74/100 test, Sat Nov 28 22:05:09 CET 2020 ===============
============= 75/100 test, Sat Nov 28 22:07:29 CET 2020 ===============
============= 76/100 test, Sat Nov 28 22:13:08 CET 2020 ===============
============= 77/100 test, Sat Nov 28 22:22:19 CET 2020 ===============
============= 78/100 test, Sat Nov 28 22:22:35 CET 2020 ===============
============= 79/100 test, Sat Nov 28 22:29:09 CET 2020 ===============
============= 80/100 test, Sat Nov 28 22:32:25 CET 2020 ===============
============= 81/100 test, Sat Nov 28 23:01:39 CET 2020 ===============
============= 82/100 test, Sat Nov 28 23:03:14 CET 2020 ===============
============= 83/100 test, Sat Nov 28 23:05:02 CET 2020 ===============
============= 84/100 test, Sat Nov 28 23:10:27 CET 2020 ===============
============= 85/100 test, Sat Nov 28 23:10:45 CET 2020 ===============
============= 86/100 test, Sat Nov 28 23:18:22 CET 2020 ===============
============= 87/100 test, Sat Nov 28 23:30:01 CET 2020 ===============
============= 88/100 test, Sat Nov 28 23:30:56 CET 2020 ===============
============= 89/100 test, Sat Nov 28 23:32:12 CET 2020 ===============
============= 90/100 test, Sat Nov 28 23:38:34 CET 2020 ===============
============= 91/100 test, Sat Nov 28 23:40:49 CET 2020 ===============
============= 92/100 test, Sat Nov 28 23:43:20 CET 2020 ===============
============= 93/100 test, Sat Nov 28 23:45:40 CET 2020 ===============
============= 94/100 test, Sat Nov 28 23:50:19 CET 2020 ===============
============= 95/100 test, Sat Nov 28 23:55:16 CET 2020 ===============
============= 96/100 test, Sat Nov 28 23:56:25 CET 2020 ===============
============= 97/100 test, Sun Nov 29 00:01:03 CET 2020 ===============
============= 98/100 test, Sun Nov 29 00:20:06 CET 2020 ===============
============= 99/100 test, Sun Nov 29 00:23:57 CET 2020 ===============
So basically nothing noteworthy. In your OP you said "I notice my Nas hdd spin up/down constantly". In contrast to what you claimed, the results above show hardly any disk activity from 17:06:29 to 00:23:57. From my point of view, I'd conclude that your NAS is not misbehaving and that any unexpected disk access is entirely caused by external factors - either your own actions or the actions of other devices and software connected to the NAS, which you may not be aware of (such as mapped shares in Windows, as previously mentioned).
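For anyone who wants to apply the same filtering to their own results, a grep sketch (shown here against two sample lines from this thread; in practice redirect from a saved log file instead of the here-doc):

```shell
# Drop the kernel/logging noise processes from blkdevMonitor output;
# whatever remains are the candidate culprits.
grep -vE 'rsyslogd|kjournald|kworker|md9_raid1' <<'EOF'
<7>[356719.193397] rsyslogd(13077): dirtied inode 21447 (kmsg) on md9
<7>[348086.514589] setcfg(30928): dirtied inode 13510 (vg1.lo~) on md9
EOF
```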

Alex223
New here
Posts: 7
Joined: Tue Jul 28, 2020 11:14 pm

Re: Ts-230 random spin up

Post by Alex223 » Mon Dec 21, 2020 3:00 am

Hi Mousetick,

Thanks for the deep investigation,

If I theoretically disable the Samba service on my NAS, should it then sleep completely (assuming the snapshots / the Plex server / setcfg don't write)?

About the logs that are saved on the HDD: I notice the new firmware has QuLog Center. Is this the application responsible for saving the log files, or does another application do it?
I attached a screenshot that shows there is the possibility to change the destination path, but as far as I can see you can only select the local hard disk. Is there maybe a way to save the logs to a different path, like a USB pen drive or the cloud, to avoid log-writing activity on the hard disk?

:ubergeek:
You do not have the required permissions to view the files attached to this post.

Mousetick
Been there, done that
Posts: 946
Joined: Thu Aug 24, 2017 10:28 pm

Re: Ts-230 random spin up

Post by Mousetick » Tue Dec 22, 2020 3:43 am

Alex223 wrote:
Mon Dec 21, 2020 3:00 am
If I theoretically disable the Samba service on my NAS, should it then sleep completely (assuming the snapshots / the Plex server / setcfg don't write)?
Perhaps, I don't know. You and your devices are the ones using the NAS, not me. I can't tell you. But disabling Samba would mean that you can't access its shared folders any more, which seems rather... counter-productive?
About the logs that are saved on the HDD: I notice the new firmware has QuLog Center. Is this the application responsible for saving the log files, or does another application do it?
Forget about the logs. We've determined that this was a red herring, a false positive. The test script was generating noise that I misinterpreted.

This conversation doesn't seem to be going anywhere, and I'm not sure I can contribute anything more at this point, other than this summary:
- You ran tests showing that the NAS is not "constantly" generating disk activity on its own that would trigger disk wake-ups "constantly".
- If you want your disk to sleep and not wake up for a while, make sure nothing is using the NAS for a while.
- If you absolutely cannot stand having the disk wake up from time to time, shut down the NAS (there is a scheduled start/stop feature in Control Panel > System > Power > Power Schedule which may be available for your NAS model).

graemev
Getting the hang of things
Posts: 95
Joined: Sun Feb 12, 2012 10:17 pm

Re: Ts-230 random spin up

Post by graemev » Mon Apr 19, 2021 6:43 am

I installed a UPS (I'd been on a street generator for 1 month), so I looked at my system's power usage, including the QNAP. I spent a day trying to figure out why I was wasting 20 W on spinning disks (it used to spin down). After absolute ages, with the RJ45 unplugged and most things stopped, I finally killed rsyslog, via a console directly, not over SSH. The activity noise stopped instantly, and about 30 mins later the disk spun down and the status light went off. After re-plugging everything, I looked at the web interface for a more long-term solution. I already do remote syslog to another Debian box running rsyslogd(8), but I note we now have (as you say) QuLog. It has /mnt/ext/opt/QuLog/sbin/rsyslogd, and it looks like it only logs to local disk. There is an option to send to a remote QuLog server, but that looks like an RFC 5424 syslog server (it defaults to TLS, but can be switched back to TCP, though with a warning).
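A quick way to confirm whether the disk has actually spun down, without waking it: hdparm's CHECK POWER MODE query does not spin the drive up. This is a sketch assuming hdparm is available on the NAS; /dev/sda is a placeholder for your data drive:

```shell
# Query the drive's power state (safe to poll; does not wake the disk).
# Parses hdparm -C output, which contains a "drive state is: ..." line.
state=$(hdparm -C /dev/sda | awk '/drive state/ {print $NF}')
echo "$state"   # typically "standby" when spun down, "active/idle" when spinning
```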
