Any chance at all of getting HDD standby with QTS 4.3?

Discussion about hard drive spin down (standby) feature of NAS.
Post Reply
JimSwe
Starting out
Posts: 14
Joined: Sun Nov 13, 2016 6:22 pm

Any chance at all of getting HDD standby with QTS 4.3?

Post by JimSwe »

I had HDD spin-down working in 4.2, but after upgrading my TAS-268 to 4.3 I cannot get the HDDs to spin down.

According to the diagnostic script, the disks are quiet for up to 10 minutes, but then there's a burst of activity:

Code: Select all

============= 7/100 test, Fri May 19 00:00:09 CEST 2017 ===============
5855): WRITE block 56710664 on dm-0 (8 sectors)
<7>[142628.793940] kworker/u4:1(15855): WRITE block 56711048 on dm-0 (56 sectors)
<7>[142628.794241] kworker/u4:1(15855): WRITE block 2609357456 on dm-0 (16 sectors)
<7>[142628.794289] kworker/u4:1(15855): WRITE block 56713768 on dm-0 (8 sectors)
<7>[142628.794334] kworker/u4:1(15855): WRITE block 56713856 on dm-0 (8 sectors)
<7>[142628.794380] kworker/u4:1(15855): WRITE block 56718960 on dm-0 (8 sectors)
<7>[142628.794437] kworker/u4:1(15855): WRITE block 56726944 on dm-0 (16 sectors)
<7>[142628.794481] kworker/u4:1(15855): WRITE block 56728104 on dm-0 (8 sectors)
<7>[142628.794709] kworker/u4:1(15855): WRITE block 2609357368 on dm-0 (8 sectors)
<7>[142628.794754] kworker/u4:1(15855): WRITE block 2609254952 on dm-0 (8 sectors)
<7>[142628.794884] kworker/u4:1(15855): WRITE block 2609357536 on dm-0 (8 sectors)
<7>[142628.794937] kworker/u4:1(15855): WRITE block 56715704 on dm-0 (16 sectors)
<7>[142628.795066] kworker/u4:1(15855): WRITE block 2609357376 on dm-0 (16 sectors)
<7>[142628.795111] kworker/u4:1(15855): WRITE block 2609255056 on dm-0 (8 sectors)
<7>[142633.247589] jbd2/dm-0-8(3578): WRITE block 3854958568 on dm-0 (8 sectors)
<7>[142633.290528] jbd2/dm-0-8(3578): WRITE block 3854958576 on dm-0 (8 sectors)
<7>[142633.290579] jbd2/dm-0-8(3578): WRITE block 3854958584 on dm-0 (8 sectors)
<7>[142633.292887] jbd2/dm-0-8(3578): WRITE block 3854958592 on dm-0 (8 sectors)
<7>[142644.236502] jbd2/dm-0-8(3578): WRITE block 3854958600 on dm-0 (8 sectors)
<7>[142644.277496] jbd2/dm-0-8(3578): WRITE block 3854958608 on dm-0 (8 sectors)
<7>[142644.277537] jbd2/dm-0-8(3578): WRITE block 3854958616 on dm-0 (8 sectors)
<7>[142644.278062] jbd2/dm-0-8(3578): WRITE block 3854958624 on dm-0 (8 sectors)
<7>[142630.779961] kworker/u4:1(15855): WRITE block 950800 on md9 (8 sectors)
<7>[142630.780025] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142630.780152] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142631.293896] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142631.293991] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142631.341528] kworker/u4:1(15855): WRITE block 950808 on md9 (8 sectors)
<7>[142631.341579] kworker/u4:1(15855): WRITE block 950816 on md9 (8 sectors)
<7>[142631.341618] kworker/u4:1(15855): WRITE block 950824 on md9 (8 sectors)
<7>[142631.341651] kworker/u4:1(15855): WRITE block 950832 on md9 (8 sectors)
<7>[142631.341691] kworker/u4:1(15855): WRITE block 950840 on md9 (8 sectors)
<7>[142631.341731] kworker/u4:1(15855): WRITE block 950848 on md9 (8 sectors)
<7>[142631.342006] kworker/u4:1(15855): WRITE block 819376 on md9 (8 sectors)
<7>[142631.342050] kworker/u4:1(15855): WRITE block 819384 on md9 (8 sectors)
<7>[142631.342088] kworker/u4:1(15855): WRITE block 819520 on md9 (8 sectors)
<7>[142631.342125] kworker/u4:1(15855): WRITE block 819528 on md9 (8 sectors)
<7>[142631.342167] kworker/u4:1(15855): WRITE block 819536 on md9 (8 sectors)
<7>[142631.342196] kworker/u4:1(15855): WRITE block 819544 on md9 (8 sectors)
<7>[142631.342232] kworker/u4:1(15855): WRITE block 819552 on md9 (8 sectors)
<7>[142631.342277] kworker/u4:1(15855): WRITE block 819560 on md9 (8 sectors)
<7>[142631.342310] kworker/u4:1(15855): WRITE block 819568 on md9 (8 sectors)
<7>[142631.342338] kworker/u4:1(15855): WRITE block 819576 on md9 (8 sectors)
<7>[142631.342366] kworker/u4:1(15855): WRITE block 819584 on md9 (8 sectors)
<7>[142631.342404] kworker/u4:1(15855): WRITE block 819592 on md9 (8 sectors)
<7>[142631.342444] kworker/u4:1(15855): WRITE block 819608 on md9 (8 sectors)
<7>[142631.342490] kworker/u4:1(15855): WRITE block 819616 on md9 (8 sectors)
<7>[142631.342527] kworker/u4:1(15855): WRITE block 819624 on md9 (8 sectors)
<7>[142631.342561] kworker/u4:1(15855): WRITE block 819632 on md9 (8 sectors)
<7>[142631.342590] kworker/u4:1(15855): WRITE block 819640 on md9 (8 sectors)
<7>[142631.342621] kworker/u4:1(15855): WRITE block 819648 on md9 (8 sectors)
<7>[142631.342649] kworker/u4:1(15855): WRITE block 819656 on md9 (8 sectors)
<7>[142631.342679] kworker/u4:1(15855): WRITE block 819664 on md9 (8 sectors)
<7>[142631.549222] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142631.549414] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142631.581788] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142631.581901] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142635.205575] kjournald(364): WRITE block 951024 on md9 (8 sectors)
<7>[142635.205636] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142635.205754] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142635.649691] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142635.649823] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142635.668204] kjournald(364): WRITE block 951032 on md9 (8 sectors)
<7>[142635.668239] kjournald(364): WRITE block 951040 on md9 (8 sectors)
<7>[142635.668261] kjournald(364): WRITE block 951048 on md9 (8 sectors)
<7>[142635.668293] kjournald(364): WRITE block 951056 on md9 (8 sectors)
<7>[142635.668317] kjournald(364): WRITE block 951064 on md9 (8 sectors)
<7>[142635.668340] kjournald(364): WRITE block 951072 on md9 (8 sectors)
<7>[142635.668364] kjournald(364): WRITE block 951080 on md9 (8 sectors)
<7>[142635.668389] kjournald(364): WRITE block 951088 on md9 (8 sectors)
<7>[142635.668412] kjournald(364): WRITE block 951096 on md9 (8 sectors)
<7>[142635.668435] kjournald(364): WRITE block 951104 on md9 (8 sectors)
<7>[142635.668457] kjournald(364): WRITE block 951112 on md9 (8 sectors)
<7>[142635.668481] kjournald(364): WRITE block 951128 on md9 (8 sectors)
<7>[142635.668504] kjournald(364): WRITE block 951136 on md9 (8 sectors)
<7>[142635.668527] kjournald(364): WRITE block 951144 on md9 (8 sectors)
<7>[142635.668550] kjournald(364): WRITE block 951152 on md9 (8 sectors)
<7>[142635.668573] kjournald(364): WRITE block 951160 on md9 (8 sectors)
<7>[142635.668596] kjournald(364): WRITE block 951168 on md9 (8 sectors)
<7>[142635.668619] kjournald(364): WRITE block 951176 on md9 (8 sectors)
<7>[142635.668648] kjournald(364): WRITE block 951184 on md9 (8 sectors)
<7>[142635.668671] kjournald(364): WRITE block 951704 on md9 (8 sectors)
<7>[142635.668695] kjournald(364): WRITE block 951712 on md9 (8 sectors)
<7>[142635.668718] kjournald(364): WRITE block 951720 on md9 (8 sectors)
<7>[142635.668747] kjournald(364): WRITE block 951728 on md9 (8 sectors)
<7>[142635.668771] kjournald(364): WRITE block 951736 on md9 (8 sectors)
<7>[142635.668795] kjournald(364): WRITE block 951744 on md9 (8 sectors)
<7>[142635.668818] kjournald(364): WRITE block 951752 on md9 (8 sectors)
<7>[142635.668841] kjournald(364): WRITE block 951760 on md9 (8 sectors)
<7>[142635.668864] kjournald(364): WRITE block 951768 on md9 (8 sectors)
<7>[142635.668888] kjournald(364): WRITE block 951776 on md9 (8 sectors)
<7>[142635.668914] kjournald(364): WRITE block 951784 on md9 (8 sectors)
<7>[142635.668937] kjournald(364): WRITE block 951792 on md9 (8 sectors)
<7>[142635.668960] kjournald(364): WRITE block 951808 on md9 (8 sectors)
<7>[142635.668994] kjournald(364): WRITE block 951816 on md9 (8 sectors)
<7>[142635.669017] kjournald(364): WRITE block 951824 on md9 (8 sectors)
<7>[142635.669040] kjournald(364): WRITE block 951832 on md9 (8 sectors)
<7>[142635.669063] kjournald(364): WRITE block 951840 on md9 (8 sectors)
<7>[142635.669086] kjournald(364): WRITE block 951848 on md9 (8 sectors)
<7>[142635.669110] kjournald(364): WRITE block 951856 on md9 (8 sectors)
<7>[142635.669133] kjournald(364): WRITE block 951864 on md9 (8 sectors)
<7>[142635.670822] kjournald(364): WRITE block 525536 on md9 (8 sectors)
<7>[142635.670936] kjournald(364): WRITE block 525544 on md9 (8 sectors)
<7>[142635.670964] kjournald(364): WRITE block 525552 on md9 (8 sectors)
<7>[142635.670996] kjournald(364): WRITE block 525560 on md9 (8 sectors)
<7>[142635.671028] kjournald(364): WRITE block 525568 on md9 (8 sectors)
<7>[142635.671060] kjournald(364): WRITE block 525576 on md9 (8 sectors)
<7>[142635.671091] kjournald(364): WRITE block 525584 on md9 (8 sectors)
<7>[142635.671120] kjournald(364): WRITE block 525592 on md9 (8 sectors)
<7>[142635.671153] kjournald(364): WRITE block 525600 on md9 (8 sectors)
<7>[142635.671188] kjournald(364): WRITE block 525608 on md9 (8 sectors)
<7>[142635.671220] kjournald(364): WRITE block 525616 on md9 (8 sectors)
<7>[142635.671245] kjournald(364): WRITE block 525624 on md9 (8 sectors)
<7>[142635.671269] kjournald(364): WRITE block 525632 on md9 (8 sectors)
<7>[142635.672029] kjournald(364): WRITE block 525640 on md9 (8 sectors)
<7>[142635.904834] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142635.904970] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142635.928507] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142635.928599] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142646.424232] kworker/u4:1(15855): WRITE block 786872 on md9 (8 sectors)
<7>[142646.424286] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142646.424394] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142646.754464] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142646.754536] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142646.775383] kworker/u4:1(15855): WRITE block 786880 on md9 (8 sectors)
<7>[142646.775417] kworker/u4:1(15855): WRITE block 786920 on md9 (8 sectors)
<7>[142646.775443] kworker/u4:1(15855): WRITE block 849536 on md9 (8 sectors)
<7>[142646.775472] kworker/u4:1(15855): WRITE block 951120 on md9 (8 sectors)
<7>[142646.775499] kworker/u4:1(15855): WRITE block 951800 on md9 (8 sectors)
<7>[142646.775537] kworker/u4:1(15855): WRITE block 0 on md9 (8 sectors)
<7>[142629.001698] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142633.247695] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142633.497221] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142644.236581] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142644.486147] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142629.001698] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142630.780025] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142631.293896] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142631.549222] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142631.581788] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142633.247695] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142633.497221] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142635.205636] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142635.649691] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142635.904834] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142635.928507] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142644.236581] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142644.486147] md1_raid1(2749): WRITE block 7794127504 on sda3 (1 sectors)
<7>[142646.424286] md9_raid1(329): WRITE block 1060216 on sda1 (1 sectors)
<7>[142646.754464] md9_raid1(329): WRITE block 1060232 on sda1 (1 sectors)
<7>[142630.780152] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142631.293991] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142631.549414] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142631.581901] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142635.205754] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142635.649823] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142635.904970] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142635.928599] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
<7>[142646.424394] md9_raid1(329): WRITE block 1060216 on sdb1 (1 sectors)
<7>[142646.754536] md9_raid1(329): WRITE block 1060232 on sdb1 (1 sectors)
Then it can be quiet for another 10-15 minutes.
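For what it's worth, those writers are identifiable: jbd2/dm-0-8 is the ext4 journal thread of the data volume (dm-0), kjournald is the ext3 journal of md9, the small system partition that QTS mirrors across every disk, and the repeating 1-sector writes to sda1/sdb1 are most likely md RAID metadata updates. So even pure housekeeping touches both drives. The diagnostic script appears to work by enabling the kernel's block_dump facility and harvesting the WRITE lines from the kernel log; here is a rough sketch of that idea (my reconstruction, not the actual script; run as root):

Code: Select all

#!/usr/bin/env python
# Rough sketch of a block-dump monitor (a reconstruction of the idea, not the
# actual diagnostic script): enable the kernel's block_dump sysctl, idle,
# then pull the WRITE lines out of the kernel ring buffer. Run as root.
import subprocess
import time

def set_block_dump(on):
    with open('/proc/sys/vm/block_dump', 'w') as f:
        f.write('1' if on else '0')

set_block_dump(True)
try:
    for round_no in range(1, 101):        # e.g. the "7/100 test" header above
        time.sleep(60)                    # let the NAS idle for a minute
        dmesg = subprocess.check_output(['dmesg', '-c'],  # read and clear the buffer
                                        universal_newlines=True)
        writes = [l for l in dmesg.splitlines() if 'WRITE block' in l]
        if writes:
            print('===== %d/100 test, %s =====' % (round_no, time.ctime()))
            print('\n'.join(writes))
finally:
    set_block_dump(False)                 # always switch the logging back off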

Things I've done:
- Disabled almost all the apps, like Photo Station, Video Station, etc.
- Still active: Resource Monitor 1.0.0 (cannot be turned off), QTS SSL Cert, Cloud Drive Sync, Python 2.7.3, Backup Versioning
- Cloud Drive Sync is set to only run at a specific time, and then stop
- RTRR is also set for specific times
- Clock NTP sync is set to once every 7 days
- System connection logs: clicked "Stop logging"
- UPnP and Bonjour turned off
- SMART polling: every 360 minutes
- Media library, DLNA server, iTunes server, SQL, Android Station, etc. are all turned off
- No other computers or devices connected
- CloudLink etc. disabled

Like I said, I had no problems with HDD standby on 4.2, even with many more apps and functions enabled. Of course it would spin up occasionally for some background job, but for example during the night it would mostly stay in standby.

It's not a huge deal, but it seems unnecessary to leave the disks spinning for the 18 hours a day that I don't need them.
User avatar
Toxic17
Ask me anything
Posts: 6469
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth
Contact:

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by Toxic17 »

The only thing I can suggest is to disable everything to start with, check that the disks then spin down, and then start enabling services and apps one by one until you find the culprit.
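If it helps, here's a small watcher for that approach (my own sketch, assuming hdparm is available on the NAS and it's run as admin; hdparm -C issues CHECK POWER MODE, which reports the drive state without spinning it up):

Code: Select all

# Sketch: log each drive's power state every few minutes, so you can tell
# whether the box actually reached standby after disabling a service.
# Assumes hdparm is present on the NAS; run as admin/root.
import subprocess
import time

DRIVES = ['/dev/sda', '/dev/sdb']    # adjust to your bays

while True:
    states = []
    for dev in DRIVES:
        out = subprocess.check_output(['hdparm', '-C', dev],
                                      universal_newlines=True)
        # typical output: " drive state is:  standby"
        states.append(dev + '=' + out.split('drive state is:')[-1].strip())
    print(time.strftime('%H:%M:%S') + '  ' + '  '.join(states))
    time.sleep(300)                  # poll every five minutes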

Can you confirm that you have contacted QNAP Support and reported your issue?
Regards Simon

Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
fbouchez
First post
Posts: 1
Joined: Sun May 21, 2017 3:01 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by fbouchez »

Hi,
I had a similar problem.

Even though the sync job was scheduled for a specific time, Cloud Drive Sync was frequently writing logs to:
/share/CE_CACHEDEV1_DATA/.qpkg/HD_Station/share/CE_CACHEDEV1_DATA/.qpkg/CloudBackupSync/sync/log/0/ENGINE/cloudconnector-debug.log

I disabled the log writing by editing /share/CE_CACHEDEV1_DATA/.qpkg/HD_Station/share/CE_CACHEDEV1_DATA/.qpkg/CloudBackupSync/sync/conf/cc-log.conf
and commenting out these lines:

#[handler_filehandler_debug]
#class=qnap.common.debuglogutils.ModuleRotatingFileHandler
#level=DEBUG
#formatter=long
# backupCount=0 will results rollover never occurs, bug?
#args=('$BASE$/log/$USER_ID$/$JOB_ID$/cloudconnector-debug.log', 'a', 1500000, 1, '^(qnap.*)|(cc)$')

You have to reboot the NAS for the changes to take effect.
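For context (my reading of the file format, not anything from QNAP's documentation): cc-log.conf is in Python's logging fileConfig syntax, so the section above attaches a DEBUG-level rotating file handler to the sync engine's loggers. A minimal stdlib sketch of why that keeps the disks busy, with RotatingFileHandler standing in for QNAP's custom ModuleRotatingFileHandler:

Code: Select all

# Minimal sketch using only Python's standard logging module; the stdlib
# RotatingFileHandler stands in for QNAP's ModuleRotatingFileHandler subclass.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler('cloudconnector-debug.log', mode='a',
                              maxBytes=1500000, backupCount=1)  # mirrors args=(...)
handler.setLevel(logging.DEBUG)       # level=DEBUG: every debug record is written out

log = logging.getLogger('qnap.demo')  # the '^(qnap.*)|(cc)$' pattern targets such loggers
log.setLevel(logging.DEBUG)
log.addHandler(handler)

log.debug('poll tick')                # hits the file, so the disks never go idle
handler.setLevel(logging.INFO)        # the level=INFO change tried later in this thread
log.debug('poll tick')                # now filtered at the handler; no disk write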

The HDDs now go into standby.

Fabien
http://www.smartpositive.com
Moondiver
Starting out
Posts: 28
Joined: Thu Jul 28, 2011 7:43 pm
Location: Germany

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by Moondiver »

I have the problem that my HDDs don't reach standby when I use Hybrid Backup Sync. Disabling the app was the only solution that helped.
Does your fix also help in my case?
JimSwe
Starting out
Posts: 14
Joined: Sun Nov 13, 2016 6:22 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by JimSwe »

fbouchez wrote:I disabled the log writing by editing /share/CE_CACHEDEV1_DATA/.qpkg/HD_Station/share/CE_CACHEDEV1_DATA/.qpkg/CloudBackupSync/sync/conf/cc-log.conf and commenting out these lines [...] The HDDs now go into standby.
Thanks, I'll give that a try. Cloud Drive Sync is one of the few services I didn't try disabling, because I need it to run every night so any changes are synced to my Dropbox by the next morning.

Seems like a mistake by QNAP to leave debug logging on.

EDIT

OK, so I didn't have that folder; however, I did have a cc-log.conf file under /share/CACHEDEV1_DATA/.qpkg/CloudDriveSync/conf, with that exact set of lines.
I tried commenting them out and rebooted, but this resulted in the Cloud Drive Sync GUI not working (endless "Loading, Please Wait").

Instead of commenting out the section, I changed level=DEBUG to level=INFO, and this didn't break the Sync GUI. However, I'm not sure that's enough to fix the HDD spin-down problem.

I do have a bunch of LOG files under /share/CACHEDEV1_DATA/.qpkg/CloudDriveSync/log, and I noticed that the "Last Modified" timestamp on cloudconnector-debug.log stopped updating every other minute after making the above change.

However, cloudconnector-cgi.log still keeps updating every minute, suggesting the system is always writing to that file. An example of what gets appended:

Code: Select all

[2017-05-23 20:04:40,982][JOB:ENGINE][PID:8638][TNAME:Dummy-133][DEBUG  ][cgi][base.py:do_command:341] : cmd= sys_logs, 
[2017-05-23 20:04:41,055][JOB:ENGINE][PID:8638][TNAME:Dummy-133][DEBUG  ][cgi][cgid.py:_execute_cmd:74] : cgid RESP= {
    "message": "", 
    "error_code": 0, 
    "result": {
        "count": 27, 
        "list": 
Then it lists what appear to be all my Sync jobs and their settings. It keeps doing this over and over again, adding to the file even though no sync jobs are running.
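A quick way to catch a chatty log file like this is to scan the app's log directory for recently modified files. A throwaway sketch (paths taken from this thread; adjust LOGDIR for your model):

Code: Select all

# Sketch: list files under the app's log directory that were modified in the
# last 10 minutes, to spot a logger that keeps the disks awake.
import os
import time

LOGDIR = '/share/CACHEDEV1_DATA/.qpkg/CloudDriveSync/log'  # path from this thread

now = time.time()
for root, dirs, files in os.walk(LOGDIR):
    for name in files:
        path = os.path.join(root, name)
        age = int(now - os.path.getmtime(path))
        if age < 600:
            print('%6ds ago  %s' % (age, path))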
wynrod
New here
Posts: 4
Joined: Thu Oct 11, 2012 11:35 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by wynrod »

JimSwe wrote:[...] Instead of commenting out the section, I changed level=DEBUG to level=INFO, and this didn't break the Sync GUI. However, I'm not sure that's enough to fix the HDD spin-down problem. [...]
I found that commenting out all the lines mentioned originally did the trick in terms of achieving spin-down, but it knocked out my OneDrive account in the process (both the account and the job I had created disappeared).

When I removed the comments, the OneDrive account/job returned, so I tried only changing the 'level' entry to 'INFO'; however, that had the same effect: bye bye OneDrive.

I'll raise a ticket just to get it on record at QNAP. I'm still hopeful that a new release of Hybrid Backup Sync will arrive soon, as they've been coming out almost monthly recently, with the exception of this month.
JimSwe
Starting out
Posts: 14
Joined: Sun Nov 13, 2016 6:22 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by JimSwe »

wynrod wrote:[...] I found that commenting out all the lines mentioned originally did the trick in terms of achieving spin-down, but it knocked out my OneDrive account in the process. When I removed the comments, the OneDrive account/job returned, so I tried only changing the 'level' entry to 'INFO'; however, that had the same effect: bye bye OneDrive. [...]
The TAS-268 I'm using doesn't have Hybrid Backup Sync, only the plain Cloud Drive Sync. But it seems to be the same thing, same config file and all. It was last updated 2017-04-25.

After changing the debug level to "INFO", the HDD did spin down once, last night, but it has still mostly remained active. The LOG files are now only touched about twice a day instead of every minute, so something else seems to be keeping the HDDs awake, too.
wynrod
New here
Posts: 4
Joined: Thu Oct 11, 2012 11:35 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by wynrod »

Looks like patience may have paid off, assuming this fixes the issue...

[Reply from QNAP Support]:

"Sorry for the late reply.

I just get this reply from our developers:


This issue has been verified fixed with Hybrid Backup Sync 2.1.170526, thanks.


Please wait Hybrid Backup Sync new version release."
JimSwe
Starting out
Posts: 14
Joined: Sun Nov 13, 2016 6:22 pm

Re: Any chance at all of getting HDD standby with QTS 4.3?

Post by JimSwe »

Hopefully the fix gets pushed to the TAS-268 too. Though there seem to be other things preventing spin-down as well.
Even after disabling Cloud Drive Sync, I can only get spin-down to work when the system is freshly booted. After a few hours of uptime, it's impossible. I've set a power schedule to turn the system off between 8 am and 4 pm, when I'm almost always at work anyway, but that's still inconvenient compared to a working spin-down timer.
Post Reply

Return to “HDD Spin Down (HDD Standby)”