Hi,
yet another spindown issue. I have been searching the forum for the past 2 days and didn't find anything; well, there is a lot of info to be found, but nothing that resembles my issue.
When I run blkdevMonitor I see activity on sda3 and sdb3, always the same block.
I would also like to know what the process md1_raid1 is doing.
Any clue is appreciated.
I have already disabled most services, and disabled the crontab entry for Surveillance Station.
I have 4 TB disks, so they were automatically formatted as ext4. Could it be that the issue is ext4 related?
I have an old TS-259 Pro which spun down correctly with 2 TB disks; after the upgrade to 4 TB (so ext4) it doesn't spin down.
Here is my crontab:
0 2 * * * /sbin/qfstrim
0 4 * * * /sbin/hwclock -s
0 3 * * * /sbin/vs_refresh
0 3 * * * /sbin/clean_reset_pwd
#0-59/15 * * * * /etc/init.d/nss2_dusg.sh
30 7 * * * /sbin/clean_upload_file
30 3 * * * /sbin/notice_log_tool -v -R
0 3 * * * /etc/init.d/ImRd.sh bgThGen
0 3 * * * /bin/rm -rf /mnt/HDA_ROOT/twonkymedia/twonkymedia.db/cache/*
14 8 * * * /usr/bin/qcloud_cli -c
0 3 * * 0 /etc/init.d/idmap.sh dump
10 15 * * * /usr/bin/power_clean -c 2>/dev/null
4 3 * * 3 /etc/init.d/backup_conf.sh
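As an aside, on QTS crontab changes only persist if made in /etc/config/crontab and then reloaded with `crontab /etc/config/crontab`; edits made via `crontab -e` are lost on reboot. A sketch of commenting out an entry, run here against a temporary copy rather than the real file:

```shell
# Comment out one entry (vs_refresh as an example) in a copy of the crontab.
# On the NAS: edit /etc/config/crontab itself, then run
#   crontab /etc/config/crontab
# so cron picks up the change and it survives a reboot.
tmp=$(mktemp)
printf '0 2 * * * /sbin/qfstrim\n0 3 * * * /sbin/vs_refresh\n' > "$tmp"
sed 's|^\([^#]*vs_refresh.*\)$|#\1|' "$tmp"   # prints the edited copy
rm -f "$tmp"
```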
===== Welcome to use blkdevMonitor_v2 on Wed Nov 19 07:41:36 CET 2014 =====
Turn off/on VM block_dump & Clean dmesg
Countdown: 3 2 1
Start...
============= 0/100 test, Wed Nov 19 07:42:54 CET 2014 ===============
<7>[45361.129434] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45361.362836] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45361.129408] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[45361.362799] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
============= 1/100 test, Wed Nov 19 07:43:38 CET 2014 ===============
<7>[45481.437688] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45481.674080] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45481.437661] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[45481.674043] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
============= 2/100 test, Wed Nov 19 07:45:34 CET 2014 ===============
<7>[45601.727054] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45601.962313] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[45601.727026] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[45601.962275] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
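For longer runs it is easier to see the pattern by summarizing the captured block_dump lines per device and block. A small sketch, plain awk over the dmesg-style lines; the inline sample is taken from the output above (on the NAS you would pipe `dmesg` in instead):

```shell
# Summarize block_dump WRITE lines: count per (device, block), most
# frequent first. Fields are found by scanning for the "block"/"on"
# keywords so space-padded timestamps ("[  123.4]") don't shift columns.
printf '%s\n' \
  '<7>[45361.129434] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)' \
  '<7>[45361.362836] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)' \
  '<7>[45361.129408] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)' |
awk '/WRITE block/ {
    for (i = 1; i < NF; i++) {
        if ($i == "block") b = $(i+1)   # block number follows "block"
        if ($i == "on")    d = $(i+1)   # device name follows "on"
    }
    print d, b
}' | sort | uniq -c | sort -rn
```

The busiest device/block pair comes out on top; in the logs above that is always sda3 and sdb3 at 7794127504.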
HS-251 HD spin down problem 4.1.1, ext4 related?
Last edited by kdzwart on Thu Nov 20, 2014 2:28 pm, edited 7 times in total.
- pwilson
Re: HS-251 HD spin down problem on non-existing sda3/sdb3
Try using the "correct" commands.....
kdzwart wrote:
Hi
yet another spindown issue
when I run blkdevMonitor I see activity on sda3 and sdb3, always the same block, but these devices sda3/sdb3 do not exist
I would also like to know what the process md1_raid1 is doing?
any clue is appreciated
Yet when I do an fdisk -l there is no sda3 nor sdb3, and when I run blkdevMonitor it always shows the same block 7794127504:
Disk /dev/sda: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdb: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT
Disk /dev/sdc: 515 MB, 515899392 bytes
8 heads, 32 sectors/track, 3936 cylinders
Units = cylinders of 256 * 512 = 131072 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 41 5244 83 Linux
/dev/sdc2 * 42 1922 240768 83 Linux
/dev/sdc3 1923 3803 240768 83 Linux
/dev/sdc4 3804 3936 17024 5 Extended
/dev/sdc5 3804 3868 8304 83 Linux
/dev/sdc6 3869 3936 8688 83 Linux
parted /dev/sda print
parted /dev/sdb print
df -h | grep -v qpkg | grep -v grep
mount | grep -v bind | grep -v grep
cat /proc/mdstat
mdadm -D /dev/md0
mdadm -D /dev/md1
- kdzwart
Re: HS-251 HD spin down problem on non-existing sda3/sdb3
[/] # parted /dev/sda print
Model: WDC WD40EFRX-68WT0N0 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 3992GB 3991GB primary
4 3992GB 3992GB 543MB ext3 primary
5 3992GB 4001GB 8554MB linux-swap(v1) primary
[/] # parted /dev/sdb print
Model: WDC WD40EFRX-68WT0N0 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 3992GB 3991GB primary
4 3992GB 3992GB 543MB ext3 primary
5 3992GB 4001GB 8554MB linux-swap(v1) primary
[/] # df -h | grep -v qpkg | grep -v grep
Filesystem Size Used Available Use% Mounted on
/dev/ram0 193.7M 126.3M 67.4M 65% /
devtmpfs 946.4M 8.0k 946.4M 0% /dev
tmpfs 64.0M 2.8M 61.2M 4% /tmp
tmpfs 951.4M 28.0k 951.3M 0% /dev/shm
/dev/md9 509.5M 121.4M 388.1M 24% /mnt/HDA_ROOT
/dev/mapper/cachedev1 3.6T 1.5T 2.1T 43% /share/CACHEDEV1_DATA
/dev/md13 364.2M 295.2M 69.0M 81% /mnt/ext
tmpfs 32.0M 1.7M 30.3M 5% /.eaccelerator.tmp
[/] # mount | grep -v bind | grep -v grep
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)
[/] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[0] sda3[1]
3897063616 blocks super 1.0 [2/2] [UU]
md256 : active raid1 sda2[1] sdb2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdb4[0] sda4[1]
458880 blocks super 1.0 [24/2] [UU______________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sdb1[0] sda1[1]
530048 blocks super 1.0 [24/2] [UU______________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
[/] # mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[/] # mdadm -D /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Thu Nov 13 14:30:36 2014
Raid Level : raid1
Array Size : 3897063616 (3716.53 GiB 3990.59 GB)
Used Dev Size : 3897063616 (3716.53 GiB 3990.59 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Nov 19 10:09:27 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 1
UUID : 817ee67a:86a52c73:eda57eda:a28a7690
Events : 16
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 3 1 active sync /dev/sda3
[/] #
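One possible reading of these numbers, offered as an interpretation rather than a confirmed diagnosis, and assuming block_dump reports 512-byte sectors: mdadm -D gives the md1 data size in 1 KiB blocks, and with a v1.0 superblock md keeps its metadata at the end of each member partition. The repeating sector lands just past the end of the array's data area:

```python
# Back-of-the-envelope check: where does the repeating write land relative
# to md1's data area? (Assumes block_dump's "block N" is a 512-byte sector.)
ARRAY_KIB = 3897063616      # "Array Size" from mdadm -D /dev/md1 (1 KiB blocks)
WRITE_SECTOR = 7794127504   # the block in every md1_raid1 WRITE line above

data_end = ARRAY_KIB * 2            # convert KiB to 512-byte sectors
offset = WRITE_SECTOR - data_end    # how far past the data area the write is

print(data_end)   # 7794127232
print(offset)     # 272 sectors = 136 KiB past the data, i.e. in the tail of
                  # sda3/sdb3 that md reserves for its v1.0 metadata
```

If that reading is right, md1_raid1 is periodically rewriting md's own metadata at the end of the partition whenever something dirties the array. Note also that the classic fdisk does not understand GPT, which is why `fdisk -l` shows only the protective "ee EFI GPT" entry for /dev/sda1 while parted lists all five partitions, sda3/sdb3 included.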
- kdzwart
Re: HS-251 HD spin down problem on non-existing sda3/sdb3
Now, 8 or so hours later, it is still the same block. Anybody?
===== Welcome to use blkdevMonitor_v2 on Wed Nov 19 15:18:28 CET 2014 =====
Turn off/on VM block_dump & Clean dmesg
Countdown: 3 2 1
Start...
============= 0/100 test, Wed Nov 19 15:18:32 CET 2014 ===============
<7>[25646.741991] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[25646.741953] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
============= 1/100 test, Wed Nov 19 15:18:44 CET 2014 ===============
<7>[25729.712982] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[25729.956101] md1_raid1(2492): WRITE block 7794127504 on sda3 (1 sectors)
<7>[25729.712952] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
<7>[25729.956065] md1_raid1(2492): WRITE block 7794127504 on sdb3 (1 sectors)
Re: HS-251 HD spin down problem 4.1.1, ext4 related?
I've noticed the same thing. md1_raid1 is writing to a single block and it's always that same one. A different block number than yours of course, but always the same block.
- XTJzMKcaOjs1
Re: HS-251 HD spin down problem 4.1.1, ext4 related?
Same problem here; my log and command results look like those in the posts above.
- the md1_raid1 process accesses the disks (sda/b3 <=> md1), always at the same sector
- the md9_raid1, md13_raid1 and md256_raid1 processes do not access the disks
- vgdisplay accesses md1, md9, md13 and md256
<7>[ 6462.913432] md1_raid1(1938): WRITE block 5840623504 on sdb3 (1 sectors)
<7>[ 6462.913453] md1_raid1(1938): WRITE block 5840623504 on sda3 (1 sectors)
<7>[ 6656.961515] vgdisplay(29871): READ block 0 on md1 (8 sectors)
<7>[ 6656.993472] vgdisplay(29875): READ block 8 on md1 (8 sectors)
<7>[ 6656.993634] vgdisplay(29875): READ block 16 on md1 (8 sectors)
<7>[ 6656.982123] vgdisplay(29871): READ block 0 on md9 (8 sectors)
<7>[ 6656.993294] vgdisplay(29875): READ block 0 on md13 (8 sectors)
<7>[ 6656.961175] vgdisplay(29871): READ block 0 on md256 (8 sectors)
- XTJzMKcaOjs1
Re: HS-251 HD spin down problem 4.1.1, ext4 related?
Problem solved, using the following approach:
- use the blkdevMonitor script from the QNAP wiki (http://wiki.qnap.com/wiki/Find_out_whic ... m_spindown) to monitor the disk access
- the regular accesses explained why the disks could not go into standby mode
- deactivate all services (ftp, dns, web server, ...) but keep ssh running to stay connected => still the same disk accesses as before => so not related to the services
- uninstall all apps => no more disk accesses and the disks go into standby => so an app is causing the mess
- reinstall the apps one by one until the disk accesses start again. In my case the app PostgreSQL - Beta caused the accesses.
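For anyone wanting to script the stop-apps-one-by-one step: on QTS 4.x installed QPKGs are registered in /etc/config/qpkg.conf, each with a `Shell =` line pointing at its init script. A sketch that extracts those scripts, exercised here on an inline sample since the real file only exists on the NAS (the sample section name and path are made up):

```shell
# List each registered QPKG init script so the apps can be stopped one at a
# time while re-running blkdevMonitor. On the NAS, pass /etc/config/qpkg.conf
# instead of the sample file built below.
list_qpkg_scripts() {   # $1 = path to a qpkg.conf-style file
    grep -i '^shell' "$1" | sed 's|^[^=]*=[[:space:]]*||'
}

sample=$(mktemp)
printf '[PostgreSQL]\nShell = /share/CACHEDEV1_DATA/.qpkg/PostgreSQL/postgresql.sh\n' > "$sample"

list_qpkg_scripts "$sample" | while read -r script; do
    echo "sh \"$script\" stop"   # run these by hand, one app at a time
done
rm -f "$sample"
```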
- Kesar
Re: HS-251 HD spin down problem 4.1.1, ext4 related?
Hi all,
Exact same problem as kdzwart here, on the exact same block, which is unlikely to be a coincidence ^^
My setup is the following: TS-431P + 2 * Seagate ST4000VN008 (4 TB)

<7>[  772.080923] md1_raid1(1994): WRITE block 7794127504 on sda3 (1 sectors)
<7>[  772.080935] md1_raid1(1994): WRITE block 7794127504 on sdb3 (1 sectors)

@kdzwart: did you solve the problem as XTJzMKcaOjs1 advised?
Thanks for the help,
Kesar