TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
- skykit
- Getting the hang of things
- Posts: 62
- Joined: Mon May 18, 2009 9:20 pm
TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Does anyone have the same problem? My TS-239 Pro doesn't work with the new HGST NAS 3TB HDD HDN724030ALE640, even though the QNAP compatibility list says it is supported.
- Toxic17
- Ask me anything
- Posts: 6477
- Joined: Tue Jan 25, 2011 11:41 pm
- Location: Planet Earth
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Firmware version and build?
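(For reference, the firmware details can also be pulled over SSH with getcfg, the same calls pwilson's report script further down uses; a quick sketch:)
Code: Select all
[~] # getcfg system version          # firmware version, e.g. 4.1.2
[~] # getcfg system "Build Number"   # build date, e.g. 20150126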
Regards Simon
Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following
NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
- skykit
- Getting the hang of things
- Posts: 62
- Joined: Mon May 18, 2009 9:20 pm
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Toxic17 wrote: Firmware version and build?
Firmware: 5E0
Build: Jan 2015
- pwilson
- Guru
- Posts: 22533
- Joined: Fri Mar 06, 2009 11:20 am
- Location: Victoria, BC, Canada (UTC-08:00)
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
skykit wrote: Firmware: 5E0
Build: Jan 2015
Please access your NAS via SSH, log in as "admin", and run:
Code: Select all
#!/bin/sh
rm -f /tmp/nasreport
touch /tmp/nasreport
chmod +x /tmp/nasreport
cat << EOF >> /tmp/nasreport
#!/bin/sh
#
# NAS Report by Patrick Wilson
# see: http://forum.qnap.com/viewtopic.php?f=185&t=82260#p366188
#
#
echo "*********************"
echo "** QNAP NAS Report **"
echo "*********************"
echo " "
echo "NAS Model: \$(getsysinfo model)"
echo "Firmware: \$(getcfg system version) Build \$(getcfg system 'Build Number')"
echo "System Name: \$(/bin/hostname)"
echo "Workgroup: \$(getcfg system workgroup)"
echo "Base Directory: \$(dirname \$(getcfg -f /etc/config/smb.conf Public path))"
echo "NAS IP address: \$(ifconfig \$(getcfg network 'Default GW Device') | grep addr: | awk '{ print \$2 }' | cut -d: -f2)"
echo " "
echo "Default Gateway Device: \$(getcfg network 'Default GW Device')"
echo " "
ifconfig \$(getcfg network 'Default GW Device') | grep -v HWaddr
echo " "
echo -n "DNS Nameserver(s):"
cat /etc/resolv.conf | grep nameserver | cut -d' ' -f2
echo " "
echo " "
echo "HDD Information:"
echo " "
if [ -x /sbin/hdparm ]; then
for i in {a..d}; do echo -n /dev/sd\$i ; hdparm -i /dev/sd\$i | grep "Model"; done
else
echo " /sbin/hdparm is not present"
fi
echo " "
echo "Disk Space:"
echo " "
df -h | grep -v qpkg
echo " "
echo "Mount Status:"
echo " "
mount | grep -v qpkg
echo " "
echo "RAID Status:"
echo " "
cat /proc/mdstat
echo " "
#echo "QNAP Media Scanner / Transcoder processes running: "
#echo " "
#/bin/ps | grep medialibrary | grep -v grep
#echo " "
#echo -n "MediaLibrary Configuration file: "
#ls -alF /etc/config/medialibrary.conf
#echo " "
#echo "/etc/config/medialibrary.conf:"
#cat /etc/config/medialibrary.conf
echo " "
echo "Memory Information:"
echo " "
free | grep -v cache:
echo " "
echo "NASReport completed on \$(date +'%Y-%m-%d %T') (\$0)"
EOF
sleep 2
clear
/tmp/nasreport
echo "Done."
#done
It should provide output similar to the following:
Code: Select all
*********************
** QNAP NAS Report **
*********************
NAS Model: TS-470 Pro
Firmware: 4.1.2 Build 20150126
System Name: NASTY2
Workgroup: WORKGROUP
Base Directory: /share/CACHEDEV1_DATA
NAS IP address: 10.77.13.145
Default Gateway Device: eth0
inet addr:10.77.13.145 Bcast:10.77.13.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:88644243 errors:0 dropped:0 overruns:0 frame:0
TX packets:82453259 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:51893895117 (49489.8 Mb) TX bytes:52302571293 (49879.6 Mb)
DNS Nameserver(s):10.77.13.1
HDD Information:
/dev/sda Model=WDC WD30EFRX-68EUZN0, FwRev=80.00A80, SerialNo=WD-WCC4N1323253
/dev/sdb Model=WDC WD30EFRX-68EUZN0, FwRev=80.00A80, SerialNo=WD-WCC4N1348887
/dev/sdc Model=WDC WD30EFRX-68EUZN0, FwRev=80.00A80, SerialNo=WD-WCC4N1325878
/dev/sdd Model=WDC WD30EFRX-68EUZN0, FwRev=80.00A80, SerialNo=WD-WCC4N1323034
Disk Space:
Filesystem Size Used Avail Use% Mounted on
rootfs 200M 146M 55M 73% /
none 200M 146M 55M 73% /
devtmpfs 3.9G 12K 3.9G 1% /dev
tmpfs 64M 11M 54M 18% /tmp
tmpfs 3.9G 32K 3.9G 1% /dev/shm
/dev/md9 510M 133M 377M 27% /mnt/HDA_ROOT
/dev/mapper/cachedev1
8.1T 5.5T 2.7T 68% /share/CACHEDEV1_DATA
/dev/md13 371M 290M 82M 78% /mnt/ext
tmpfs 8.0M 0 8.0M 0% /var/syslog_maildir
tmpfs 25M 8.0K 25M 1% /run
/dev/mapper/cachedev1
/dev/mapper/cachedev1
/dev/sdl1 75G 69G 6.4G 92% /share/external/DEV3301_1
/dev/sde1 466G 214G 253G 46% /share/external/DEV3302_1
/dev/mapper/cachedev1
/dev/mapper/cachedev1
/dev/mapper/cachedev1
Mount Status:
none on /new_root type tmpfs (rw,mode=0755,size=200M)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/bus/usb type usbfs (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/mapper/cachedev1 on /share/CACHEDEV1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
/dev/md13 on /mnt/ext type ext3 (rw,data=ordered)
none on /sys/kernel/config type configfs (rw)
tmpfs on /var/syslog_maildir type tmpfs (rw,size=8M)
nfsd on /proc/fs/nfsd type nfsd (rw)
tmpfs on /run type tmpfs (rw,size=25M)
/dev/sdl1 on /share/external/DEV3301_1 type ufsd (rw,iocharset=utf8,dmask=0000,fmask=0111,force)
/dev/sde1 on /share/external/DEV3302_1 type ufsd (rw,iocharset=utf8,dmask=0000,fmask=0111,force)
RAID Status:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[0] sda3[3] sdb3[2] sdc3[1]
8760934848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 2/22 pages [8KB], 65536KB chunk
md258 : active raid1 sdi2[2](S) sdh2[1] sdk2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sda2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdd4[0] sda4[3] sdb4[2] sdc4[1]
458880 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sdd1[0] sda1[3] sdb1[2] sdc1[1]
530048 blocks super 1.0 [24/4] [UUUU____________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
Memory Information:
total used free shared buffers cached
Mem: 8069752 8002124 67628 0 5736600 697540
Swap: 1060216 113548 946668
NASReport completed on 2015-02-18 13:14:04 (/tmp/nasreport)
Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs
Please review: When you're asking a question, please include the following.
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Is this a scratch install using a new pair of HDDs?
The HDDs may be pre-partitioned from the factory. If both are new, give it a shot and wipe the first few sectors:
[~] # dd if=/dev/zero of=/dev/sda bs=512 count=10
[~] # dd if=/dev/zero of=/dev/sdb bs=512 count=10
Cut power, then power the NAS up again ... now you should be able to reconfigure it from scratch (use Qfinder to discover it, or check the IP assigned by your DHCP server).
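(One caveat: zeroing only the first sectors clears the MBR and the primary GPT header, but GPT keeps a backup header and table in the last sectors of the disk. A minimal sketch that wipes both ends, assuming the drive really is /dev/sda and holds nothing you want to keep:)
Code: Select all
#!/bin/sh
# Sketch only -- verify the device name first; this destroys partition data.
DISK=/dev/sda
SECTORS=$(cat /sys/block/sda/size)   # disk size in 512-byte sectors
# start of disk: MBR plus primary GPT header and partition table
dd if=/dev/zero of=$DISK bs=512 count=34
# end of disk: backup GPT header and partition table (last 34 sectors)
dd if=/dev/zero of=$DISK bs=512 count=34 seek=$((SECTORS - 34))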
Toxic17 wrote: Firmware version and build?
I bet Simon was more after the NAS firmware version and build number/date.
- Toxic17
- Ask me anything
- Posts: 6477
- Joined: Tue Jan 25, 2011 11:41 pm
- Location: Planet Earth
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Yep, that was the idea.
Regards Simon
Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following
NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
- skykit
- Getting the hang of things
- Posts: 62
- Joined: Mon May 18, 2009 9:20 pm
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
I have put this new HDD into my TS-453 and it works fine. But when I put it back into my TS-239 Pro, the HDD still cannot be detected. So I don't think it is pre-partitioned by the factory.
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
schumaku wrote: Bet Simon was more after the NAS firmware and build number/date.
Have you used dd and plugged the drive directly into the TS-239 Pro already?
What does # dmesg show after hot-plugging the new HDD into an operational TS-239 Pro (running on an old HDD)?
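(If the full dmesg dump is too noisy, filtering for the disk and RAID driver lines helps; a sketch, assuming the busybox grep on the NAS supports -E:)
Code: Select all
[~] # dmesg | grep -iE 'ata[0-9]|sd[ab]|md[0-9]'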
- skykit
- Getting the hang of things
- Posts: 62
- Joined: Mon May 18, 2009 9:20 pm
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Before installing the new HDD, slots 1 and 2 held old HDDs.
After installing the NEW HDD in slot 1 (old HDD in slot 2), dmesg shows:
Code: Select all
[~] # dmesg
6>[ 837.782910] md: bind<sda1>
[ 837.827401] RAID1 conf printout:
[ 837.827412] --- wd:1 rd:2
[ 837.827423] disk 0, wo:1, o:1, dev:sda1
[ 837.827431] disk 1, wo:0, o:1, dev:sdb1
[ 837.827613] md: delaying recovery of md9 until md2 has finished (they share one or more physical units)
[ 842.988430] md: bind<sda4>
[ 843.254451] RAID1 conf printout:
[ 843.254462] --- wd:1 rd:2
[ 843.254471] disk 0, wo:1, o:1, dev:sda4
[ 843.254480] disk 1, wo:0, o:1, dev:sdb4
[ 843.254689] md: delaying recovery of md13 until md2 has finished (they share one or more physical units)
[ 846.100816] md: md2: recovery done.
[ 846.106278] md: Recovering done: md2, degraded=1
[ 846.115703] md: delaying recovery of md9 until md13 has finished (they share one or more physical units)
[ 846.115713] md: delaying recovery of md13 until md9 has finished (they share one or more physical units)
[ 846.126037] md: recovery of RAID array md9
[ 846.132093] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 846.137376] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 846.143403] md: using 128k window, over a total of 530048k.
[ 846.148912] RAID1 conf printout:
[ 846.148919] --- wd:2 rd:2
[ 846.148927] disk 0, wo:0, o:1, dev:sdb2
[ 846.148935] disk 1, wo:0, o:1, dev:sda2
[ 853.451093] EXT4-fs (sda3): Mount option "noacl" will be removed by 3.5
[ 853.451098] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[ 853.451102]
[ 854.536037] EXT4-fs (sda3): warning: mounting fs with errors, running e2fsck is recommended
[ 854.545797] ext4_init_reserve_inode_table0: sda3, 7441
[ 854.551454] ext4_init_reserve_inode_table2: sda3, 7441, 0, 0, 4096
[ 854.557422] EXT4-fs (sda3): recovery complete
[ 854.583396] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,nodelalloc,noacl
[ 855.976311] md: md9: recovery done.
[ 855.997847] md: recovery of RAID array md13
[ 856.003550] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 856.009722] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 856.015608] md: using 128k window, over a total of 458880k.
[ 856.318204] RAID1 conf printout:
[ 856.318213] --- wd:2 rd:2
[ 856.318221] disk 0, wo:0, o:1, dev:sda1
[ 856.318230] disk 1, wo:0, o:1, dev:sdb1
[ 882.771293] md: md13: recovery done.
[ 882.827902] RAID1 conf printout:
[ 882.827911] --- wd:2 rd:2
[ 882.827920] disk 0, wo:0, o:1, dev:sda4
[ 882.827929] disk 1, wo:0, o:1, dev:sdb4
[ 907.629881] nfsd: last server has exited, flushing export cache
[ 940.254170] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 940.260238] NFSD: starting 90-second grace period
[ 981.224217] PPP generic driver version 2.4.2
[ 981.248674] PPP MPPE Compression module registered
[ 981.259405] PPP BSD Compression module registered
[ 981.289067] PPP Deflate Compression module registered
[ 997.158420] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6608, 63 clusters in bitmap, 62 in gd
[ 997.182376] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6609, 506 clusters in bitmap, 33744 in gd
[ 997.192810] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6610, 0 clusters in bitmap, 32768 in gd
[ 997.201663] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6612, 37 clusters in bitmap, 32979 in gd
[ 997.224489] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6613, 198 clusters in bitmap, 33224 in gd
[ 997.241074] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6615, 69 clusters in bitmap, 33408 in gd
[ 997.249343] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6623, 7 clusters in bitmap, 8 in gd
[ 997.281294] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6625, 255 clusters in bitmap, 44095 in gd
[ 997.287319] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6626, 0 clusters in bitmap, 62773 in gd
[ 997.294072] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6627, 390 clusters in bitmap, 34168 in gd
[ 997.300049] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6628, 77 clusters in bitmap, 33581 in gd
[ 997.306628] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6629, 9 clusters in bitmap, 37959 in gd
[ 997.312397] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6630, 0 clusters in bitmap, 35626 in gd
[ 997.318785] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6631, 0 clusters in bitmap, 40645 in gd
[ 997.324293] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6632, 0 clusters in bitmap, 64095 in gd
[ 997.330646] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6634, 1089 clusters in bitmap, 41703 in gd
[ 997.336037] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6635, 0 clusters in bitmap, 35874 in gd
[ 997.361094] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6636, 993 clusters in bitmap, 42187 in gd
[ 997.375052] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6637, 7 clusters in bitmap, 38632 in gd
[ 997.385256] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6638, 0 clusters in bitmap, 37924 in gd
[ 997.390442] EXT4-fs error (device sda3): ext4_mb_generate_buddy:744: group 6639, 0 clusters in bitmap, 57112 in gd
[ 1000.789599] JBD2: Spotted dirty metadata buffer (dev = sda3, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
[ 1000.808566] JBD2: Spotted dirty metadata buffer (dev = sda3, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
[ 1155.040051] EXT4-fs (sda3): error count: 292
[ 1155.044968] EXT4-fs (sda3): initial error at 1423141444: ext4_mb_generate_buddy:744
[ 1155.050055] EXT4-fs (sda3): last error at 1424288783: ext4_mb_generate_buddy:744
[ 1298.019126] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[ 1371.673856] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[ 1393.994231] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[ 1401.175509] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[43923.680632] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[43932.734710] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44187.705341] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44219.842043] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44238.493241] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44245.047546] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44266.479979] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[44285.385990] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm smbd: deleted inode referenced: 54132737
[84656.248130] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm vs_refresh: deleted inode referenced: 54132737
[84656.251049] EXT4-fs error (device sda3): ext4_lookup:1307: inode #13: comm vs_refresh: deleted inode referenced: 54132737
[87856.096027] EXT4-fs (sda3): error count: 306
[87856.097953] EXT4-fs (sda3): initial error at 1423141444: ext4_mb_generate_buddy:744
[87856.100056] EXT4-fs (sda3): last error at 1424372442: ext4_lookup:1307: inode 13
Code: Select all
[~] # dmesg
wo:1, o:0, dev:sda1
[135500.702785] disk 1, wo:0, o:1, dev:sdb1
[135500.705179] md/raid1:md2: redirecting sector 26688 to other mirror: sdb2
[135500.706046] RAID1 conf printout:
[135500.706053] --- wd:1 rd:2
[135500.706060] disk 1, wo:0, o:1, dev:sdb1
[135500.773240] RAID1 conf printout:
[135500.773250] --- wd:1 rd:2
[135500.773260] disk 0, wo:0, o:1, dev:sdb2
[135500.773269] disk 1, wo:1, o:0, dev:sda2
[135500.779044] RAID1 conf printout:
[135500.779055] --- wd:1 rd:2
[135500.779065] disk 0, wo:0, o:1, dev:sdb2
[135500.814848] md/raid1:md9: redirecting sector 901248 to other mirror: sdb1
[135501.182423] md/raid1:md13: sda4: rescheduling sector 720896
[135501.235114] md/raid1:md13: Disk failure on sda4, disabling device.
[135501.235120] md/raid1:md13: Operation continuing on 1 devices.
[135501.242029] md/raid1:md13: redirecting sector 720896 to other mirror: sdb4
[135501.530160] RAID1 conf printout:
[135501.530171] --- wd:1 rd:2
[135501.530180] disk 0, wo:1, o:0, dev:sda4
[135501.530189] disk 1, wo:0, o:1, dev:sdb4
[135501.534025] RAID1 conf printout:
[135501.534033] --- wd:1 rd:2
[135501.534041] disk 1, wo:0, o:1, dev:sdb4
[135502.272178] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135502.279110] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.284233] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135502.292513] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.296912] EXT4-fs error (device sda3) in ext4_orphan_add:2330: IO failure
[135502.301684] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135502.311196] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.316525] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135502.327399] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.333348] EXT4-fs error (device sda3) in ext4_orphan_add:2330: IO failure
[135502.339634] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135502.352144] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.359216] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #23: block 1058: comm smbstatus: unable to read itable block
[135502.372986] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.381451] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #25: block 1058: comm smbstatus: unable to read itable block
[135502.396089] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135502.991425] EXT4-fs error (device sda3): ext4_wait_block_bitmap:417: comm smbd: Cannot read block bitmap - block_group = 2702, block_bitmap = 88080398
[135503.007143] EXT4-fs error (device sda3): ext4_discard_preallocations:3832: comm smbd: Error loading buddy information for 2702
[135503.024086] EXT4-fs error (device sda3): ext4_wait_block_bitmap:417: comm smbd: Cannot read block bitmap - block_group = 2702, block_bitmap = 88080398
[135503.042128] EXT4-fs error (device sda3): ext4_discard_preallocations:3832: comm smbd: Error loading buddy information for 2702
[135503.062199] EXT4-fs error (device sda3): ext4_wait_block_bitmap:417: comm smbd: Cannot read block bitmap - block_group = 2702, block_bitmap = 88080398
[135503.082664] EXT4-fs error (device sda3): ext4_discard_preallocations:3832: comm smbd: Error loading buddy information for 2702
[135503.160516] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.312408] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.390529] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.409172] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.426882] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.447147] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.465810] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.485066] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.504775] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.524618] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.542459] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.560796] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.578614] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.595152] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135503.611866] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm picd: reading directory lblock 0
[135504.023470] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135504.041459] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135504.050508] EXT4-fs error (device sda3) in ext4_orphan_add:2330: IO failure
[135504.059679] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135504.077618] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135504.086810] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135504.104777] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135504.117178] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135504.135454] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135504.144751] EXT4-fs error (device sda3): __ext4_get_inode_loc:3603: inode #22: block 1058: comm smbstatus: unable to read itable block
[135504.163044] EXT4-fs error (device sda3) in ext4_reserve_inode_write:4429: IO failure
[135504.172204] EXT4-fs error (device sda3) in ext4_orphan_add:2330: IO failure
[135506.976951] md: unbind<sda2>
[135506.990047] md: export_rdev(sda2)
[135508.016096] Aborting journal on device sda3-8.
[135508.025034] Buffer I/O error on device sda3, logical block 121667584
[135508.026010] lost page write due to I/O error on sda3
[135508.043021] JBD2: Error -5 detected when updating journal superblock for sda3-8.
[135510.012937] md: unbind<sda1>
[135510.024039] md: export_rdev(sda1)
[135512.420265] md: unbind<sda4>
[135512.431047] md: export_rdev(sda4)
[135512.753564] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #2: comm btd: reading directory lblock 0
[135533.622391] EXT4-fs error (device sda3): ext4_journal_start_sb:333: Detected aborted journal
[135533.629691] EXT4-fs (sda3): Remounting filesystem read-only
[135656.261226] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #1220636: comm authLogin.cgi: reading directory lblock 0
[135656.545173] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #65: comm apache_proxy: reading directory lblock 0
[135668.772754] EXT4-fs error (device sda3): ext4_find_entry:1145: inode #65: comm appRequest.cgi: reading directory lblock 0
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
I'm somewhat confused by these ext4-fs error messages referencing sda3 ... the file system should be running on the md0 RAID. These messages showed up with the old config already, except that the HDD was not ejected from the RAID. Don't we have an issue here carried forward from the old configuration, kept along with the old HDD?
Have you tried to configure the TS-239 from scratch with just a single one, or a pair, of these drives?
- skykit
- Getting the hang of things
- Posts: 62
- Joined: Mon May 18, 2009 9:20 pm
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
schumaku wrote: I'm somewhat confused by these ext4-fs error messages referencing sda3 ... Have you tried to configure the TS-239 from scratch with just a single one, or a pair, of these drives?
Hi Schumaku, the two HDDs are configured as single drives; there is no RAID setup on my TS-239 Pro. I also get this log entry: "[Single Disk Volume: Drive 1] The file system is not clean. It is suggested that you go to [Storage Manager] to run 'Check File System'."
In Storage Manager, HDD 1 shows as in the attached screenshot.
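(For what it's worth, the GUI's "Check File System" essentially runs e2fsck on the data partition; a manual sketch, assuming the volume is /dev/sda3 and that the QNAP services.sh helper is available to stop the services holding it first:)
Code: Select all
[~] # /etc/init.d/services.sh stop   # assumption: stops the services using the volume
[~] # umount /dev/sda3               # unmount the single-disk data volume
[~] # e2fsck -f -v /dev/sda3         # force a full check; add -y to auto-answer fixes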
- schumaku
- Guru
- Posts: 43578
- Joined: Mon Jan 21, 2008 4:41 pm
- Location: Kloten (Zurich), Switzerland -- Skype: schumaku
- Contact:
Re: TS-239 Pro doesn't work with HGST 3T HDN724030ALE640
Sorry, yes, you are right. It appears the new HDD is kicked from the system RAID partitions (due to I/O errors) and then fails to be accessed as a file system on the sda3 partition. I fear there is not much more we can do here than redirect you to QNAP customer service: http://helpdesk.qnap.com/