"Disk Failed", then "Disk Unplugged" errors

Questions about SNMP, Power, System, Logs, disk, & RAID.
bomberman447
New here
Posts: 2
Joined: Sun May 03, 2015 5:40 am

Re: "Disk Failed", then "Disk Unplugged" errors

Post by bomberman447 » Mon Sep 11, 2017 2:20 pm

MrVideo wrote:
bomberman447 wrote: I have a TS-451 with 4x WD Red 5TB, and Drive 1 keeps doing this as well. It has happened about four times now, starting in July. No SMART issues on the drive (Drive 1); I plug it back in, it rebuilds, then a few weeks later it happens again. I have not been using it continuously, though, only really on weekends, just to check for issues.

Software version and build date?

This issue has been hitting drives 3 & 4 on some models. You do not fit the pattern, as yours is drive 1.

Open a QNAP helpdesk support ticket.

4.3.3.0262 Build 20170727

I see there is a firmware update available, but I don't want to turn the NAS on again until I have time to let it rebuild. I have opened a ticket with the helpdesk.

whoslistening
New here
Posts: 9
Joined: Fri Nov 28, 2014 9:03 am

Re: "Disk Failed", then "Disk Unplugged" errors

Post by whoslistening » Mon Sep 11, 2017 11:06 pm

The rebuild keeps failing - the disk goes into the whole "disk unplugged" / "detected but inaccessible" loop every 12-15 hours. I have received no response to my ticket so far, and honestly I just don't trust QNAP with my data anymore.

I can understand a hardware failure, but this thread makes it clear that this isn't something unique to my NAS - we all seem to have the same issue across different models.

I have backups of all my data, but even so, the whole point of a NAS is to provide reliable network storage. Repeatedly marking known-good disks (they are brand new and have zero SMART errors) as bad and failing to rebuild the array isn't very reliable.

I have turned off the NAS and will buy another one that's not QNAP.

JWChris
New here
Posts: 4
Joined: Sun Sep 10, 2017 4:16 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by JWChris » Tue Sep 12, 2017 2:10 am

I still haven't received my replacement disk, but it is looking increasingly unlikely that I will buy another QNAP/WD Red disk combination.
I think QNAP should be reporting back on this issue to those who have raised tickets, as it seems clear that it is not just a disk hardware failure issue.

whoslistening
New here
Posts: 9
Joined: Fri Nov 28, 2014 9:03 am

Re: "Disk Failed", then "Disk Unplugged" errors

Post by whoslistening » Tue Sep 12, 2017 5:33 am

Final update - QNAP support responded and told me that this is a backplane issue, and that an RMA would cost more than $200 plus shipping, as the unit is out of warranty (bought in 2014).

I told them not to worry, I've already bought another NAS - one that has a 3-year warranty and a reputation for quality. This is my last post here as I'm never going to buy QNAP again.

I hope everyone else gets a resolution to this issue. Good luck!

JWChris
New here
Posts: 4
Joined: Sun Sep 10, 2017 4:16 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by JWChris » Tue Sep 12, 2017 6:49 am

Given the number of people on this thread who have reported problems with WD Red disks, I am starting to think that the best option for getting my TS-669 Pro properly operational again is to put a different manufacturer's disk in Drive 3 to replace the one that is now reported as abnormal, even though the anecdotal evidence suggests that it is the QNAP rather than the drive that is at fault.
Does anybody have a recommendation for another manufacturer's 4TB disk that has proved reliable with QNAP? I have heard that Seagate drives are noisy, and as I use the NAS mainly for listening to high-res audio in the same room as the NAS, that would be something to avoid.
Would Toshiba or Hitachi be a good bet?
Assuming I can get the TS-669 Pro back to using 6 disks, and that QNAP continues to maintain radio silence about any problems, I will also look to get a second NAS from another manufacturer, as I am no longer fully confident in QNAP as my only NAS provider.

P3R
Guru
Posts: 10981
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: "Disk Failed", then "Disk Unplugged" errors

Post by P3R » Tue Sep 12, 2017 7:03 am

JWChris wrote: I have heard that Seagate drives are noisy, and as I use the NAS mainly for listening to high-res audio in the same room as the NAS, that would be something to avoid.
Would Toshiba or Hitachi be a good bet?
Very unlikely. I doubt you can find any disk as silent as the WD Red. My bet would be on the Seagate IronWolf (up to 4 TB, as the larger models are noisier) as the next step up from the WD Red.
RAID has never been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!

A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.

All data storage consists of both the primary storage and the backups. It's your money and your data, spend the storage budget wisely or pay with your data!

MrVideo
Experience counts
Posts: 4654
Joined: Fri May 03, 2013 2:26 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by MrVideo » Tue Sep 12, 2017 7:07 am

I have had 8 WD Red drives in my TS-869L for quite a while now. They have not given me any issues.
QTS 4.1.n/4.2.n/4.3.n MANUAL
Submit QNAP Support Ticket - QNAP Tutorials, FAQs, Downloads, Wiki - Product Support Status - Moogle's QNAP FAQ help V2
When you ask a question, please include the following
(Thanks to Toxic17 for the links)
QNAP md_checker nasreport (release 20180525)
===============================
Model: TS-869L -- RAM: 3G -- FW: QTS 4.1.4 Build 20150522 (used for data storage)
WD60EFRX-68L0BN1(x1)/68MYMN1(x7) Red HDDs -- RAID6: 8x6TB -- Cold spare: 1x6TB
Entware
===============================
Model: TS-451A -- RAM: 2G -- FW: QTS 4.2.3 Build 20170213 (used as a video server)
WL3000GSA6472(x3) White label NAS HDDs -- RAID5: 3x3TB
Entware -- MyKodi 17.3 (default is Kodi 16)
===============================
My 2017 Total Solar Eclipse Photos

NoUser2017
First post
Posts: 1
Joined: Sun Sep 24, 2017 7:24 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by NoUser2017 » Sun Sep 24, 2017 7:32 pm

Just to chime in here: I am on a TS-831X with 3 disks (WD Purple WD80PUZX) in Bays 1 through 3, and this morning Bay 1 failed. The NAS is running the latest firmware (4.3.3.0299 from September). After a few minutes, the drive re-plugged itself without any action on my part, and no power cycling shows up in SMART. SMART looks good on all drives, and the power-cycle counts are similar (2-2-3).

RAID 5 array now rebuilding at 14%.

I ran dmesg via SSH; this is the output:

Code:

[/] # dmesg
[1283329.318855] sd 3:0:0:0: [sdc]
[1283329.322167] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[1283329.327899] sd 3:0:0:0: [sdc]
[1283329.331210] Sense Key : Aborted Command [current] [descriptor]
[1283329.337233] Descriptor sense data with sense descriptors (in hex):
[1283329.343586]         72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
[1283329.350272]         00 00 00 00
[1283329.353722] sd 3:0:0:0: [sdc]
[1283329.357027] Add. Sense: No additional sense information
[1283329.362428] sd 3:0:0:0: [sdc] CDB:
[1283329.366077] Read(16): 88 00 00 00 00 02 56 2b f7 88 00 00 04 00 00 00
[1283329.372913] ata4: EH complete
[1283329.373131] md: requested-resync skipped: md1
[1283329.380758] ata4.00: detaching (SCSI 3:0:0:0)
[1283329.389854] RAID conf printout:
[1283329.389862]  --- level:5 rd:3 wd:2
[1283329.389868]  disk 0, o:0, dev:sdc3
[1283329.389873]  disk 1, o:1, dev:sdb3
[1283329.389877]  disk 2, o:1, dev:sda3
[1283329.391170] sd 3:0:0:0: [sdc] Synchronizing SCSI cache
[1283329.397018] sd 3:0:0:0: [sdc]
[1283329.400348] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[1283329.406538] sd 3:0:0:0: [sdc] Stopping disk
[1283329.410925] sd 3:0:0:0: [sdc] START_STOP FAILED
[1283329.415639] sd 3:0:0:0: [sdc]
[1283329.418947] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[1283329.431063] RAID1 conf printout:
[1283329.431069]  --- wd:2 rd:24
[1283329.431074]  disk 1, wo:0, o:1, dev:sdb1
[1283329.431079]  disk 2, wo:0, o:1, dev:sda1
[1283329.431309] RAID conf printout:
[1283329.431315]  --- level:5 rd:3 wd:2
[1283329.431321]  disk 1, o:1, dev:sdb3
[1283329.431325]  disk 2, o:1, dev:sda3
[1283332.322876] md: unbind<sdc1>
[1283332.361075] md: export_rdev(sdc1)
[1283334.417516] md/raid1:md13: Disk failure on sdc4, disabling device.
[1283334.417516] md/raid1:md13: Operation continuing on 2 devices.
[1283334.442644] RAID1 conf printout:
[1283334.442650]  --- wd:2 rd:24
[1283334.442655]  disk 0, wo:1, o:0, dev:sdc4
[1283334.442659]  disk 1, wo:0, o:1, dev:sdb4
[1283334.442663]  disk 2, wo:0, o:1, dev:sda4
[1283334.491047] RAID1 conf printout:
[1283334.491053]  --- wd:2 rd:24
[1283334.491057]  disk 1, wo:0, o:1, dev:sdb4
[1283334.491062]  disk 2, wo:0, o:1, dev:sda4
[1283334.515570] md: unbind<sdc4>
[1283334.581035] md: export_rdev(sdc4)
[1283336.637446] md/raid1:md256: Disk failure on sdc2, disabling device.
[1283336.637446] md/raid1:md256: Operation continuing on 1 devices.
[1283336.664196] RAID1 conf printout:
[1283336.664201]  --- wd:1 rd:2
[1283336.664206]  disk 0, wo:1, o:0, dev:sdc2
[1283336.664210]  disk 1, wo:0, o:1, dev:sdb2
[1283336.701076] RAID1 conf printout:
[1283336.701082]  --- wd:1 rd:2
[1283336.701087]  disk 1, wo:0, o:1, dev:sdb2
[1283336.701098] RAID1 conf printout:
[1283336.701102]  --- wd:1 rd:2
[1283336.701106]  disk 0, wo:1, o:1, dev:sda2
[1283336.701110]  disk 1, wo:0, o:1, dev:sdb2
[1283336.701268] md: recovery of RAID array md256
[1283336.705703] md: minimum _guaranteed_  speed: 5000 KB/sec/disk.
[1283336.711706] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283336.721432] md: Recovering started: md256
[1283336.724503] md: unbind<sdc2>
[1283336.728653] md: using 1024k window, over a total of 530112k.
[1283336.781115] md: export_rdev(sdc2)
[1283339.976246] md: unbind<sdc3>
[1283340.011078] md: export_rdev(sdc3)
[1283343.084976] md: md256: recovery done.
[1283343.088832] md: Recovering done: md256, degraded=1
[1283343.126956] RAID1 conf printout:
[1283343.126962]  --- wd:2 rd:2
[1283343.126967]  disk 0, wo:0, o:1, dev:sda2
[1283343.126971]  disk 1, wo:0, o:1, dev:sdb2
[1283359.161196] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4050200 action 0xe frozen
[1283359.168755] ata4: irq_stat 0x00400000, PHY RDY changed
[1283359.174067] ata4: SError: { Persist PHYRdyChg CommWake DevExch }
[1283359.180245] ata4: hard resetting link
[1283359.941064] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 330)
[1283360.193144] ata4.00: ATA-9: WDC WD80PUZX-64NEAY0, 80.H0A80, max UDMA/133
[1283360.200014] ata4.00: 15628053168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[1283360.227042] ata4.00: configured for UDMA/133
[1283360.231498] ata4: EH complete
[1283360.234787] scsi 3:0:0:0: Direct-Access     WDC      WD80PUZX-64NEAY0 80.H PQ: 0 ANSI: 5
[1283360.243049] ata4.00: set queue depth = 31
[1283360.247474] sd 3:0:0:0: [sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.27 TiB)
[1283360.248413] sd 3:0:0:0: Attached scsi generic sg2 type 0
[1283360.260953] sd 3:0:0:0: [sdc] 4096-byte physical blocks
[1283360.266498] sd 3:0:0:0: [sdc] Write Protect is off
[1283360.271468] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[1283360.271540] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[1283360.375773]  sdc: sdc1 sdc2 sdc3 sdc4 sdc5
[1283360.382146] sd 3:0:0:0: [sdc] Attached SCSI disk
[1283361.767018] md: bind<sdc1>
[1283361.775304] RAID1 conf printout:
[1283361.775309]  --- wd:2 rd:24
[1283361.775314]  disk 0, wo:1, o:1, dev:sdc1
[1283361.775318]  disk 1, wo:0, o:1, dev:sdb1
[1283361.775322]  disk 2, wo:0, o:1, dev:sda1
[1283361.775436] md: recovery of RAID array md9
[1283361.779695] md: minimum _guaranteed_  speed: 5000 KB/sec/disk.
[1283361.785705] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283361.795429] md: Recovering started: md9
[1283361.799434] md: using 1024k window, over a total of 530048k.
[1283362.889646] md: bind<sdc4>
[1283362.964612] RAID1 conf printout:
[1283362.964619]  --- wd:2 rd:24
[1283362.964624]  disk 0, wo:1, o:1, dev:sdc4
[1283362.964628]  disk 1, wo:0, o:1, dev:sdb4
[1283362.964632]  disk 2, wo:0, o:1, dev:sda4
[1283362.964738] md: recovery of RAID array md13
[1283362.969086] md: minimum _guaranteed_  speed: 5000 KB/sec/disk.
[1283362.975089] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283362.984816] md: Recovering started: md13
[1283362.988908] md: using 1024k window, over a total of 458880k.
[1283364.332542] md: bind<sdc2>
[1283364.348745] RAID1 conf printout:
[1283364.348751]  --- wd:2 rd:2
[1283364.348756]  disk 0, wo:0, o:1, dev:sda2
[1283364.348760]  disk 1, wo:0, o:1, dev:sdb2
[1283365.466779] md: export_rdev(sdc3)
[1283365.698505] md: bind<sdc3>
[1283365.784450] RAID conf printout:
[1283365.784456]  --- level:5 rd:3 wd:2
[1283365.784461]  disk 0, o:1, dev:sdc3
[1283365.784465]  disk 1, o:1, dev:sdb3
[1283365.784469]  disk 2, o:1, dev:sda3
[1283365.791124] md: delaying recovery of md1 until md13 has finished (they share one or more physical units)
[1283378.887401] md: md13: recovery done.
[1283378.891159] md: Recovering done: md13, degraded=22
[1283378.896843] md: delaying recovery of md1 until md9 has finished (they share one or more physical units)
[1283379.005069] RAID1 conf printout:
[1283379.005075]  --- wd:3 rd:24
[1283379.005080]  disk 0, wo:0, o:1, dev:sdc4
[1283379.005084]  disk 1, wo:0, o:1, dev:sdb4
[1283379.005087]  disk 2, wo:0, o:1, dev:sda4
[1283379.960976] md: md9: recovery done.
[1283379.964651] md: Recovering done: md9, degraded=22
[1283379.973640] md: recovery of RAID array md1
[1283379.977903] md: minimum _guaranteed_  speed: 5000 KB/sec/disk.
[1283379.983915] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283379.993639] md: Recovering started: md1
[1283379.997644] md: using 1024k window, over a total of 7804071424k.
[1283380.060831] RAID1 conf printout:
[1283380.060837]  --- wd:3 rd:24
[1283380.060842]  disk 0, wo:0, o:1, dev:sdc1
[1283380.060846]  disk 1, wo:0, o:1, dev:sdb1
[1283380.060849]  disk 2, wo:0, o:1, dev:sda1
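While md1 resyncs, the rebuild progress can be watched from the same SSH session via /proc/mdstat. A minimal sketch of pulling the percentage out of that file (the sample content below is illustrative, not my actual output):

```shell
# Illustrative /proc/mdstat fragment for an md1 RAID 5 rebuild;
# on the NAS you would read the real file with: cat /proc/mdstat
sample='md1 : active raid5 sdc3[3] sdb3[1] sda3[2]
      [==>..................]  recovery = 14.0% (1092570000/7804071424) finish=600min'

# Extract the current rebuild percentage from the status text.
progress=$(printf '%s\n' "$sample" | grep -o 'recovery = *[0-9.]*%')
echo "$progress"   # prints: recovery = 14.0%
```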

tvbcrsa
New here
Posts: 7
Joined: Sat Feb 18, 2017 3:29 am

Re: "Disk Failed", then "Disk Unplugged" errors

Post by tvbcrsa » Fri Sep 29, 2017 3:25 pm

Welcome to the club! Pull up a chair, fill out an RMA, and go find something else to do for six weeks :-P

NoUser2017 wrote: Just to chime in here: I am on a TS-831X with 3 disks (WD Purple WD80PUZX) in Bays 1 through 3, and this morning Bay 1 failed. The NAS is running the latest firmware (4.3.3.0299 from September). After a few minutes, the drive re-plugged itself without any action on my part, and no power cycling shows up in SMART. SMART looks good on all drives, and the power-cycle counts are similar (2-2-3).

RAID 5 array now rebuilding at 14%.

I did a dmesg via SSH [dmesg output snipped - see the post above].

antoniovm
New here
Posts: 4
Joined: Fri Aug 25, 2017 9:54 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by antoniovm » Thu Oct 12, 2017 5:47 pm

Here's how my case ended: Esprinet replaced the hardware under warranty, but demanded an additional charge (the difference between the purchase price and the current price).
I believe that Italian law does not allow charging for a replacement under warranty.

Have you had similar experiences?

dolbyman
Guru
Posts: 14820
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: "Disk Failed", then "Disk Unplugged" errors

Post by dolbyman » Thu Oct 12, 2017 11:02 pm

I have never heard of paying a price difference on a warranty exchange (unless they switched you to a different model... with prior consent).

Sounds like a fraudulent charge to me.

Ian Hill
New here
Posts: 2
Joined: Sat Oct 21, 2017 3:50 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by Ian Hill » Sat Oct 21, 2017 4:31 pm

I've had exactly the same problem happen twice in the last couple of months. I have a TS-453A with 4 x WD Red WD20EFRX drives. Last night Drive 1 suddenly 'disconnected' and then reconnected. I've logged a ticket via the Helpdesk; I am not sure if this is a drive issue or a QNAP issue. The drive has no SMART errors and is now set to Free, as the hot spare in Bay 4 kicked in and the RAID is currently rebuilding. Below is my recent log:

391 Information 21/10/2017 04:56:35 System 127.0.0.1 localhost Host: Disk 1 plugged in.
390 Error 21/10/2017 04:56:35 System 127.0.0.1 localhost Host: Disk 1 unplugged.
389 Information 21/10/2017 04:56:31 System 127.0.0.1 localhost Host: Disk 1 Device removed successfully.
388 Information 21/10/2017 04:56:23 System 127.0.0.1 localhost [Pool 1] Started rebuilding with RAID Group 1.
387 Error 21/10/2017 04:56:22 System 127.0.0.1 localhost A drive has been detected but is inaccessible. Please check it for faults.
386 Warning 21/10/2017 04:56:21 System 127.0.0.1 localhost "[RAID Group 1] Skip data migration from Drive 1 to Drive 4"
385 Error 21/10/2017 04:56:20 System 127.0.0.1 localhost [Volume DataVol1, Pool 1] Host: Disk 1 failed.
384 Warning 21/10/2017 04:56:07 System 127.0.0.1 localhost [Bad Block Log]: Invalid data found on Host: Disk 1 at sector (1347636224, 8, 1).
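Since the thread keeps pointing at the backplane rather than the disks themselves, one attribute worth a look over SSH is the SATA CRC error count, which increments on link (cable/backplane) errors rather than media faults. A sketch of reading it from `smartctl -A`-style output; the availability of smartctl on QTS and the sample values below are assumptions, not my actual readings:

```shell
# Illustrative smartctl -A attribute line; on the NAS you would run e.g.:
#   smartctl -A /dev/sda
line='199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0'

# The raw value is the last field; a non-zero, growing count points at the
# SATA link (cable/backplane) rather than the platters.
crc=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "UDMA CRC errors: $crc"
```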

MrVideo
Experience counts
Posts: 4654
Joined: Fri May 03, 2013 2:26 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by MrVideo » Sun Oct 22, 2017 3:05 am

Do not double-post. Your original post is all that you need.

rdluu
New here
Posts: 5
Joined: Fri Oct 09, 2009 12:16 pm

Re: "Disk Failed", then "Disk Unplugged" errors

Post by rdluu » Tue Nov 14, 2017 12:54 am

The TVS-473-8G is a piece of junk. I sent the unit back for a backboard replacement after 2 months, when one day 2 of the 4 disks were not recognized. Then, after I had it back up and running for a month, the newly installed backboard malfunctioned. Why do I need a NAS if it malfunctions and I lose my data? Support takes forever to get back in touch after submitting for help. Its **. This truly is the shoddiest product I have used in a very long time. I have lost all of my data now on two separate occasions! Luckily I keep the original files, but who wants to spend all their waking moments setting up the NAS again and again?

QNAP should be embarrassed this is happening. They need to give us new machines!

frea1964
New here
Posts: 5
Joined: Sun Mar 24, 2013 7:45 am

Re: "Disk Failed", then "Disk Unplugged" errors

Post by frea1964 » Thu Nov 16, 2017 7:38 pm

Hello all,

I got the same errors last night :(
I have a TS-453A with 4 x Seagate Enterprise 2TB 7200RPM SATA 6Gb/s 128MB drives in RAID 5; the firmware version is 4.3.3.0361.

Last night's SMART test was OK for all four HDs, but then I got "Host: Disk 2 Disable NCQ since timeout error" followed by the following errors/warnings:
1) A drive has been detected but is inaccessible. Please check it for faults
2) [RAID5 Disk Volume Host Drive: 1 2 3 4] Host: Disk 2 Device removed successfully
3) [RAID5 Disk Volume Host Drive: 1 2 3 4] RAID device in degraded mode
4) [RAID5 Disk Volume Host Drive: 1 2 3 4] Host: Disk 2 failed
5) Host: Disk 2 unplugged

Rebooting, or taking the HD out and plugging it back in, did not help; same errors. I just ordered a new HD and also opened a ticket with QNAP Support... fingers crossed.
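For what it's worth, the "Disable NCQ since timeout error" message suggests QTS itself dropped NCQ for that disk. On a stock Linux kernel the same thing can be checked (or done by hand) through sysfs; a sketch, assuming the standard queue_depth knob is exposed on QTS:

```shell
# Build the sysfs path for a disk's NCQ queue depth (standard Linux layout;
# presence of this knob on QTS is an assumption).
queue_depth_path() {
  echo "/sys/block/$1/device/queue_depth"
}

# Usage on the NAS (not run here):
#   cat "$(queue_depth_path sdb)"        # current depth, e.g. 31
#   echo 1 > "$(queue_depth_path sdb)"   # depth 1 effectively disables NCQ
queue_depth_path sdb
```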
Fabio

Model: TS-453A

