Just to chime in here: I'm on a TS-831X with 3 disks (WD Purple WD80PUZX) in Bays 1 through 3, and this morning the drive in Bay 1 failed. The NAS is on the latest firmware (4.3.3.0299 from September). After a few minutes the drive re-attached itself on its own; I didn't touch it, and no power cycle shows up in SMART. SMART looks good on all drives, the power cycle counts are similar (2-2-3), and no errors are logged.
The RAID 5 array is now rebuilding, currently at 14%.
Code:
[/] # dmesg
[1283329.318855] sd 3:0:0:0: [sdc]
[1283329.322167] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[1283329.327899] sd 3:0:0:0: [sdc]
[1283329.331210] Sense Key : Aborted Command [current] [descriptor]
[1283329.337233] Descriptor sense data with sense descriptors (in hex):
[1283329.343586] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
[1283329.350272] 00 00 00 00
[1283329.353722] sd 3:0:0:0: [sdc]
[1283329.357027] Add. Sense: No additional sense information
[1283329.362428] sd 3:0:0:0: [sdc] CDB:
[1283329.366077] Read(16): 88 00 00 00 00 02 56 2b f7 88 00 00 04 00 00 00
[1283329.372913] ata4: EH complete
[1283329.373131] md: requested-resync skipped: md1
[1283329.380758] ata4.00: detaching (SCSI 3:0:0:0)
[1283329.389854] RAID conf printout:
[1283329.389862] --- level:5 rd:3 wd:2
[1283329.389868] disk 0, o:0, dev:sdc3
[1283329.389873] disk 1, o:1, dev:sdb3
[1283329.389877] disk 2, o:1, dev:sda3
[1283329.391170] sd 3:0:0:0: [sdc] Synchronizing SCSI cache
[1283329.397018] sd 3:0:0:0: [sdc]
[1283329.400348] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[1283329.406538] sd 3:0:0:0: [sdc] Stopping disk
[1283329.410925] sd 3:0:0:0: [sdc] START_STOP FAILED
[1283329.415639] sd 3:0:0:0: [sdc]
[1283329.418947] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[1283329.431063] RAID1 conf printout:
[1283329.431069] --- wd:2 rd:24
[1283329.431074] disk 1, wo:0, o:1, dev:sdb1
[1283329.431079] disk 2, wo:0, o:1, dev:sda1
[1283329.431309] RAID conf printout:
[1283329.431315] --- level:5 rd:3 wd:2
[1283329.431321] disk 1, o:1, dev:sdb3
[1283329.431325] disk 2, o:1, dev:sda3
[1283332.322876] md: unbind<sdc1>
[1283332.361075] md: export_rdev(sdc1)
[1283334.417516] md/raid1:md13: Disk failure on sdc4, disabling device.
[1283334.417516] md/raid1:md13: Operation continuing on 2 devices.
[1283334.442644] RAID1 conf printout:
[1283334.442650] --- wd:2 rd:24
[1283334.442655] disk 0, wo:1, o:0, dev:sdc4
[1283334.442659] disk 1, wo:0, o:1, dev:sdb4
[1283334.442663] disk 2, wo:0, o:1, dev:sda4
[1283334.491047] RAID1 conf printout:
[1283334.491053] --- wd:2 rd:24
[1283334.491057] disk 1, wo:0, o:1, dev:sdb4
[1283334.491062] disk 2, wo:0, o:1, dev:sda4
[1283334.515570] md: unbind<sdc4>
[1283334.581035] md: export_rdev(sdc4)
[1283336.637446] md/raid1:md256: Disk failure on sdc2, disabling device.
[1283336.637446] md/raid1:md256: Operation continuing on 1 devices.
[1283336.664196] RAID1 conf printout:
[1283336.664201] --- wd:1 rd:2
[1283336.664206] disk 0, wo:1, o:0, dev:sdc2
[1283336.664210] disk 1, wo:0, o:1, dev:sdb2
[1283336.701076] RAID1 conf printout:
[1283336.701082] --- wd:1 rd:2
[1283336.701087] disk 1, wo:0, o:1, dev:sdb2
[1283336.701098] RAID1 conf printout:
[1283336.701102] --- wd:1 rd:2
[1283336.701106] disk 0, wo:1, o:1, dev:sda2
[1283336.701110] disk 1, wo:0, o:1, dev:sdb2
[1283336.701268] md: recovery of RAID array md256
[1283336.705703] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[1283336.711706] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283336.721432] md: Recovering started: md256
[1283336.724503] md: unbind<sdc2>
[1283336.728653] md: using 1024k window, over a total of 530112k.
[1283336.781115] md: export_rdev(sdc2)
[1283339.976246] md: unbind<sdc3>
[1283340.011078] md: export_rdev(sdc3)
[1283343.084976] md: md256: recovery done.
[1283343.088832] md: Recovering done: md256, degraded=1
[1283343.126956] RAID1 conf printout:
[1283343.126962] --- wd:2 rd:2
[1283343.126967] disk 0, wo:0, o:1, dev:sda2
[1283343.126971] disk 1, wo:0, o:1, dev:sdb2
[1283359.161196] ata4: exception Emask 0x10 SAct 0x0 SErr 0x4050200 action 0xe frozen
[1283359.168755] ata4: irq_stat 0x00400000, PHY RDY changed
[1283359.174067] ata4: SError: { Persist PHYRdyChg CommWake DevExch }
[1283359.180245] ata4: hard resetting link
[1283359.941064] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 330)
[1283360.193144] ata4.00: ATA-9: WDC WD80PUZX-64NEAY0, 80.H0A80, max UDMA/133
[1283360.200014] ata4.00: 15628053168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[1283360.227042] ata4.00: configured for UDMA/133
[1283360.231498] ata4: EH complete
[1283360.234787] scsi 3:0:0:0: Direct-Access WDC WD80PUZX-64NEAY0 80.H PQ: 0 ANSI: 5
[1283360.243049] ata4.00: set queue depth = 31
[1283360.247474] sd 3:0:0:0: [sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.27 TiB)
[1283360.248413] sd 3:0:0:0: Attached scsi generic sg2 type 0
[1283360.260953] sd 3:0:0:0: [sdc] 4096-byte physical blocks
[1283360.266498] sd 3:0:0:0: [sdc] Write Protect is off
[1283360.271468] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[1283360.271540] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[1283360.375773] sdc: sdc1 sdc2 sdc3 sdc4 sdc5
[1283360.382146] sd 3:0:0:0: [sdc] Attached SCSI disk
[1283361.767018] md: bind<sdc1>
[1283361.775304] RAID1 conf printout:
[1283361.775309] --- wd:2 rd:24
[1283361.775314] disk 0, wo:1, o:1, dev:sdc1
[1283361.775318] disk 1, wo:0, o:1, dev:sdb1
[1283361.775322] disk 2, wo:0, o:1, dev:sda1
[1283361.775436] md: recovery of RAID array md9
[1283361.779695] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[1283361.785705] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283361.795429] md: Recovering started: md9
[1283361.799434] md: using 1024k window, over a total of 530048k.
[1283362.889646] md: bind<sdc4>
[1283362.964612] RAID1 conf printout:
[1283362.964619] --- wd:2 rd:24
[1283362.964624] disk 0, wo:1, o:1, dev:sdc4
[1283362.964628] disk 1, wo:0, o:1, dev:sdb4
[1283362.964632] disk 2, wo:0, o:1, dev:sda4
[1283362.964738] md: recovery of RAID array md13
[1283362.969086] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[1283362.975089] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283362.984816] md: Recovering started: md13
[1283362.988908] md: using 1024k window, over a total of 458880k.
[1283364.332542] md: bind<sdc2>
[1283364.348745] RAID1 conf printout:
[1283364.348751] --- wd:2 rd:2
[1283364.348756] disk 0, wo:0, o:1, dev:sda2
[1283364.348760] disk 1, wo:0, o:1, dev:sdb2
[1283365.466779] md: export_rdev(sdc3)
[1283365.698505] md: bind<sdc3>
[1283365.784450] RAID conf printout:
[1283365.784456] --- level:5 rd:3 wd:2
[1283365.784461] disk 0, o:1, dev:sdc3
[1283365.784465] disk 1, o:1, dev:sdb3
[1283365.784469] disk 2, o:1, dev:sda3
[1283365.791124] md: delaying recovery of md1 until md13 has finished (they share one or more physical units)
[1283378.887401] md: md13: recovery done.
[1283378.891159] md: Recovering done: md13, degraded=22
[1283378.896843] md: delaying recovery of md1 until md9 has finished (they share one or more physical units)
[1283379.005069] RAID1 conf printout:
[1283379.005075] --- wd:3 rd:24
[1283379.005080] disk 0, wo:0, o:1, dev:sdc4
[1283379.005084] disk 1, wo:0, o:1, dev:sdb4
[1283379.005087] disk 2, wo:0, o:1, dev:sda4
[1283379.960976] md: md9: recovery done.
[1283379.964651] md: Recovering done: md9, degraded=22
[1283379.973640] md: recovery of RAID array md1
[1283379.977903] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[1283379.983915] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[1283379.993639] md: Recovering started: md1
[1283379.997644] md: using 1024k window, over a total of 7804071424k.
[1283380.060831] RAID1 conf printout:
[1283380.060837] --- wd:3 rd:24
[1283380.060842] disk 0, wo:0, o:1, dev:sdc1
[1283380.060846] disk 1, wo:0, o:1, dev:sdb1
[1283380.060849] disk 2, wo:0, o:1, dev:sda1
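For anyone else watching a rebuild like this over SSH: `cat /proc/mdstat` and `mdadm --detail /dev/md1` show the progress (md1 is the data array on my box; yours may differ). If you want the percentage on its own, say for a cron job, a rough sketch is below; the `recovery` line is sample text in the standard mdstat format, not output from my array:

```shell
# On the NAS itself you'd read the live status with:
#   cat /proc/mdstat
#   mdadm --detail /dev/md1
# The mdstat recovery line looks roughly like this (sample values):
line='[==>..................]  recovery = 14.0% (1092650688/7804071424) finish=612.3min speed=182713K/sec'
# Pull out just the percentage so a script can act on it:
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
echo "rebuild at ${pct}%"
```

On my array that prints "rebuild at 14.0%" for the sample line above.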