not seeing new disk

stephencicero
New here
Posts: 3
Joined: Tue Oct 09, 2012 10:16 am

not seeing new disk

Post by stephencicero »

I have a TS-859 in RAID 5. A few of the disks had warnings on their SMART status. I replaced bay 2 without a problem, but doing so sent bay 5 into a failed status. I removed that disk and installed a new one, and Storage Manager won't recognize it. Thinking it was the disk, I grabbed another one and got the same thing. I put the original back in and there was no change. I found another thread with some commands I could run and am posting the output here; it looks like the drive is seen. The status light is on, and I can feel the drive spinning when I pull it out. Can someone look at this, please?

[~] # df -h
Filesystem Size Used Available Use% Mounted on
/dev/ramdisk 151.1M 136.7M 14.4M 90% /
tmpfs 64.0M 360.0k 63.6M 1% /tmp
tmpfs 494.4M 28.0k 494.4M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/sda4 371.0M 272.7M 98.2M 74% /mnt/ext
/dev/md9 509.5M 123.6M 385.8M 24% /mnt/HDA_ROOT
/dev/md0 12.6T 1.0T 11.6T 8% /share/MD0_DATA
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active (read-only) raid5 sdb3[8] sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4](F) sdd3[3] sdc3[2]
13663619200 blocks level 5, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
bitmap: 1/233 pages [4KB], 4096KB chunk

md8 : active raid1 sdb2[2](S) sdh2[8] sdg2[7](S) sdf2[6](S) sdd2[4](S) sdc2[3](S) sda2[0]
530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb4[1] sda4[0] sdh4[7] sdg4[6] sdf4[5] sdd4[3] sdc4[2]
458880 blocks [8/7] [UUUU_UUU]
bitmap: 6/57 pages [24KB], 4KB chunk

md9 : active raid1 sdb1[1] sda1[0] sdh1[7] sdg1[6] sdf1[4] sdd1[3] sdc1[2]
530048 blocks [8/7] [UUUUU_UU]
bitmap: 25/65 pages [100KB], 4KB chunk

unused devices: <none>
[~] # hdparm -i /dev/sda 2>/dev/null | grep Model
Model=WDC WD20EADS-00S2B0 , FwRev=04.05G04, SerialNo= WD-WCAVY0684149
[~] # hdparm -i /dev/sdb 2>/dev/null | grep Model
Model=WDC WD20EADS-00R6B0 , FwRev=01.00A01, SerialNo= WD-WCAVY0367253
[~] # hdparm -i /dev/sdc 2>/dev/null | grep Model
Model=WDC WD20EARS-00MVWB0 , FwRev=50.0AB50, SerialNo= WD-WMAZA0220720
[~] # hdparm -i /dev/sdd 2>/dev/null | grep Model
Model=WDC WD20EADS-00S2B0 , FwRev=04.05G04, SerialNo= WD-WCAVY0520065
[~] # dmesg
[993267.026305] sd 8:0:0:0: [sde] Sense Key : Medium Error [current]
[993267.029272] Info fld=0xd32a13c3
[993267.032152] sd 8:0:0:0: [sde] Add. Sense: No additional sense information
[993267.035113] sd 8:0:0:0: [sde] CDB: Read(10): 28 00 d3 2a 13 c0 00 00 18 00
[993267.038116] end_request: I/O error, dev sde, sector 3542750144
[993267.041032] md/raid:md0: read error not correctable (sector 3540629560 on sde3).
[993267.043965] raid5: some error occurred in a active device:4 of md0.
[993267.046890] md/raid:md0: read error not correctable (sector 3540629568 on sde3).
[993267.049956] raid5: some error occurred in a active device:4 of md0.
[993267.053457] md/raid:md0: read error not correctable (sector 3540629576 on sde3).
[993267.056659] raid5: some error occurred in a active device:4 of md0.
[993287.408931] sd 8:0:0:0: [sde] Unhandled sense code
[993287.409853] sd 8:0:0:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[993287.409853] sd 8:0:0:0: [sde] Sense Key : Medium Error [current]
[993287.409853] Info fld=0xd32a13c3
[993287.409853] sd 8:0:0:0: [sde] Add. Sense: No additional sense information
[993287.409853] sd 8:0:0:0: [sde] CDB: Read(10): 28 00 d3 2a 13 c0 00 00 18 00
[993287.409853] end_request: I/O error, dev sde, sector 3542750144
[993287.409853] md/raid:md0: read error not correctable (sector 3540629560 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993287.409853] md/raid:md0: read error not correctable (sector 3540629568 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993287.409853] md/raid:md0: read error not correctable (sector 3540629576 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993304.376572] md: md0: recovery done.
[993304.593764] md: recovery skipped: md0
[993305.195995] RAID conf printout:
[993305.196035] --- level:5 rd:8 wd:7
[993305.196043] disk 0, o:1, dev:sda3
[993305.196050] disk 1, o:1, dev:sdb3
[993305.196056] disk 2, o:1, dev:sdc3
[993305.196063] disk 3, o:1, dev:sdd3
[993305.196070] disk 4, o:1, dev:sde3
[993305.196077] disk 5, o:1, dev:sdf3
[993305.196083] disk 6, o:1, dev:sdg3
[993305.196088] disk 7, o:1, dev:sdh3
[993305.196092] RAID conf printout:
[993305.196097] --- level:5 rd:8 wd:7
[993305.196103] disk 0, o:1, dev:sda3
[993305.196109] disk 1, o:1, dev:sdb3
[993305.196116] disk 2, o:1, dev:sdc3
[993305.196123] disk 3, o:1, dev:sdd3
[993305.196129] disk 4, o:1, dev:sde3
[993305.196136] disk 5, o:1, dev:sdf3
[993305.196143] disk 6, o:1, dev:sdg3
[993305.196150] disk 7, o:1, dev:sdh3
[993305.196299] md: recovery of RAID array md0
[993305.200533] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[993305.204854] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[993305.209365] md: Recovering started: md0
[993305.213835] md: using 128k window, over a total of 1951945600k.
[993305.218317] md: resuming recovery of md0 from checkpoint.
[993357.051040] md: md_do_sync() got signal ... exiting
[993357.056928] md: recovery skipped: md0
[993357.155775] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[993357.155781] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[993357.155784]
[993360.183880] ext4_init_reserve_inode_table0: md0, 104246
[993360.188411] ext4_init_reserve_inode_table2: md0, 104246, 0, 0, 4096
[993360.193291] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl
[1211761.803755] [0 0] Detect fake interrupts.
[1211762.881755] [0 0] Fake interrupts detection finished.
[1211764.851828] sd 8:0:0:0: [sde] Synchronizing SCSI cache
[1211764.858008] sd 8:0:0:0: [sde] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[1211765.174433] md/raid:md0: read error not correctable (sector 328697472 on sde3).
[1211765.175377] raid5: some error occurred in a active device:4 of md0.
[1211765.175377] md/raid:md0: Disk failure on sde3, disabling device.
[1211765.175377] md/raid:md0: Operation continuing on 6 devices.
[1211766.540304] md: super_written gets error=-19, uptodate=0
[1211766.541012] md/raid1:md9: Disk failure on sde1, disabling device.
[1211766.541012] md/raid1:md9: Operation continuing on 7 devices.
[1211766.622300] RAID1 conf printout:
[1211766.622311] --- wd:7 rd:8
[1211766.622318] disk 0, wo:0, o:1, dev:sda1
[1211766.622324] disk 1, wo:0, o:1, dev:sdb1
[1211766.622330] disk 2, wo:0, o:1, dev:sdc1
[1211766.622336] disk 3, wo:0, o:1, dev:sdd1
[1211766.622343] disk 4, wo:0, o:1, dev:sdf1
[1211766.622348] disk 5, wo:1, o:0, dev:sde1
[1211766.622354] disk 6, wo:0, o:1, dev:sdg1
[1211766.622360] disk 7, wo:0, o:1, dev:sdh1
[1211766.636022] RAID1 conf printout:
[1211766.636028] --- wd:7 rd:8
[1211766.636034] disk 0, wo:0, o:1, dev:sda1
[1211766.636039] disk 1, wo:0, o:1, dev:sdb1
[1211766.636044] disk 2, wo:0, o:1, dev:sdc1
[1211766.636050] disk 3, wo:0, o:1, dev:sdd1
[1211766.636055] disk 4, wo:0, o:1, dev:sdf1
[1211766.636060] disk 6, wo:0, o:1, dev:sdg1
[1211766.636066] disk 7, wo:0, o:1, dev:sdh1
[1211767.485873] md/raid1:md8: Disk failure on sde2, disabling device.
[1211767.485877] md/raid1:md8: Operation continuing on 2 devices.
[1211767.529491] RAID1 conf printout:
[1211767.529500] --- wd:2 rd:2
[1211767.529507] disk 0, wo:0, o:1, dev:sda2
[1211767.529513] disk 1, wo:0, o:1, dev:sdh2
[1211767.529518] RAID1 conf printout:
[1211767.529522] --- wd:2 rd:2
[1211767.529526] disk 0, wo:0, o:1, dev:sda2
[1211767.529532] disk 1, wo:0, o:1, dev:sdh2
[1211767.529536] RAID1 conf printout:
[1211767.529540] --- wd:2 rd:2
[1211767.529545] disk 0, wo:0, o:1, dev:sda2
[1211767.529551] disk 1, wo:0, o:1, dev:sdh2
[1211767.529555] RAID1 conf printout:
[1211767.529559] --- wd:2 rd:2
[1211767.529564] disk 0, wo:0, o:1, dev:sda2
[1211767.529569] disk 1, wo:0, o:1, dev:sdh2
[1211767.529575] RAID1 conf printout:
[1211767.529580] --- wd:2 rd:2
[1211767.529587] disk 0, wo:0, o:1, dev:sda2
[1211767.529595] disk 1, wo:0, o:1, dev:sdh2
[1211769.500324] md: unbind<sde2>
[1211769.512029] md: export_rdev(sde2)
[1211770.704279] md: super_written gets error=-19, uptodate=0
[1211770.705104] md/raid1:md13: Disk failure on sde4, disabling device.
[1211770.705104] md/raid1:md13: Operation continuing on 7 devices.
[1211770.794402] RAID1 conf printout:
[1211770.794413] --- wd:7 rd:8
[1211770.794419] disk 0, wo:0, o:1, dev:sda4
[1211770.794425] disk 1, wo:0, o:1, dev:sdb4
[1211770.794431] disk 2, wo:0, o:1, dev:sdc4
[1211770.794436] disk 3, wo:0, o:1, dev:sdd4
[1211770.794441] disk 4, wo:1, o:0, dev:sde4
[1211770.794447] disk 5, wo:0, o:1, dev:sdf4
[1211770.794452] disk 6, wo:0, o:1, dev:sdg4
[1211770.794458] disk 7, wo:0, o:1, dev:sdh4
[1211770.807019] RAID1 conf printout:
[1211770.807026] --- wd:7 rd:8
[1211770.807031] disk 0, wo:0, o:1, dev:sda4
[1211770.807036] disk 1, wo:0, o:1, dev:sdb4
[1211770.807042] disk 2, wo:0, o:1, dev:sdc4
[1211770.807047] disk 3, wo:0, o:1, dev:sdd4
[1211770.807052] disk 5, wo:0, o:1, dev:sdf4
[1211770.807057] disk 6, wo:0, o:1, dev:sdg4
[1211770.807063] disk 7, wo:0, o:1, dev:sdh4
[1211771.561387] md: unbind<sde1>
[1211771.573029] md: export_rdev(sde1)
[1211773.643637] md: unbind<sde4>
[1211773.655032] md: export_rdev(sde4)
[1211943.523218] [0 0] Detect fake interrupts.
[1211944.348292] [0 0] Fake interrupts detection finished.
[1212213.685099] [0 0] Detect fake interrupts.
[1212215.538174] [0 0] Fake interrupts detection finished.
[1212405.473559] [0 0] Detect fake interrupts.
[1212405.474540] [0 0] Fake interrupts detection finished.
[1212571.584688] [0 0] Detect fake interrupts.
[1212573.749392] [0 0] Fake interrupts detection finished.
[~] #
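
For reference, a minimal sketch of commands that could confirm whether the kernel has registered the replacement disk at all (the /dev/sde name is taken from the log above and is only an assumption; if the drive was never detected there may be no sde node, and Storage Manager then has nothing to add):

# Block devices the kernel currently knows about; a missing sde means the bay 5 disk was never detected.
cat /proc/partitions
# Most recent kernel messages, to catch (or rule out) a hot-plug event for the new disk.
dmesg | tail -n 50
# Identity of the drive in bay 5; this fails if the device node does not exist.
hdparm -i /dev/sde
# Any leftover md metadata on the data partition, if the disk already carries a partition table.
mdadm --examine /dev/sde3
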
Last edited by stephencicero on Tue Sep 17, 2019 9:49 am, edited 1 time in total.
stephencicero
New here
Posts: 3
Joined: Tue Oct 09, 2012 10:16 am

Re: not seeing new disk

Post by stephencicero »

The above output is with the original drive back in the bay. Here is the same output with a brand-new drive:

[~] # df -h
Filesystem Size Used Available Use% Mounted on
/dev/ramdisk 151.1M 136.7M 14.4M 90% /
tmpfs 64.0M 364.0k 63.6M 1% /tmp
tmpfs 494.4M 28.0k 494.4M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/sda4 371.0M 272.7M 98.2M 74% /mnt/ext
/dev/md9 509.5M 123.6M 385.8M 24% /mnt/HDA_ROOT
/dev/md0 12.6T 1.0T 11.6T 8% /share/MD0_DATA
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active (read-only) raid5 sdb3[8] sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4](F) sdd3[3] sdc3[2]
13663619200 blocks level 5, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
bitmap: 1/233 pages [4KB], 4096KB chunk

md8 : active raid1 sdb2[2](S) sdh2[8] sdg2[7](S) sdf2[6](S) sdd2[4](S) sdc2[3](S) sda2[0]
530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb4[1] sda4[0] sdh4[7] sdg4[6] sdf4[5] sdd4[3] sdc4[2]
458880 blocks [8/7] [UUUU_UUU]
bitmap: 6/57 pages [24KB], 4KB chunk

md9 : active raid1 sdb1[1] sda1[0] sdh1[7] sdg1[6] sdf1[4] sdd1[3] sdc1[2]
530048 blocks [8/7] [UUUUU_UU]
bitmap: 26/65 pages [104KB], 4KB chunk

unused devices: <none>
[~] # hdparm -i /dev/sda 2>/dev/null | grep Model
Model=WDC WD20EADS-00S2B0 , FwRev=04.05G04, SerialNo= WD-WCAVY0684149
[~] # hdparm -i /dev/sdb 2>/dev/null | grep Model
Model=WDC WD20EADS-00R6B0 , FwRev=01.00A01, SerialNo= WD-WCAVY0367253
[~] # hdparm -i /dev/sdc 2>/dev/null | grep Model
Model=WDC WD20EARS-00MVWB0 , FwRev=50.0AB50, SerialNo= WD-WMAZA0220720
[~] # hdparm -i /dev/sdd 2>/dev/null | grep Model
Model=WDC WD20EADS-00S2B0 , FwRev=04.05G04, SerialNo= WD-WCAVY0520065
[~] # dmesg
[993267.032152] sd 8:0:0:0: [sde] Add. Sense: No additional sense information
[993267.035113] sd 8:0:0:0: [sde] CDB: Read(10): 28 00 d3 2a 13 c0 00 00 18 00
[993267.038116] end_request: I/O error, dev sde, sector 3542750144
[993267.041032] md/raid:md0: read error not correctable (sector 3540629560 on sde3).
[993267.043965] raid5: some error occurred in a active device:4 of md0.
[993267.046890] md/raid:md0: read error not correctable (sector 3540629568 on sde3).
[993267.049956] raid5: some error occurred in a active device:4 of md0.
[993267.053457] md/raid:md0: read error not correctable (sector 3540629576 on sde3).
[993267.056659] raid5: some error occurred in a active device:4 of md0.
[993287.408931] sd 8:0:0:0: [sde] Unhandled sense code
[993287.409853] sd 8:0:0:0: [sde] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[993287.409853] sd 8:0:0:0: [sde] Sense Key : Medium Error [current]
[993287.409853] Info fld=0xd32a13c3
[993287.409853] sd 8:0:0:0: [sde] Add. Sense: No additional sense information
[993287.409853] sd 8:0:0:0: [sde] CDB: Read(10): 28 00 d3 2a 13 c0 00 00 18 00
[993287.409853] end_request: I/O error, dev sde, sector 3542750144
[993287.409853] md/raid:md0: read error not correctable (sector 3540629560 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993287.409853] md/raid:md0: read error not correctable (sector 3540629568 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993287.409853] md/raid:md0: read error not correctable (sector 3540629576 on sde3).
[993287.409853] raid5: some error occurred in a active device:4 of md0.
[993304.376572] md: md0: recovery done.
[993304.593764] md: recovery skipped: md0
[993305.195995] RAID conf printout:
[993305.196035] --- level:5 rd:8 wd:7
[993305.196043] disk 0, o:1, dev:sda3
[993305.196050] disk 1, o:1, dev:sdb3
[993305.196056] disk 2, o:1, dev:sdc3
[993305.196063] disk 3, o:1, dev:sdd3
[993305.196070] disk 4, o:1, dev:sde3
[993305.196077] disk 5, o:1, dev:sdf3
[993305.196083] disk 6, o:1, dev:sdg3
[993305.196088] disk 7, o:1, dev:sdh3
[993305.196092] RAID conf printout:
[993305.196097] --- level:5 rd:8 wd:7
[993305.196103] disk 0, o:1, dev:sda3
[993305.196109] disk 1, o:1, dev:sdb3
[993305.196116] disk 2, o:1, dev:sdc3
[993305.196123] disk 3, o:1, dev:sdd3
[993305.196129] disk 4, o:1, dev:sde3
[993305.196136] disk 5, o:1, dev:sdf3
[993305.196143] disk 6, o:1, dev:sdg3
[993305.196150] disk 7, o:1, dev:sdh3
[993305.196299] md: recovery of RAID array md0
[993305.200533] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[993305.204854] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[993305.209365] md: Recovering started: md0
[993305.213835] md: using 128k window, over a total of 1951945600k.
[993305.218317] md: resuming recovery of md0 from checkpoint.
[993357.051040] md: md_do_sync() got signal ... exiting
[993357.056928] md: recovery skipped: md0
[993357.155775] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[993357.155781] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[993357.155784]
[993360.183880] ext4_init_reserve_inode_table0: md0, 104246
[993360.188411] ext4_init_reserve_inode_table2: md0, 104246, 0, 0, 4096
[993360.193291] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,noacl
[1211761.803755] [0 0] Detect fake interrupts.
[1211762.881755] [0 0] Fake interrupts detection finished.
[1211764.851828] sd 8:0:0:0: [sde] Synchronizing SCSI cache
[1211764.858008] sd 8:0:0:0: [sde] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[1211765.174433] md/raid:md0: read error not correctable (sector 328697472 on sde3).
[1211765.175377] raid5: some error occurred in a active device:4 of md0.
[1211765.175377] md/raid:md0: Disk failure on sde3, disabling device.
[1211765.175377] md/raid:md0: Operation continuing on 6 devices.
[1211766.540304] md: super_written gets error=-19, uptodate=0
[1211766.541012] md/raid1:md9: Disk failure on sde1, disabling device.
[1211766.541012] md/raid1:md9: Operation continuing on 7 devices.
[1211766.622300] RAID1 conf printout:
[1211766.622311] --- wd:7 rd:8
[1211766.622318] disk 0, wo:0, o:1, dev:sda1
[1211766.622324] disk 1, wo:0, o:1, dev:sdb1
[1211766.622330] disk 2, wo:0, o:1, dev:sdc1
[1211766.622336] disk 3, wo:0, o:1, dev:sdd1
[1211766.622343] disk 4, wo:0, o:1, dev:sdf1
[1211766.622348] disk 5, wo:1, o:0, dev:sde1
[1211766.622354] disk 6, wo:0, o:1, dev:sdg1
[1211766.622360] disk 7, wo:0, o:1, dev:sdh1
[1211766.636022] RAID1 conf printout:
[1211766.636028] --- wd:7 rd:8
[1211766.636034] disk 0, wo:0, o:1, dev:sda1
[1211766.636039] disk 1, wo:0, o:1, dev:sdb1
[1211766.636044] disk 2, wo:0, o:1, dev:sdc1
[1211766.636050] disk 3, wo:0, o:1, dev:sdd1
[1211766.636055] disk 4, wo:0, o:1, dev:sdf1
[1211766.636060] disk 6, wo:0, o:1, dev:sdg1
[1211766.636066] disk 7, wo:0, o:1, dev:sdh1
[1211767.485873] md/raid1:md8: Disk failure on sde2, disabling device.
[1211767.485877] md/raid1:md8: Operation continuing on 2 devices.
[1211767.529491] RAID1 conf printout:
[1211767.529500] --- wd:2 rd:2
[1211767.529507] disk 0, wo:0, o:1, dev:sda2
[1211767.529513] disk 1, wo:0, o:1, dev:sdh2
[1211767.529518] RAID1 conf printout:
[1211767.529522] --- wd:2 rd:2
[1211767.529526] disk 0, wo:0, o:1, dev:sda2
[1211767.529532] disk 1, wo:0, o:1, dev:sdh2
[1211767.529536] RAID1 conf printout:
[1211767.529540] --- wd:2 rd:2
[1211767.529545] disk 0, wo:0, o:1, dev:sda2
[1211767.529551] disk 1, wo:0, o:1, dev:sdh2
[1211767.529555] RAID1 conf printout:
[1211767.529559] --- wd:2 rd:2
[1211767.529564] disk 0, wo:0, o:1, dev:sda2
[1211767.529569] disk 1, wo:0, o:1, dev:sdh2
[1211767.529575] RAID1 conf printout:
[1211767.529580] --- wd:2 rd:2
[1211767.529587] disk 0, wo:0, o:1, dev:sda2
[1211767.529595] disk 1, wo:0, o:1, dev:sdh2
[1211769.500324] md: unbind<sde2>
[1211769.512029] md: export_rdev(sde2)
[1211770.704279] md: super_written gets error=-19, uptodate=0
[1211770.705104] md/raid1:md13: Disk failure on sde4, disabling device.
[1211770.705104] md/raid1:md13: Operation continuing on 7 devices.
[1211770.794402] RAID1 conf printout:
[1211770.794413] --- wd:7 rd:8
[1211770.794419] disk 0, wo:0, o:1, dev:sda4
[1211770.794425] disk 1, wo:0, o:1, dev:sdb4
[1211770.794431] disk 2, wo:0, o:1, dev:sdc4
[1211770.794436] disk 3, wo:0, o:1, dev:sdd4
[1211770.794441] disk 4, wo:1, o:0, dev:sde4
[1211770.794447] disk 5, wo:0, o:1, dev:sdf4
[1211770.794452] disk 6, wo:0, o:1, dev:sdg4
[1211770.794458] disk 7, wo:0, o:1, dev:sdh4
[1211770.807019] RAID1 conf printout:
[1211770.807026] --- wd:7 rd:8
[1211770.807031] disk 0, wo:0, o:1, dev:sda4
[1211770.807036] disk 1, wo:0, o:1, dev:sdb4
[1211770.807042] disk 2, wo:0, o:1, dev:sdc4
[1211770.807047] disk 3, wo:0, o:1, dev:sdd4
[1211770.807052] disk 5, wo:0, o:1, dev:sdf4
[1211770.807057] disk 6, wo:0, o:1, dev:sdg4
[1211770.807063] disk 7, wo:0, o:1, dev:sdh4
[1211771.561387] md: unbind<sde1>
[1211771.573029] md: export_rdev(sde1)
[1211773.643637] md: unbind<sde4>
[1211773.655032] md: export_rdev(sde4)
[1211943.523218] [0 0] Detect fake interrupts.
[1211944.348292] [0 0] Fake interrupts detection finished.
[1212213.685099] [0 0] Detect fake interrupts.
[1212215.538174] [0 0] Fake interrupts detection finished.
[1212405.473559] [0 0] Detect fake interrupts.
[1212405.474540] [0 0] Fake interrupts detection finished.
[1212571.584688] [0 0] Detect fake interrupts.
[1212573.749392] [0 0] Fake interrupts detection finished.
[1214942.044214] [0 0] Detect fake interrupts.
[1214942.292275] [0 0] Fake interrupts detection finished.
[1215355.875670] rule type=2, num=0
[~] #
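
The log above shows a DID_NO_CONNECT result for sde, which suggests the bay dropped off the bus rather than the kernel seeing an empty disk. A hedged option, if the new drive never shows up as a block device, is to ask the kernel to rescan the SCSI/SATA hosts (the host number below is a placeholder; list the directory first), or simply reboot the NAS so the bay is re-probed:

# List the SCSI/SATA hosts the kernel knows about.
ls /sys/class/scsi_host
# Ask a host to rescan its ports for newly inserted devices (repeat per host; host8 is a placeholder).
echo "- - -" > /sys/class/scsi_host/host8/scan
# Then check whether a new sdX device has appeared.
cat /proc/partitions
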
MrVideo
Experience counts
Posts: 4742
Joined: Fri May 03, 2013 2:26 pm

Re: not seeing new disk

Post by MrVideo »

Please edit your posts and wrap the pasted text in code tags.

If you lose another disk while the rebuild is running, your RAID 5 is toast, as it can only tolerate one disk failure. You should have been using RAID 6. Now you will have to rebuild your NAS from scratch and restore from backup.
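
As a minimal sketch of how the state of the remaining members could be checked before deciding between a rebuild attempt and a full restore (standard mdadm and smartctl calls; smartctl may or may not be installed on this firmware):

# Full member list for the data array: which slots are active, failed, or removed.
mdadm --detail /dev/md0
# SMART health of a surviving member; repeat for each remaining disk.
smartctl -a /dev/sda
# Quick overall health verdict only.
smartctl -H /dev/sda
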
QTS MANUALS
Submit QNAP Support Ticket - QNAP Tutorials, FAQs, Downloads, Wiki - Product Support Status - Moogle's QNAP FAQ help V2
Asking a question, include the following
(Thanks to Toxic17)
QNAP md_checker nasreport (release 20210309)
===============================
Model: TS-869L -- RAM: 3G -- FW: QTS 4.1.4 Build 20150522 (used for data storage)
WD60EFRX-68L0BN1(x1)/68MYMN1(x7) Red HDDs -- RAID6: 8x6TB -- Cold spare: 1x6TB
Entware
===============================
Model: TS-451A -- RAM: 2G -- FW: QTS 4.5.2 Build 20210202 (used as a video server)
WL3000GSA6472(x3) White label NAS HDDs -- RAID5: 3x3TB
Entware -- MyKodi 17.3 (default is Kodi 16)
===============================
My 2017 Total Solar Eclipse Photos | My 2019 N. Ireland Game of Thrones tour
OneCD
Guru
Posts: 12144
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: not seeing new disk

Post by OneCD »

stephencicero wrote: Tue Sep 17, 2019 9:23 am

Code:

[~] # hdparm -i /dev/sda 2>/dev/null | grep Model
 Model=WDC WD20EADS-00S2B0                     , FwRev=04.05G04, SerialNo=     WD-WCAVY0684149
[~] # hdparm -i /dev/sdb 2>/dev/null | grep Model
 Model=WDC WD20EADS-00R6B0                     , FwRev=01.00A01, SerialNo=     WD-WCAVY0367253
[~] # hdparm -i /dev/sdc 2>/dev/null | grep Model
 Model=WDC WD20EARS-00MVWB0                    , FwRev=50.0AB50, SerialNo=     WD-WMAZA0220720
[~] # hdparm -i /dev/sdd 2>/dev/null | grep Model
 Model=WDC WD20EADS-00S2B0                     , FwRev=04.05G04, SerialNo=     WD-WCAVY0520065
Additionally, your desktop HDDs (EADS and EARS) are not recommended for NAS use. Please replace them with drives on the hardware compatibility list.
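
As a quick way to see which bays actually answer (the sda..sdh mapping is assumed from the pastes above; the original commands only queried the first four drives), the same model query can be run across all eight devices in one pass:

# Print the reported model for every bay; a bay whose disk was never detected prints nothing.
for d in a b c d e f g h; do
    echo "== /dev/sd$d =="
    hdparm -i /dev/sd$d 2>/dev/null | grep Model
done
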
