TS-212 disk2 in raid 1 replaced after failure won't rebuild

jjakob

TS-212 disk2 in raid 1 replaced after failure won't rebuild

Post by jjakob » Fri Feb 12, 2016 7:28 am

The issue: the second drive in my RAID 1 array failed and the array is running in degraded mode. The replacement second drive shows as unmounted and will not mount on its own or format.

Can anyone help?

The original drives were:
disk1: Seagate ST3000DM001-1CH1CC24 - 2794.52GB
disk2: Seagate ST3000DM001-1CH1CC24 - 2794.52GB

disk2 failed, so I swapped it for: WDC WD3000F9YZ-09N2001.0 - 2794.52GB

I tried the following commands. It always gets stuck saying
disk 2 is busy, so it can't be added to the RAID 1 array.

; stop all services: this can speed up the raid rebuild by 10 to 20x
[/] # /etc/init.d/services.sh stop

Stop qpkg service: Shutting down Download Station services: ..
[1/3] Shutting down IceStation server...
[2/3] Shutting down Ices0...
[3/3] Shutting down Icecast...
Disable Java Runtime Environment...
.
Stop service: vpn_openvpn.sh vpn_pptp.sh ldap_server.sh antivirus.sh iso_mount.sh qbox.sh qsyncman.sh rsyslog.sh snmp lunportman.sh iscsitrgt.sh twonkymedia.sh
init_iTune.sh ImRd.sh crond.sh nvrd.sh StartMediaService.sh mariadb.sh btd.sh mysqld.sh recycled.sh Qthttpd.sh atalk.sh nfs ftp.sh smb.sh versiond.sh .
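On stock Linux md, rebuild throttling is also controlled by two kernel sysctls; raising the floor is a common complement to stopping services. A sketch, assuming the QNAP kernel exposes the standard md knobs (values in KB/s):

```shell
# Raise md's minimum resync speed so the rebuild is not throttled.
# Guarded so it is a no-op on systems without the md sysctls.
if [ -w /proc/sys/dev/raid/speed_limit_min ]; then
  echo 50000 > /proc/sys/dev/raid/speed_limit_min
  cat /proc/sys/dev/raid/speed_limit_min   # confirm the new floor
else
  echo "md speed_limit sysctls not available"
fi
```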

; check volumes in the array
[/] # cat /proc/mdstat

Personalities : [raid1] [linear] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sdb3[2] sda3[3]
2928697556 blocks super 1.0 [2/2] [UU]

md2 : active raid1 sdb2[0] sda2[2]
530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb4[1] sda4[0]
458880 blocks [2/2] [UU]
bitmap: 1/57 pages [4KB], 4KB chunk

md9 : active raid1 sdb1[1] sda1[0]
530048 blocks [2/2] [UU]
bitmap: 1/65 pages [4KB], 4KB chunk

unused devices: <none>
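For reference, a degraded two-member array shows `[2/1] [U_]` (or `[_U]`) in place of `[2/2] [UU]`; note the output above already lists sdb3 as an active member of every array. A quick scripted check for a missing member (a sketch; the pattern just looks for the underscore in the member map):

```shell
# Print any mdstat status line whose member map contains "_" (a missing
# member). On this NAS every array shows [UU], so nothing should match.
degraded_lines() { grep -E '\[[0-9]+/[0-9]+\] \[[U_]*_[U_]*\]' ; }

if [ -r /proc/mdstat ]; then
  degraded_lines < /proc/mdstat || echo "no degraded arrays"
else
  echo "/proc/mdstat not available"
fi
```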


; check status of raid 1 array and status of drives
[/] # mdadm --detail /dev/md0

/dev/md0:
Version : 01.00.03
Creation Time : Tue Jan 17 21:51:21 2012
Raid Level : raid1
Array Size : 2928697556 (2793.02 GiB 2998.99 GB)
Used Dev Size : 2928697556 (2793.02 GiB 2998.99 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Feb 11 17:59:59 2016
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : 0
UUID : 875cbf9d:349feb8f:07d6cdc0:7bb19a8b
Events : 275569409

Number Major Minor RaidDevice State
3 8 3 0 active sync /dev/sda3
2 8 19 1 active sync /dev/sdb3


[/] # mdadm --examine /dev/sd*3

/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 875cbf9d:349feb8f:07d6cdc0:7bb19a8b
Name : 0
Creation Time : Tue Jan 17 21:51:21 2012
Raid Level : raid1
Raid Devices : 2

Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 5857395112 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : active
Device UUID : 9b21a64d:2d25af11:d41525cf:795c4abb

Update Time : Thu Feb 11 17:59:59 2016
Checksum : d9e6f361 - correct
Events : 275569409


Array Slot : 3 (failed, failed, 1, 0, failed, failed, ... all remaining slots "failed")
Array State : Uu 382 failed
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 875cbf9d:349feb8f:07d6cdc0:7bb19a8b
Name : 0
Creation Time : Tue Jan 17 21:51:21 2012
Raid Level : raid1
Raid Devices : 2

Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 5857395112 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : active
Device UUID : 61e64e3c:815d6e09:324cc86a:1e03438e

Update Time : Thu Feb 11 17:59:59 2016
Checksum : 2eeacd7d - correct
Events : 275569409


Array Slot : 2 (failed, failed, 1, 0, failed, failed, ... all remaining slots "failed")
Array State : uU 382 failed
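Both superblocks above carry the same Array UUID and the same Events count (275569409), so md considers the two members in sync, which matches the [UU] state in mdstat. A small helper to compare Events counters (a sketch, shown here against the text captured above rather than a live device):

```shell
# Extract the Events counter from `mdadm --examine` output; members whose
# counts match are in sync, while a lower count marks the stale member.
events_of() { awk -F' : ' '/^ *Events/ { gsub(/ /, "", $2); print $2 }' ; }

# Example against the superblock text captured above:
printf '         Events : 275569409\n' | events_of   # prints 275569409
# On a live system: mdadm --examine /dev/sda3 | events_of
```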


; check the partitions
[/] # fdisk -l

Disk /dev/mtdblock0: 0 MB, 524288 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock0 doesn't contain a valid partition table

Disk /dev/mtdblock1: 2 MB, 2097152 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock1 doesn't contain a valid partition table

Disk /dev/mtdblock2: 9 MB, 9437184 bytes
255 heads, 63 sectors/track, 1 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock2 doesn't contain a valid partition table

Disk /dev/mtdblock3: 3 MB, 3145728 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock3 doesn't contain a valid partition table

Disk /dev/mtdblock4: 0 MB, 262144 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock4 doesn't contain a valid partition table

Disk /dev/mtdblock5: 1 MB, 1310720 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock5 doesn't contain a valid partition table
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sda: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/sda4: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/sda4 doesn't contain a valid partition table
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdb: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 267350 2147483647+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md9 doesn't contain a valid partition table

Disk /dev/md2: 542 MB, 542851072 bytes
2 heads, 4 sectors/track, 132532 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md0: 0 MB, 0 bytes
2 heads, 4 sectors/track, 0 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table


; unmount the second drive (DRIVE ALREADY NOT MOUNTED)
[/] # umount /dev/sdb3

umount: /dev/sdb3: not mounted


; add disk 2 into the raid config (CAN'T ADD RESOURCE BUSY)
[/] # mdadm /dev/md0 --add /dev/sdb3

mdadm: Cannot open /dev/sdb3: Device or resource busy
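Note that the mdstat output above already lists sdb3[2] as an active member of md0, which is exactly what produces "Device or resource busy" on `--add`: a partition cannot be added to an array it already belongs to. If the member really is stale, the usual mdadm sequence is to fail and remove it first. A sketch, guarded so it does nothing unless the devices exist; this is destructive on the wrong device, so verify against /proc/mdstat before running it:

```shell
# Re-seat the member: fail it, remove it, then add it back so md starts a
# full resync. No-op unless both block devices are actually present.
dev=/dev/sdb3
if [ -b /dev/md0 ] && [ -b "$dev" ]; then
  mdadm /dev/md0 --fail "$dev"     # mark the member faulty
  mdadm /dev/md0 --remove "$dev"   # detach it from the array
  mdadm /dev/md0 --add "$dev"      # re-add; the rebuild starts automatically
  cat /proc/mdstat                 # watch resync progress
else
  echo "md0 or $dev not present; nothing done"
fi
```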


; mount the raid system (ALREADY MOUNTED)
[/] # mount /dev/md0 /share/MD0_DATA -t ext4

mount: /dev/md0 already mounted or /share/MD0_DATA busy
mount: according to mtab, /dev/md0 is already mounted on /share/MD0_DATA



; run a filesystem check on the raid 1 volume (ERROR)
[/] # e2fsck -b 32768 /dev/md0

e2fsck 1.42.6 (21-Sep-2012)
/dev/md0 is mounted.
e2fsck: Cannot continue, aborting.
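The abort is expected: e2fsck refuses to check a mounted filesystem. The volume has to be unmounted first. A sketch using the paths from this session, guarded via /etc/mtab so it skips when the share is not actually mounted:

```shell
# Unmount the data volume, run a full check, then remount it.
if grep -qs '/share/MD0_DATA' /etc/mtab; then
  umount /share/MD0_DATA
  e2fsck -f /dev/md0                       # full filesystem check
  mount -t ext4 /dev/md0 /share/MD0_DATA   # remount when clean
else
  echo "/share/MD0_DATA not mounted; skipping"
fi
```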

kherr4377

Re: TS-212 disk2 in raid 1 replaced after failure won't rebuild

Post by kherr4377 » Fri Feb 12, 2016 12:07 pm

Just looking at your drive make/model, it could be the drives themselves. You have the dreaded Seagate DM drives that hundreds of users here have complained about. These are desktop drives, NOT NAS-grade drives, and they are not compatible. You should use drives that are on the compatibility list, such as the Seagate NAS, WD Red NAS, or HGST NAS drives.

MrVideo

Re: TS-212 disk2 in raid 1 replaced after failure won't rebuild

Post by MrVideo » Fri Feb 12, 2016 1:04 pm

In other words, back up your data if it isn't already backed up, and then replace both drives. It means starting from scratch with your NAS.
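The advice above boils down to: copy everything off, then rebuild from scratch. A minimal sketch of the copy step, assuming an external USB disk the NAS has mounted under /share/external (that destination path is an assumption; adjust to whatever actually appears under /share):

```shell
# One-way archive copy of the data share to an external disk.
SRC=/share/MD0_DATA/
DST=/share/external/backup/
if [ -d "$SRC" ] && [ -d "$DST" ]; then
  rsync -a --progress "$SRC" "$DST"   # -a preserves perms, times, links
else
  echo "adjust SRC/DST: $SRC or $DST not present"
fi
```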
