RAID 5: Cannot start dirty degraded array

Questions about SNMP, Power, System, Logs, Disk, & RAID.
mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

RAID 5: Cannot start dirty degraded array

Post by mollet »

Hi there,

Setup: TS-859Pro+
8 x Seagate HDD
2 x Raid 5 (md0: 0-3 and md1: 4-7)
Usage: Just for Backups of Production environment and iso storage for VMs.


The problem: last night a HDD in the md0 RAID 5 failed. It was drive 3.
No problem for a RAID 5; today I plugged in a fresh drive (the same model that was inside, but brand new).
The system started to rebuild, but after a few minutes it became very unstable. I have changed 5 or 6 drives on this NAS over the last 5 years and it has still been fine for backup use.
This time the system became very unstable: network shares sometimes worked and sometimes not, and a few hours later the network connection to the NAS was gone; no login was possible through the website either.
I tried to use the hardware display on the NAS to reboot the QNAP, but it was not responding either, so I did a shutdown by holding down the button until the lights went out.
After the reboot MD1 is there (it's the more important volume) but MD0 is gone. dmesg says:

[ 1897.660472] md/raid:md0: not clean -- starting background reconstruction
[ 1897.662887] md/raid:md0: device sda3 operational as raid disk 0
[ 1897.665319] md/raid:md0: device sdd3 operational as raid disk 3
[ 1897.667721] md/raid:md0: device sdb3 operational as raid disk 1
[ 1897.682725] md/raid:md0: allocated 68992kB
[ 1897.685293] md/raid:md0: cannot start dirty degraded array.
[ 1897.687737] RAID conf printout:
[ 1897.687744] --- level:5 rd:4 wd:3
[ 1897.687751] disk 0, o:1, dev:sda3
[ 1897.687757] disk 1, o:1, dev:sdb3
[ 1897.687764] disk 2, o:1, dev:sdc3
[ 1897.687771] disk 3, o:1, dev:sdd3
[ 1897.696835] md/raid:md0: failed to run raid set.

All hard disks are marked as good in the storage overview of the web interface.

So the pricey question: what can I do? It seems like I need to add the fresh drive to md0 manually and then start a reconstruction over SSH.

Thank you

PS: All these drive failures and problems started with the update to 4.0.5; that was a horrible idea ;)
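For reference, the manual recovery the poster is describing is usually done with mdadm's force-assemble: bring the surviving members up first, then add the replacement. A minimal sketch, assuming the device names from the dmesg output above; an unclean RAID 5 carries a real risk of inconsistent stripes, so anything irreplaceable should be copied off or imaged first:

Code:

# Stop any half-assembled state, then force-assemble the three
# surviving members; --force rewrites the superblocks to clear
# the dirty flag on a degraded array.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdd3

# If md0 comes up degraded ([4/3] in /proc/mdstat), add the
# replacement's data partition (sdc3, not the whole disk) to
# start the rebuild.
mdadm /dev/md0 --add /dev/sdc3
cat /proc/mdstat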
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: RAID 5: Cannot start dirty degraded array

Post by pwilson »

mollet wrote: [...]
Lousy information gathering/reporting.

Log in to your NAS via SSH and execute the following commands:

Code:

echo "Firmware:    $(getcfg system version) Build $(getcfg system 'Build Number')"
/sbin/hdparm -i /dev/sd[a-h] 2>/dev/null | grep Model
df -h | grep -v qpkg
mount | grep -v qpkg
cat /proc/mdstat
#done

Please cut & paste the output of these commands back into this message thread.

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

Yes sir ;)

[~] # echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
Firmware: 4.0.5 Build 20140117
[~] # /sbin/hdparm -i /dev/sd[a-h] 2>/dev/null | grep Model
Model=ST3000DM001-9YN166 , FwRev=CC4C , SerialNo= S1F09CRV
Model=ST3000DM001-1CH166 , FwRev=CC47 , SerialNo= W1F45JK3
Model=ST3000DM001-1CH166 , FwRev=CC47 , SerialNo= W1F45QF9
Model=ST3000DM001-9YN166 , FwRev=CC4C , SerialNo= S1F08Z2L
[~] # df -h | grep -v qpkg
Filesystem Size Used Available Use% Mounted on
/dev/ram0 139.5M 120.3M 19.2M 86% /
tmpfs 64.0M 464.0k 63.5M 1% /tmp
/dev/sda4 364.2M 241.7M 122.5M 66% /mnt/ext
/dev/md9 509.5M 130.5M 378.9M 26% /mnt/HDA_ROOT
/dev/md1 8.1T 6.8T 1.3T 84% /share/MD1_DATA
tmpfs 32.0M 0 32.0M 0% /.eaccelerator.tmp
[~] # mount | grep -v qpkg
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/md1 on /share/MD1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
nfsd on /proc/fs/nfsd type nfsd (rw)
none on /sys/kernel/config type configfs (rw)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sde3[4] sdh3[5] sdg3[2] sdf3[6]
8786092608 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdc4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdh1[7] sdg1[6] sdf1[5] sdd1[4] sde1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
[~] # #done
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: RAID 5: Cannot start dirty degraded array

Post by pwilson »

mollet wrote: [...]
Looks like HDD3 (/dev/sdc) has failed. (Not surprising when using incompatible drives like the Seagate ST3000DM001.)
TS-859 Pro+ Incompatible Drive List wrote: Seagate ST3000DM001 (3TB & 4TB HDDs): Incompatible with the TS-859 Pro+.
  • Not applicable to the TS-509 Pro. The TS-639 Pro does not support disk volumes larger than 16TB.
  • This hard drive series initially passed our lab compatibility test and was included in the recommended HDD list. However, during the last few months we have received an overwhelming number of support requests regarding this series. The high failure rate of this series has raised concerns over risks of data loss, so we have no choice but to remove it from the recommended HDD list. For users who have already installed these HDDs in their Turbo NAS, QNAP will continue to provide technical support as requested.
  • The batch number 1CH166 has passed the hard drive compatibility test.
Hot-Swap HDD3 with a recommended replacement.

Use:

Code:

/sbin/hdparm -i /dev/sdc 2>/dev/null | grep Model
if you need to verify the serial number of the failed drive.
HDDs are labeled 1-8 rather than 0-7, so make sure you pull the correct drive: you want the third one from the left. The sketch below can help match device names to physical drives.
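A quick way to do that matching is to list every disk's model and serial in slot order; a small sketch using the same hdparm call as above (on this model /dev/sda normally corresponds to slot 1):

Code:

# Print model/serial for each disk; /dev/sda is slot 1 (leftmost),
# /dev/sdc is slot 3, and so on. A drive that has failed completely
# may print nothing, which also identifies the slot.
for d in /dev/sd[a-h]; do
  echo "$d: $(/sbin/hdparm -i $d 2>/dev/null | grep Model)"
done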

mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

OK, thank you, Major Wilson. What should I do after the replacement? I'll get a drive today, no problem.
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: RAID 5: Cannot start dirty degraded array

Post by pwilson »

mollet wrote: What should I do after the replacement? I'll get a drive today, no problem.
Hot-Swap it as already suggested, per QNAP Tutorial: Hot-swapping the hard drives when the RAID crashes.

mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

[~] # echo "Firmware: $(getcfg system version) Build $(getcfg system 'Build Number')"
Firmware: 4.0.7 Build 20140412
[~] # /sbin/hdparm -i /dev/sd[a-h] 2>/dev/null | grep Model
Model=ST3000DM001-9YN166 , FwRev=CC4C , SerialNo= S1F09CRV
Model=ST3000DM001-1CH166 , FwRev=CC47 , SerialNo= W1F45JK3
Model=WDC WD30EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WMC4N2250028
Model=ST3000DM001-9YN166 , FwRev=CC4C , SerialNo= S1F08Z2L
[~] # df -h | grep -v qpkg
Filesystem Size Used Available Use% Mounted on
/dev/ram0 139.5M 121.7M 17.7M 87% /
tmpfs 64.0M 444.0k 63.6M 1% /tmp
/dev/sda4 364.2M 242.3M 121.9M 67% /mnt/ext
/dev/md9 509.5M 131.3M 378.2M 26% /mnt/HDA_ROOT
/dev/md1 8.1T 7.0T 1.1T 87% /share/MD1_DATA
tmpfs 32.0M 0 32.0M 0% /.eaccelerator.tmp
[~] # mount | grep -v qpkg
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
/dev/md1 on /share/MD1_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv0,user_xattr,data=ordered,delalloc,acl)
nfsd on /proc/fs/nfsd type nfsd (rw)
none on /sys/kernel/config type configfs (rw)
tmpfs on /.eaccelerator.tmp type tmpfs (rw,size=32M)
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sde3[4] sdh3[5] sdg3[2] sdf3[6]
8786092608 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdc4[7] sdh4[6] sdg4[5] sdf4[4] sde4[3] sdd4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sde1[7] sdf1[6] sdg1[5] sdh1[4] sdd1[3] sdc1[2] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
[~] # #done


And now?

I tried a few things, but nothing worked...

mdadm /dev/md0 -a /dev/sdc
mdadm: cannot get array info for /dev/md0

dmesg
[ 251.075301] md/raid:md0: not clean -- starting background reconstruction
[ 251.077721] md/raid:md0: device sda3 operational as raid disk 0
[ 251.080205] md/raid:md0: device sdd3 operational as raid disk 3
[ 251.083588] md/raid:md0: device sdb3 operational as raid disk 1
[ 251.103530] md/raid:md0: allocated 68992kB
[ 251.106053] md/raid:md0: cannot start dirty degraded array.
[ 251.108627] RAID conf printout:
[ 251.108634] --- level:5 rd:4 wd:3
[ 251.108642] disk 0, o:1, dev:sda3
[ 251.108648] disk 1, o:1, dev:sdb3
[ 251.108655] disk 3, o:1, dev:sdd3
[ 251.118253] md/raid:md0: failed to run raid set.
[ 251.120612] md: pers->run() failed ...
[ 251.174247] md: md0 stopped.
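The add fails because mdadm can only add a disk to a running array, and md0 is stopped; "cannot get array info" is the giveaway. Before forcing an assemble, the member superblocks are worth comparing. A read-only sketch, assuming the member partitions named in the dmesg output:

Code:

# --examine reads the md superblock directly from each member.
# Closely matching Events counts across the three survivors mean
# a forced assemble has a good chance of starting the array degraded.
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdd3 | grep -E 'Events|State'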
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: RAID 5: Cannot start dirty degraded array

Post by pwilson »

mollet wrote: [...]
What make/model of drive did you hot-swap into the HDD3 slot to replace the failed drive? A WD30EFRX ("NASware") model might be a good choice.

mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

Model=WDC WD30EFRX-68EUZN0 , FwRev=80.00A80, SerialNo= WD-WMC4N2250028
It is a NAS drive...

Sadly, I don't think this help is going in the right direction. I know you want to help, and I appreciate that, but I think you can't...
A RAID 5 should work even in degraded mode without the drive, but my QNAP does not show md0 under cat /proc/mdstat:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sde3[4] sdh3[5] sdg3[2] sdf3[6]
8786092608 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdh4[6] sdg4[5] sdf4[4] sde4[3] sdd4[2] sdb4[1]
458880 blocks [8/7] [UUUUUUU_]
bitmap: 47/57 pages [188KB], 4KB chunk

md9 : active raid1 sda1[0] sde1[7] sdf1[6] sdg1[5] sdh1[4] sdd1[3] sdb1[1]
530048 blocks [8/7] [UU_UUUUU]
bitmap: 39/65 pages [156KB], 4KB chunk

unused devices: <none>

I have this in dmesg:

[ 135.011679] md/raid:md0: not clean -- starting background reconstruction
[ 135.013959] md/raid:md0: device sda3 operational as raid disk 0
[ 135.016251] md/raid:md0: device sdd3 operational as raid disk 3
[ 135.018533] md/raid:md0: device sdb3 operational as raid disk 1
[ 135.032916] md/raid:md0: allocated 68992kB
[ 135.035253] md/raid:md0: cannot start dirty degraded array.
[ 135.037702] RAID conf printout:
[ 135.037708] --- level:5 rd:4 wd:3
[ 135.037715] disk 0, o:1, dev:sda3
[ 135.037722] disk 1, o:1, dev:sdb3
[ 135.037729] disk 3, o:1, dev:sdd3
[ 135.046844] md/raid:md0: failed to run raid set.
[ 135.049082] md: pers->run() failed ...
[ 135.070091] md: md0 stopped.


I think I need someone who can tell me how to reconstruct the RAID 5...
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: RAID 5: Cannot start dirty degraded array

Post by pwilson »

mollet wrote: [...]
Please provide output for:

Code:

cat /proc/mdstat
fdisk -lu /dev/sdc
#done 


mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : inactive sda3[0] sdd3[3] sdb3[4]
8786092668 blocks super 1.0

md1 : active raid5 sde3[4] sdh3[5] sdg3[2] sdf3[6]
8786092608 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md8 : active raid1 sdc2[2](S) sdh2[3](S) sdg2[4](S) sdf2[5](S) sde2[6](S) sdd2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]

md13 : active raid1 sdc4[7] sda4[0] sdh4[6] sdg4[5] sdf4[4] sde4[3] sdd4[2] sdb4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sdc1[2] sda1[0] sde1[7] sdf1[6] sdg1[5] sdh1[4] sdd1[3] sdb1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 2/65 pages [8KB], 4KB chunk

unused devices: <none>
[~] # fdisk -lu /dev/sdc

Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 4294967295 2147483647+ ee EFI GPT
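Note that fdisk only shows the protective MBR entry (type ee) on a GPT disk, so this output says nothing about whether the replacement received the QNAP partition layout. parted reads the GPT itself; a small check, assuming parted is available on the firmware:

Code:

# List the actual GPT partitions on the replacement drive. With the
# standard QNAP layout, partition 3 is the large data member that
# would join md0.
parted /dev/sdc print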
mollet
Starting out
Posts: 21
Joined: Wed Aug 28, 2013 2:08 am

Re: RAID 5: Cannot start dirty degraded array

Post by mollet »

[~] # mdadm --assemble --scan -fv /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdc4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdc3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdc2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdc1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdc is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/fbdisk0 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/md1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/md8 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sda4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/md9 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx6 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx5 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdx is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdh4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdh3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdh2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdh1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdh is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdg4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdg3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdg2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdg1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdg is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sde4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sde3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sde2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sde1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sde is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdf4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdf3 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdf2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdf1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdf is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdareal4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sda2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sda1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sda is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdb4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdb2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdb1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdb is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdd4 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdd2 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdd1 is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sdd is not one of /dev/sda3,/dev/sdb3,/dev/sdd3
mdadm: /dev/sda3 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd3 is identified as a member of /dev/md0, slot 3.
mdadm: added /dev/sdb3 to /dev/md0 as 1
mdadm: no uptodate device for slot 2 of /dev/md0
mdadm: added /dev/sdd3 to /dev/md0 as 3
mdadm: added /dev/sda3 to /dev/md0 as 0
mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
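At this point the forced assemble attaches all three surviving members, but the kernel still refuses to run the dirty degraded set. Two further levers exist in stock md/mdadm, both risky on an array that died mid-write (the parity consistency check is skipped, so stripes can be silently inconsistent; image the disks first if the data matters). A hedged sketch:

Code:

# Option 1: kernel boot parameter that allows md to start dirty
# degraded arrays at assembly time (takes effect after a reboot):
#   md-mod.start_dirty_degraded=1

# Option 2: with md0 inactive but its members attached, as in the
# output above, mark the array clean via sysfs and run it. Some
# kernels reject the write, but it is worth trying before anything
# destructive such as recreating the array with --assume-clean.
echo clean > /sys/block/md0/md/array_state
mdadm --run /dev/md0
cat /proc/mdstat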