Volume in state "unmounted" - all data lost !

Questions about SNMP, Power, System, Logs, disk, & RAID.
graef
New here
Posts: 6
Joined: Fri Sep 07, 2012 1:34 pm

Volume in state "unmounted" - all data lost !

Post by graef »

My volume suddenly went into the state "unmounted" and the shares are not accessible anymore. Is there no way to mount the volume again? Is all data lost?

I just created a RAID6 volume with 16(24) TB on my QNAP TS-859Pro+ and started to copy a few TB of data to it. The NAS with its dual-core Atom CPU was under high load for about 2 days, and then the rsync copy jobs suddenly failed. The error message said the target was no longer reachable. At that point the volume and shares were still visible in the admin UI, but not accessible as file shares. After I rebooted the NAS, the volume was shown as "unmounted".

There is no functionality in the admin UI to mount the volume again. Does this mean that all data is lost?!

This has happened about 3 times during the last 2 years, always when the NAS was under high load, e.g. several copy jobs running in parallel.

Below is the output of a few SSH console commands. Maybe somebody recognizes a possible cause?

[~] # more /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sda3[0] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
md13 : active raid1 sdb4[0] sda4[7] sdh4[6] sdg4[5] sdf4[4] sde4[3] sdd4[2] sdc4[1]
458880 blocks [8/8] [UUUUUUUU]
bitmap: 1/57 pages [4KB], 4KB chunk
md9 : active raid1 sdb1[0] sda1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
530048 blocks [8/8] [UUUUUUUU]
bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>

[~] # mount /dev/md0 /share/MD0_DATA -t ext4
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

e2fsck_64 /dev/md0
(...)
Group descriptor 134061 checksum is invalid. FIXED.
Group descriptor 134062 checksum is invalid. FIXED.
Group descriptor 134063 checksum is invalid. FIXED.
Group descriptor 134064 checksum is invalid. FIXED.
Group descriptor 134065 checksum is invalid. FIXED.
/dev/md0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Error allocating block bitmap (4): Memory allocation failed
e2fsck: aborted

[~] # dmesg | tail -200
[ 137.565434] md: bind<sda2>
[ 137.571124] md/raid1:md8: active with 1 out of 1 mirrors
[ 137.575348] md8: detected capacity change from 0 to 542769152
[ 138.588622] md8: unknown partition table
[ 140.642941] Adding 530044k swap on /dev/md8. Priority:-1 extents:1 across:530044k
[ 144.189898] md: bind<sdb2>
[ 144.209310] RAID1 conf printout:
[ 144.209320] --- wd:1 rd:2
[ 144.209328] disk 0, wo:0, o:1, dev:sda2
[ 144.209335] disk 1, wo:1, o:1, dev:sdb2
[ 144.209461] md: recovery of RAID array md8
[ 144.212722] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 144.216034] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 144.219456] md: using 128k window, over a total of 530048k.
[ 146.264471] md: bind<sdc2>
[ 148.324908] md: bind<sdd2>
[ 150.456198] md: bind<sde2>
[ 152.560238] md: bind<sdf2>
[ 154.625218] md: bind<sdg2>
[ 155.537838] md: md0 stopped.
[ 155.557976] md: md0 stopped.
[ 155.716476] md: bind<sdb3>
[ 155.719787] md: bind<sdc3>
[ 155.722997] md: bind<sdd3>
[ 155.726209] md: bind<sde3>
[ 155.729369] md: bind<sdf3>
[ 155.732432] md: bind<sdg3>
[ 155.735396] md: bind<sdh3>
[ 155.738202] md: bind<sda3>
[ 155.742191] md/raid:md0: device sda3 operational as raid disk 0
[ 155.744860] md/raid:md0: device sdh3 operational as raid disk 7
[ 155.747416] md/raid:md0: device sdg3 operational as raid disk 6
[ 155.749880] md/raid:md0: device sdf3 operational as raid disk 5
[ 155.752293] md/raid:md0: device sde3 operational as raid disk 4
[ 155.754653] md/raid:md0: device sdd3 operational as raid disk 3
[ 155.756951] md/raid:md0: device sdc3 operational as raid disk 2
[ 155.759252] md/raid:md0: device sdb3 operational as raid disk 1
[ 155.781976] md/raid:md0: allocated 136320kB
[ 155.784405] md/raid:md0: raid level 6 active with 8 out of 8 devices, algorithm 2
[ 155.786938] RAID conf printout:
[ 155.786943] --- level:6 rd:8 wd:8
[ 155.786949] disk 0, o:1, dev:sda3
[ 155.786954] disk 1, o:1, dev:sdb3
[ 155.786959] disk 2, o:1, dev:sdc3
[ 155.786964] disk 3, o:1, dev:sdd3
[ 155.786969] disk 4, o:1, dev:sde3
[ 155.786973] disk 5, o:1, dev:sdf3
[ 155.786978] disk 6, o:1, dev:sdg3
[ 155.786983] disk 7, o:1, dev:sdh3
[ 155.787080] md0: detected capacity change from 0 to 17993917661184
[ 156.763991] md: md8: recovery done.
[ 156.777476] md: bind<sdh2>
[ 156.856278] RAID1 conf printout:
[ 156.856287] --- wd:2 rd:2
[ 156.856296] disk 0, wo:0, o:1, dev:sda2
[ 156.856303] disk 1, wo:0, o:1, dev:sdb2
[ 156.866801] RAID1 conf printout:
[ 156.866810] --- wd:2 rd:2
[ 156.866817] disk 0, wo:0, o:1, dev:sda2
[ 156.866822] disk 1, wo:0, o:1, dev:sdb2
[ 156.866826] RAID1 conf printout:
[ 156.866830] --- wd:2 rd:2
[ 156.866834] disk 0, wo:0, o:1, dev:sda2
[ 156.866839] disk 1, wo:0, o:1, dev:sdb2
[ 156.866843] RAID1 conf printout:
[ 156.866846] --- wd:2 rd:2
[ 156.866851] disk 0, wo:0, o:1, dev:sda2
[ 156.866855] disk 1, wo:0, o:1, dev:sdb2
[ 156.866859] RAID1 conf printout:
[ 156.866863] --- wd:2 rd:2
[ 156.866867] disk 0, wo:0, o:1, dev:sda2
[ 156.866872] disk 1, wo:0, o:1, dev:sdb2
[ 156.866876] RAID1 conf printout:
[ 156.866879] --- wd:2 rd:2
[ 156.866884] disk 0, wo:0, o:1, dev:sda2
[ 156.866888] disk 1, wo:0, o:1, dev:sdb2
[ 156.866892] RAID1 conf printout:
[ 156.866896] --- wd:2 rd:2
[ 156.866900] disk 0, wo:0, o:1, dev:sda2
[ 156.866905] disk 1, wo:0, o:1, dev:sdb2
[ 157.104350] md0: unknown partition table
[ 161.407199] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[ 161.407203] Contact linux-ext4@vger.kernel.org if you think we should keep it.
[ 161.407206]
[ 161.876188] EXT4-fs (md0): ext4_check_descriptors: Checksum for group 48512 failed (61806!=50672)
[ 161.878432] EXT4-fs (md0): group descriptors corrupted!
(..)
[ 242.196178] Loading iSCSI transport class v2.0-871.
[ 242.215418] iscsi: registered transport (tcp)
[ 242.244271] iscsid (8167): /proc/8167/oom_adj is deprecated, please use /proc/8167/oom_score_adj instead.
[ 1016.259972] EXT4-fs (md0): ext4_check_descriptors: Checksum for group 48512 failed (61806!=50672)
[ 1016.262518] EXT4-fs (md0): group descriptors corrupted!
[ 1033.220749] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 1064.405138] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 5706.867910] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)

[~] # /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock
dumpe2fs 1.41.4 (27-Jan-2009)
/usr/local/sbin/dumpe2fs: The ext2 superblock is corrupt while trying to open /dev/md0
Couldn't find valid filesystem superblock.
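
For reference, ext4 keeps backup copies of the superblock, so dumpe2fs and e2fsck can be pointed at one of those when the primary superblock is unreadable. A minimal sketch follows; the block size 4096 and backup location 32768 are the usual defaults for a large ext4 volume, assumed here rather than read from this array, and mke2fs is assumed to be on the PATH. With -n, mke2fs only prints what it would do and does not write anything:

[~] # mke2fs -n -b 4096 /dev/md0                                   # -n: print only, lists where backup superblocks would sit
[~] # dumpe2fs -o superblock=32768 -o blocksize=4096 /dev/md0 | head   # read descriptors via a backup superblock
[~] # e2fsck_64 -b 32768 -B 4096 /dev/md0                          # run the check from a backup superblock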

---
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: Volume in state "unmounted" - all data lost !

Post by pwilson »

graef wrote: My volume suddenly went into the state "unmounted" and the shares are not accessible anymore. Is there no way to mount the volume again? Is all data lost?
[... full post and console output snipped; see the original post above ...]
---
QTS Firmware Version/Build numbers?
Drive Models involved?

:roll: :roll: :roll: :roll:

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs

Please review: When you're asking a question, please include the following.
graef
New here
Posts: 6
Joined: Fri Sep 07, 2012 1:34 pm

Re: Volume in state "unmounted" - all data lost !

Post by graef »

QNAP TS-859 Pro+
Firmware 4.0.7
8x SEAGATE ST33000650NS 3TB


Hello Mr. Wilson,
thank you for the quick initial response.
Cheers, Graef
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: Volume in state "unmounted" - all data lost !

Post by pwilson »

graef wrote:QNAP TS-859 Pro+
Firmware 4.0.7
8x SEAGATE ST33000650NS 3TB


Hello Mr. Wilson,
thank you for the quick initial response.
Cheers, Graef
Upgrading to the current firmware would give you a 64-bit kernel etc., while I believe your QTS v4.0.7 still runs only a 32-bit kernel (a quick way to check which kernel is currently running is sketched at the end of this post). Your drives are definitely listed as "compatible", so I don't understand these log entries:

Code: Select all

[ 1033.220749] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 1064.405138] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 5706.867910] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
TS-859Pro+ Recommended HDD List wrote: Seagate - ST33000650NS Compatible TS-859 Pro+
  • (3TB & 4TB HDDs)
    Not applicable to TS-509 Pro. TS-639 Pro does not support >16TB disk volume.
I would recommend submitting a ticket with the QNAP Helpdesk for further assistance.
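
For reference, a quick way to confirm over SSH which kernel the NAS is actually running (the outputs noted in the comments are illustrative, not taken from this box):

[~] # uname -m        # i686 => 32-bit kernel, x86_64 => 64-bit kernel
[~] # uname -r        # kernel release shipped with the firmware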

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs

Please review: When you're asking a question, please include the following.
graef
New here
Posts: 6
Joined: Fri Sep 07, 2012 1:34 pm

Re: Volume in state "unmounted" - all data lost !

Post by graef »

Hello Mr. Wilson,

Yes, I have already opened a support ticket, but the response so far is not satisfying: only off-topic statements, or complaints about me opening two tickets for two similar but different issues. The answers from support have not convinced me yet.

Do you think the 64-bit kernel in 4.1 can handle high load more efficiently, so that the software RAID does not collapse?

By the way, many IT people consider software RAID risky in terms of stability, because problems at the software level can affect data consistency. Are there any QNAP NAS models that run RAID6 on a hardware controller? I know that a high-performance hardware RAID6 controller is an expensive component.

Cheers,
Graef
pwilson
Guru
Posts: 22533
Joined: Fri Mar 06, 2009 11:20 am
Location: Victoria, BC, Canada (UTC-08:00)

Re: Volume in state "unmounted" - all data lost !

Post by pwilson »

graef wrote:Hello Mr. Wilson,

Yes, I have already opened a support ticket, but the response so far is not satisfying: only off-topic statements, or complaints about me opening two tickets for two similar but different issues. The answers from support have not convinced me yet.

Do you think the 64-bit kernel in 4.1 can handle high load more efficiently, so that the software RAID does not collapse?

By the way, many IT people consider software RAID risky in terms of stability, because problems at the software level can affect data consistency. Are there any QNAP NAS models that run RAID6 on a hardware controller? I know that a high-performance hardware RAID6 controller is an expensive component.

Cheers,
Graef
A 64-bit kernel would permit better RAM management, so it would likely help. However, that does not address the error messages from your "dmesg" output that I quoted in my last response, so I again urge you to submit a ticket with the QNAP Helpdesk for further assistance.

You should not attempt to upgrade the Firmware until the "unmounted" RAID array issue is resolved. Please submit a ticket.

Patrick M. Wilson
Victoria, BC Canada
QNAP TS-470 Pro w/ 4 * Western Digital WD30EFRX WD Reds (RAID5) - - Single 8.1TB Storage Pool FW: QTS 4.2.0 Build 20151023 - Kali Linux v1.06 (64bit)
Forums: View My Profile - Search My Posts - View My Photo - View My Location - Top Community Posters
QNAP: Turbo NAS User Manual - QNAP Wiki - QNAP Tutorials - QNAP FAQs

Please review: When you're asking a question, please include the following.
graef
New here
Posts: 6
Joined: Fri Sep 07, 2012 1:34 pm

Re: Volume in state "unmounted" - all data lost !

Post by graef »

The Helpdesk failed. They provided some old, weird web documents that did not lead to a recovery of the data. All data is lost.

1. Recreate a new empty volume.

2. Buy a stronger, high-end QNAP with Storage Pool technology and hope that the same issue does not reoccur there.
reinob

Re: Volume in state "unmounted" - all data lost !

Post by reinob »

pwilson wrote: Upgrading to the current firmware would give you a 64-bit kernel etc., while I believe your QTS v4.0.7 still runs only a 32-bit kernel. Your drives are definitely listed as "compatible", so I don't understand these log entries:

Code: Select all

[ 1033.220749] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 1064.405138] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
[ 5706.867910] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (2c0)
That just means that ext3 cannot mount the volume, because it actually uses ext4; the ext4 mount, in turn, fails because of the corruption.

I guess the OP needs to find a way to run e2fsck without running out of RAM. This is actually a really bad situation.
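
One common way to get a very large e2fsck run through on a low-RAM box is to let it spill its in-memory tables to disk via /etc/e2fsck.conf instead of holding everything in RAM. A sketch (the scratch directory path is only an example; it must point at a healthy filesystem with plenty of free space, e.g. an external USB disk, never at the broken volume itself):

[~] # mkdir -p /share/external/e2fsck_scratch    # example mount point of an external disk
[~] # cat > /etc/e2fsck.conf << EOF
[scratch_files]
directory = /share/external/e2fsck_scratch
EOF
[~] # e2fsck_64 -f /dev/md0

Enabling additional swap on an external disk can also help, but the scratch_files approach usually scales better on a filesystem this size.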
