QNAP support says my RAID10 isn't recoverable - What to do?

wojcieh
New here
Posts: 5
Joined: Sat Jun 25, 2016 2:25 pm

QNAP support says my RAID10 isn't recoverable - What to do?

Post by wojcieh »

Hey folks.

I have a TS-873 with 4x12TB and 4x10TB drives, each set configured as RAID10. I also had a RAID0 SSD cache enabled for both volumes. During a recent firmware upgrade the SSD cache broke: I wasn't able to delete, remove, or perform any other operation on it. A filesystem check was forced on Volume2 and completed without issues. After shutting the NAS down and pulling the power cable, I was able to delete the SSD cache and detach it from both volumes. Whatever the root cause, I now have filesystem problems on Volume1, which is of course the one holding my most important data. I have backups of my photos and some other data, partly in the cloud and partly on an 8TiB HDD, but ~7TiB might still be lost if I can't recover the volume. I also had two weeks of snapshots; those are all gone.

All my drives have good SMART stats. No warnings whatsoever.
Support and I (from the GUI) have run the filesystem check many times, and it always gets stuck on the same inode. Once it ran for about 10 days, still stuck on that inode. The speed of the check was a joke: 1 KB/s. I wasn't hitting RAM limits, but I added extra swap on a USB stick anyway. No speed increase at all.
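
For reference, the extra swap was set up roughly like this (/dev/sdk1 is just a placeholder; check dmesg for the USB stick's actual device name):

Code: Select all

# turn the stick's partition into swap space (wipes it)
mkswap /dev/sdk1
# enable it alongside the existing swap
swapon /dev/sdk1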

Now e2fsck_64 runs with the following options:

Code: Select all

e2fsck_64 -C 0 -fy -N /dev/mapper/cachedev1 /dev/mapper/cachedev1
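# -C 0 prints completion progress to stdout, -f forces the check even if
# the filesystem looks clean, -y answers yes to every prompt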
Current status:

Code: Select all

DataVol1: Inode 2097230 block 1011353342 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 923796846 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 661128981 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 950010972 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 6291569 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 960495744 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 958398560 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 1049627804 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 1013451231 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 79693532 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 192 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 208 conflicts with critical metadata, skipping block checks.
DataVol1: Inode 2097230 block 48 conflicts with critical metadata, skipping block checks.
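One option I'm weighing, since the check keeps looping on inode 2097230, is clearing that inode from debugfs and then re-running e2fsck. It would sacrifice whatever file that inode belongs to, so it's a last resort:

Code: Select all

# destructive for this one inode: zeroes it so e2fsck stops tripping on it
debugfs -w -R "clri <2097230>" /dev/mapper/cachedev1
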
Some info on my volumes.

Code: Select all

cat /proc/mdstat
md3 : active raid10 sdd3[0] sde3[3] sdf3[2] sdc3[1]
      19512966144 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]
md2 : active raid10 sdh3[0] sdi3[3] sdj3[2] sdg3[1]
      23417852928 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]
md3

Code: Select all

sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 1.0
  Creation Time : Tue Dec 15 11:54:40 2020
     Raid Level : raid10
     Array Size : 19512966144 (18609.02 GiB 19981.28 GB)
  Used Dev Size : 9756483072 (9304.51 GiB 9990.64 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug 31 12:37:39 2021
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : 3
           UUID : a785b772:da5b9cb1:25eb7e1f:585c7599
         Events : 556

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync set-A   /dev/sdd3
       1       8       35        1      active sync set-B   /dev/sdc3
       2       8       83        2      active sync set-A   /dev/sdf3
       3       8       67        3      active sync set-B   /dev/sde3
md2

Code: Select all

sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 1.0
  Creation Time : Thu Dec 10 17:51:30 2020
     Raid Level : raid10
     Array Size : 23417852928 (22333.01 GiB 23979.88 GB)
  Used Dev Size : 11708926464 (11166.50 GiB 11989.94 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Aug 31 08:00:44 2021
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : 2
           UUID : 33979b11:47176f91:decf077e:56b2a078
         Events : 311

    Number   Major   Minor   RaidDevice State
       0       8      115        0      active sync set-A   /dev/sdh3
       1       8       99        1      active sync set-B   /dev/sdg3
       2       8      147        2      active sync set-A   /dev/sdj3
       3       8      131        3      active sync set-B   /dev/sdi3
I asked support for a command to mount the volume in read-only mode so I could copy data.
Unfortunately, it didn't work.

Code: Select all

mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA -o ro,noload
mount: wrong fs type, bad option, bad superblock on /dev/mapper/cachedev1,
missing codepage or other error
Error from dmesg:

Code: Select all

[ 2250.754713] EXT4-fs (dm-0): ext4_check_descriptors: Block bitmap for group 16320 not in group (block 2389265164093408059)!
[ 2250.765758] EXT4-fs (dm-0): group descriptors corrupted!
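Since dmesg says the group descriptors are corrupted, one thing I want to try next is pointing mount and e2fsck at a backup superblock. A sketch of what I have in mind (the block numbers assume a default 4k-block ext4 layout, which I haven't confirmed for this volume):

Code: Select all

# list the backup superblock locations without writing anything
dumpe2fs /dev/mapper/cachedev1 | grep -i superblock
# or, if dumpe2fs fails too, simulate mkfs to print the usual locations
# (-n is a dry run; accurate only if the original mkfs used defaults)
mke2fs -n /dev/mapper/cachedev1

# mount takes sb= in 1k units, so backup block 32768 on a 4k-block
# filesystem becomes sb=131072
mount -t ext4 -o ro,noload,sb=131072 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA

# e2fsck takes the backup block number directly
e2fsck_64 -b 32768 -fy /dev/mapper/cachedev1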
What are my options now?
  • Can I mount the drives under Linux? (A sketch of what I have in mind is below.)
  • Do I need to connect all four drives, or just two?
  • How can I force-mount the volume even when there is an error?
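Roughly what I plan to try on a plain Linux box, assuming QTS uses standard mdadm with LVM on top. QNAP adds its own device-mapper layers, so the volume group and logical volume names below are guesses; check lvs for the real ones:

Code: Select all

# find out which partitions carry md superblocks
mdadm --examine /dev/sd[a-z]3

# assemble read-only; a near-2 RAID10 needs at least one drive from each
# mirrored pair, but with all four healthy just connect all four
mdadm --assemble --scan --readonly
cat /proc/mdstat

# activate the LVM stack that QTS layers on top of the md device
vgscan
vgchange -ay
lvs

# mount read-only with journal replay disabled (vg1/lv1 is a guess)
mkdir -p /mnt/recovery
mount -t ext4 -o ro,noload /dev/mapper/vg1-lv1 /mnt/recovery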
benjuzzo
New here
Posts: 2
Joined: Wed Oct 27, 2021 3:13 pm

Re: QNAP support says my RAID10 isn't recoverable - What to do?

Post by benjuzzo »

We have the same problem after a firmware update, but nobody has answered our ticket.
We have a 12-bay RAID 5.
benjuzzo
New here
Posts: 2
Joined: Wed Oct 27, 2021 3:13 pm

Re: QNAP support says my RAID10 isn't recoverable - What to do?

Post by benjuzzo »

We tried, but on our system it has now been running for 3 days (80TB). Why?