Retrieve DATA from broken QNAP RAID 1


Retrieve DATA from broken QNAP RAID 1

Post by LainX »

Hi,

Here is some help for anyone who, like me, needs to retrieve data from a broken QNAP with RAID 1.

WARNING: Carry out the following commands at your own risk. I do not hold myself responsible if you lose your data. Check with QNAP first whether there is a way to repair your QNAP NAS so that you can access the data on the disks normally.

In my case the QNAP model is a TS-251+ using Linux software RAID + LVM2, and the filesystem is EXT4:

1 - Identify the device node of the attached HDD (USB or SATA). The biggest partition is where QNAP places your data:

Code: Select all

fdisk -l

Disk /dev/sdb: 2,75 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: EFRX-68AX9N0    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AEFC8B50-1DBC-46F6-844F-60D7BD2704AB

Device           Start       End      Sectors   Size Type
/dev/sdb1           40    1060289    1060250 517,7M Microsoft basic data
/dev/sdb2      1060296    2120579    1060284 517,7M Microsoft basic data
/dev/sdb3      2120584 5842744109 5840623526   2,7T Microsoft basic data <<<<<<
/dev/sdb4   5842744112 5843804399    1060288 517,7M Microsoft basic data
/dev/sdb5   5843804408 5860511999   16707592     8G Microsoft basic data
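Optionally, lsblk gives a compact cross-check of the same layout and also shows any md or LVM devices the kernel has already assembled from the disk (adjust /dev/sdb to your device):

Code: Select all

lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb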
2 - Check whether the RAID metadata is accessible:

Code: Select all

mdadm -E /dev/sdb3

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 77976160:8f893135:eb6cdd71:733bb188
           Name : 1
  Creation Time : Mon Jun 22 01:07:33 2020
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
     Array Size : 2920311616 (2785.03 GiB 2990.40 GB)
  Used Dev Size : 5840623232 (2785.03 GiB 2990.40 GB)
   Super Offset : 5840623504 sectors
   Unused Space : before=0 sectors, after=264 sectors
          State : clean
    Device UUID : a997883d:504f6cc2:bd13a2e1:3c1e2e92

    Update Time : Mon Aug 24 12:11:56 2020
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 7fb23406 - correct
         Events : 708


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
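If you are not sure which partition carries the RAID superblock, you can also let mdadm report every array it recognises across all attached disks:

Code: Select all

mdadm --examine --scan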
3 - Check which "/dev/mdX" device is currently using your partition:

Code: Select all

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : active raid1 sdb3[1] <<<<<<
      2920311616 blocks super 1.0 [2/1] [_U]
      
md322 : active (auto-read-only) raid1 sdb5[1]
      7235136 blocks super 1.0 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active (auto-read-only) raid1 sdb2[1]
      530112 blocks super 1.0 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md9 : active (auto-read-only) raid1 sdb1[1]
      530048 blocks super 1.0 [64/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md13 : active (auto-read-only) raid1 sdb4[1]
      458880 blocks super 1.0 [64/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
4 - Stop all activity on the MD device:

Code: Select all

mdadm --stop /dev/md127
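Note: if the assemble in the next step complains that the device is busy, other auto-assembled arrays from the same disk (md9, md13, md256, md322 in the mdstat output above) may still be holding it; stop those the same way, adjusting the names to whatever cat /proc/mdstat shows on your system:

Code: Select all

mdadm --stop /dev/md9
mdadm --stop /dev/md13
mdadm --stop /dev/md256
mdadm --stop /dev/md322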
5 - Assemble the RAID 1 array with the single remaining disk:

Code: Select all

mdadm -A -R /dev/md127 /dev/sdb3
mdadm: /dev/md127 has been started with 1 drive.
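Optionally, you can confirm the state of the newly assembled (degraded) array before moving on:

Code: Select all

mdadm -D /dev/md127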

6 - Verify LVM2 and identify the volume group and logical volumes:

root@ubuntu:~# pvscan 
  PV /dev/md127   VG vg288           lvm2 [<2,72 TiB / 0    free]
  Total: 1 [<2,72 TiB] / in use: 1 [<2,72 TiB] / in no VG: 0 [0   ]

root@ubuntu:~# vgdisplay 
  --- Volume group ---
  VG Name               vg288
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  21
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <2,72 TiB
  PE Size               4,00 MiB
  Total PE              712966
  Alloc PE / Size       712966 / <2,72 TiB
  Free  PE / Size       0 / 0   
  VG UUID               E9eFWd-WRmR-C8W6-YSu4-tSgA-1WnT-P4eXqX
   
root@ubuntu:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               vg288
  PV Size               <2,72 TiB / not usable <1,78 MiB
  Allocatable           yes (but full)
  PE Size               4,00 MiB
  Total PE              712966
  Free PE               0
  Allocated PE          712966
  PV UUID               lkTOn4-W5Ni-kjaJ-5ok0-DjbA-h5cc-1xtN0J

root@ubuntu:~# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg288/lv544
  LV Name                lv544
  VG Name                vg288
  LV UUID                4KMo6U-AfNV-amSL-Th2U-UZoZ-qKQW-n0EtKz
  LV Write Access        read/write
  LV Creation host, time Qnap, 2020-06-22 01:07:36 +0200
  LV Status              NOT available
  LV Size                <27,90 GiB
  Current LE             7142
  Segments               2
  Allocation             inherit
  Read ahead sectors     8192
   
  --- Logical volume ---
  LV Path                /dev/vg288/lv1
  LV Name                lv1
  VG Name                vg288
  LV UUID                CYGw0J-01iU-e9vv-GxZe-nWg9-kPL4-DOn3mP
  LV Write Access        read/write
  LV Creation host, time Qnap, 2020-06-22 01:07:47 +0200
  LV Status              NOT available
  LV Size                2,69 TiB
  Current LE             705824
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192

NB: If, after running pvscan, the following warning appears: 'WARNING: PV /dev/md127 in VG vg288 is using an old PV header, modify the VG to update', run the following command:

Code: Select all

vgck --updatemetadata vg288
7 - Before mounting the LVM2 volume, run the following command to activate the logical volumes:

Code: Select all

vgchange -a y

root@ubuntu:~# vgchange -a y
  2 logical volume(s) in volume group "vg288" now active
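You can optionally confirm that both logical volumes are now marked ACTIVE:

Code: Select all

lvscan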

8 - Mount the volume:

Code: Select all

mkdir /media/x-qnap
mount /dev/vg288/lv1 /media/x-qnap/
chmod 0777 /media/x-qnap
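If you only need to copy files off the disk, you can mount read-only instead as an extra precaution (in that case skip the chmod, since the filesystem will not be writable):

Code: Select all

mount -o ro /dev/vg288/lv1 /media/x-qnap/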
Now you can access your data.
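When you have finished copying your data, you can unmount and release everything cleanly:

Code: Select all

umount /media/x-qnap
vgchange -a n vg288
mdadm --stop /dev/md127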
I hope this helps.

Bye!

Re: Retrieve DATA from broken QNAP RAID 1

Post by attre »

Hello.
I have one question: which Linux distribution did you use?

Re: Retrieve DATA from broken QNAP RAID 1

Post by LainX »

If I remember correctly, at the time I was using Ubuntu; now I use Manjaro (a distro based on Arch Linux).

Re: Retrieve DATA from broken QNAP RAID 1

Post by chripopper »

This is awesome, hooked up to my raspberry pi 4 running buster, no problems. Thank you very much for this!

Re: Retrieve DATA from broken QNAP RAID 1

Post by LainX »

chripopper wrote: Mon Jul 12, 2021 3:55 am This is awesome, hooked up to my raspberry pi 4 running buster, no problems. Thank you very much for this!
You're Welcome :D

Re: Retrieve DATA from broken QNAP RAID 1

Post by zbigmazu »

Hello LainX

I am trying to access the data on a disk from my QNAP following your suggestion, but it looks a bit different for me. Logs are below. Can you tell me what I can do to get to the data?
1.
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 00530C06-D7C2-494C-91C6-881273B49457

Device Start End Sectors Size Type
/dev/sdb1 40 1060289 1060250 517.7M Microsoft basic data
/dev/sdb2 1060296 2120579 1060284 517.7M Microsoft basic data
/dev/sdb3 2120584 5842744109 5840623526 2.7T Microsoft basic data
/dev/sdb4 5842744112 5843804399 1060288 517.7M Microsoft basic data
/dev/sdb5 5843804408 5860511999 16707592 8G Microsoft basic data

2.
mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 8adc389b:ac9f9171:fc130280:364c17ea
Name : 1
Creation Time : Sat Apr 22 08:37:46 2017
Raid Level : raid1
Raid Devices : 2

Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
Array Size : 2920311616 (2785.03 GiB 2990.40 GB)
Used Dev Size : 5840623232 (2785.03 GiB 2990.40 GB)
Super Offset : 5840623504 sectors
Unused Space : before=0 sectors, after=264 sectors
State : clean
Device UUID : e7a257a7:ed90c779:4c763bc3:0dfe70be

Update Time : Fri Dec 3 13:40:12 2021
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 91ec1082 - correct
Events : 1146


Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

3. All partitions are inactive
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md123 : inactive sdb3[1](S)
2920311620 blocks super 1.0

md124 : inactive sdb2[1](S)
530124 blocks super 1.0

md125 : inactive sdb1[1](S)
530108 blocks super 1.0

md126 : inactive sdb5[1](S)
8353780 blocks super 1.0

md127 : inactive sdb4[1](S)
530128 blocks super 1.0

4.
mdadm --stop /dev/md123
mdadm: stopped /dev/md123

mdadm -A -R /dev/md123 /dev/sdb3
mdadm: /dev/md123 has been started with 1 drive (out of 2).

pvscan
No matching physical volumes found

5. mdadm --examine --scan >> /etc/mdadm/mdadm.conf

mdadm --stop /dev/md123
mdadm: stopped /dev/md123
mdadm -A -R /dev/md123 /dev/sdb3
mdadm: /dev/md123 has been started with 1 drive (out of 2).

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md123 : active (auto-read-only) raid1 sdb3[1]
2920311616 blocks super 1.0 [2/1] [_U]

md124 : inactive sdb2[1](S)
530124 blocks super 1.0

md125 : inactive sdb1[1](S)
530108 blocks super 1.0

md126 : inactive sdb5[1](S)
8353780 blocks super 1.0

md127 : inactive sdb4[1](S)
530128 blocks super 1.0

unused devices: <none>

pvscan
No matching physical volumes found

After this command md123 is active, but pvscan still returns nothing and I can't run your step 7.
What else can be done to get to the data?

Re: Retrieve DATA from broken QNAP RAID 1

Post by dolbyman »

Which NAS model is the drive from?

Re: Retrieve DATA from broken QNAP RAID 1

Post by zbigmazu »

HS-251+
I have no information about which firmware version was running. The NAS has a damaged motherboard and will not boot. There were 2 disks in RAID 1.

Re: Retrieve DATA from broken QNAP RAID 1

Post by dolbyman »

Fix it with the 100 Ohm trick, as it's probably the LPC issue.

And next time, have external backups at all times.

Re: Retrieve DATA from broken QNAP RAID 1

Post by nas_user_23 »

I know this is an old thread, but I figured I would post here and ask the question just in case.

My QNAP crashed and I had a RAID1 configured with pretty much default settings. I am now trying to follow the instructions in this thread to attempt to mount one of the drives that was in the QNAP on my Ubuntu machine. I tried to follow the instructions exactly, but I had to deviate a little.

First, I was unable to start the first partition without stopping /dev/md9, /dev/md1, /dev/md256, /dev/md322, /dev/md13. So I stopped all of them and then started "/dev/md9" using the following command:

# mdadm -A -R /dev/md9 /dev/sdb3

That seemed to work.

Then I attempted to run:
# pvscan
WARNING: Unrecognized segment type tier-thin-pool
WARNING: Unrecognized segment type thick
WARNING: PV /dev/md9 in VG vg1 is using an old PV header...

So I attempted to update the PV header as instructed:
# vgck --updatemetadata vg1
WARNING: Unrecognised segment type tier-thin-pool
WARNING: Unrecognised segment type thick
WARNING: PV /dev/md9 in VG vg1 is using an old PV header, modify the VG to update.
LV tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
Internal error: LV segments corrupted in tp1.
Cannot process volume group vg1

Is there anything I can do?

Re: Retrieve DATA from broken QNAP RAID 1

Post by dolbyman »

What QNAP model was it?

The md device for data should be md1 for the first pool/volume, not md9.

Also... no backups? (A RAID is NOT a backup!)

Re: Retrieve DATA from broken QNAP RAID 1

Post by Moogle Stiltzkin »

LainX wrote: Here is some help for anyone who, like me, needs to retrieve data from a broken QNAP with RAID 1.
nas_user_23 wrote: Is there anything I can do?
Wrong. The first thing to do is to have a backup plan and act on it. Then, when things like this happen, you can simply recover from the backup. It's inconvenient and time consuming, but you're highly likely to get your data back that way; for example, I run 4x 4 TB RAID 5 and it takes me less than 1-2 days to recover everything.

If you don't take proper care of managing and storing your data (as in having a backup plan, doing a RAID scrub each month if you're using RAID, not exposing your NAS online, and so on) and something happens...

...you'll have to come to the forum or helpdesk and ask for a miracle, or end up going to some recovery service that is happy to charge you a fortune for a chance (no guarantee) at a partial, if not full, recovery of your data.

Moral of the story: to save time, money and stress, simply have a backup plan :/ RAID is not a backup.
https://www.reddit.com/r/qnap/comments/ ... _a_backup/

Interesting recovery guide by the TS, but I wouldn't use it as my first go-to :S. If that's your plan (because you don't use backups), you may want to reconsider.

Re: Retrieve DATA from broken QNAP RAID 1

Post by nas_user_23 »

It was a TS-451.

I know I was supposed to back up. I'm not going to recount my whole history here.

Is there any point in doing a RAID 1 configuration if you can't recover the data? It sounds like QNAP has made their partitioning scheme so freakishly complex that even though the data is present, you have to get lucky with the configuration to mount the drive in another system.

If I buy another QNAP NAS, would these drives just work in the new system?

Thanks.

Re: Retrieve DATA from broken QNAP RAID 1

Post by nas_user_23 »

Also /dev/md9 was selected automatically by Ubuntu when I connected the drive to the system. I'm not sure why /dev/md9 was chosen as the first partition and not /dev/md1.

Re: Retrieve DATA from broken QNAP RAID 1

Post by dolbyman »

As said, RAID is not a backup... so recovery from the internal disks should never be needed.

And why QNAP chose a non-standard method to implement LVM, I do not know, but maybe the implementation of snapshots, cache, etc. has something to do with it.