Recover data from QNAP raid 5 with Ubuntu

NASdaq
New here
Posts: 3
Joined: Wed Jan 27, 2016 1:33 am

Recover data from QNAP raid 5 with Ubuntu

Post by NASdaq » Wed Feb 17, 2021 2:17 pm

My QNAP TS-451+ died: no status LED, a flashing LAN LED, a solid blue USB LED, and various solid red or green HDD LEDs. What I could easily find is that it is dead, really dead. However, I had to scan through countless threads to find a way to recover my data, and none gave a complete procedure.

Before the usual "RAID is not backup" comments, let me say I do have backups. However, when I defined my backup policy, there were two things I did not plan for:
  • I decided not to back up lossy music files when I had the same file lossless. Today, I realise that I don't want to go through recreating the lossy files.
  • My backup strategy missed a few rarely used folders, the kind you never access but would hate to lose (e.g. my past resumes, which I only use when I am looking for a job but would be angry to have to recreate from scratch, archives from past jobs, etc.).
So here it is. Disclaimer: tested on the disks of a dead QNAP TS-451+ running QTS 4.5+, with a healthy 4-HDD RAID 5 array containing a Storage Pool with one volume. For the OS, I used an Ubuntu Desktop 20.04 LTS USB boot drive.
  1. Create an Ubuntu bootable USB stick. I used Rufus to create a bootable USB stick with persistent storage.
  2. Find a PC with at least as many SATA ports as there are HDDs in your RAID 5 array.
  3. Disconnect all existing HDDs from the PC and connect the NAS HDDs.
  4. Boot from the USB stick and configure as required.
  5. Open a terminal. Note: all commands run with sudo. A quick disk check follows below.
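Before touching the array, it is worth confirming that all the NAS disks are detected (a quick sanity check; device names and models will differ on your machine):

Code: Select all

lsblk -o NAME,SIZE,TYPE,MODEL

You should see one entry per NAS disk, plus the USB stick you booted from.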
Install mdadm and lvm2.

Code: Select all

sudo apt install mdadm lvm2
...

Code: Select all

sudo mdadm --assemble --scan
mdadm: /dev/md/9 has been started with 4 drives (out of 25).
mdadm: /dev/md/256 assembled from 0 drives and 2 spares - not enough to start the array.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/13 has been started with 4 drives (out of 24).
mdadm: /dev/md/322 assembled from 0 drives and 1 spare - not enough to start the array.
mdadm: /dev/md/256 assembled from 0 drives and 2 spares - not enough to start the array.
mdadm: /dev/md/322 assembled from 0 drives and 1 spare - not enough to start the array.
We have a hint that /dev/md/1 is what we're looking for, since it's the only array started with all of its 4 drives (md9 and md13 are QNAP's internal system arrays, mirrored across every disk).
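To double-check, you can inspect the array directly (optional; I show only the command, the output will vary):

Code: Select all

sudo mdadm --detail /dev/md1

For a healthy array, look for "State : clean" and "Active Devices : 4".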

Code: Select all

sudo cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md322 : inactive sdc5[3](S)
      8353780 blocks super 1.0
       
md13 : active raid1 sdb4[0] sdc4[25] sdd4[24] sda4[1]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid5 sdb3[0] sdc3[3] sdd3[2] sda3[1]
      11691190848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md256 : inactive sdd2[2](S) sdc2[3](S)
      1060248 blocks super 1.0
       
md9 : active raid1 sdb1[0] sdc1[25] sdd1[24] sda1[1]
      530048 blocks super 1.0 [25/4] [UUUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
md1 is the only RAID 5 array, and it is by far the biggest.

Code: Select all

sudo fdisk -l /dev/md1
Disk /dev/md1: 10.91 TiB, 11971779428352 bytes, 23382381696 sectors
Units: sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
10.91 TiB is pretty much what I expect from my 4 × 4 TB array: RAID 5 keeps one disk's worth of capacity for parity, so the usable size is (4 − 1) × 4 TB = 12 TB ≈ 10.9 TiB.

Code: Select all

sudo mkdir /mnt/raid
sudo mount /dev/md1 /mnt/raid
mount: /mnt/raid: unknown filesystem type 'LVM2_member'.
Bugger, won't mount. Using vgscan:

Code: Select all

sudo vgscan
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  Found volume group "vg1" using metadata type lvm2
Then using lvscan:

Code: Select all

sudo lvscan
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  ACTIVE            '/dev/vg1/lv544' [<111.59 GiB] inherit
  ACTIVE            '/dev/vg1/lv1' [<10.78 TiB] inherit
Following the trail, and given its size, /dev/vg1/lv1 is clearly what we're looking for.
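Note: in my case the logical volumes were already ACTIVE. If yours are listed as inactive, activating the volume group first should help (standard LVM command, though I did not need it):

Code: Select all

sudo vgchange -ay vg1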

Code: Select all

sudo mount /dev/vg1/lv1 /mnt/raid
sudo ls /mnt/raid
...
...
Success!!!
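Tip: since this is pure recovery, it is safer to remount read-only so nothing can accidentally write to the array (optional, but cheap insurance):

Code: Select all

sudo umount /mnt/raid
sudo mount -o ro /dev/vg1/lv1 /mnt/raid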

To retrieve data, you can use the Ubuntu Desktop file manager to copy files to an external HDD. However, rsync is faster.

Code: Select all

sudo rsync -av --stats --exclude '.Qsync' --exclude '@Recycle' --exclude '.streams/' --exclude '.@__thumb/' /mnt/raid/<folder to recover> /media/ubuntu/<your external hdd>/<destination folder>
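When you are done copying, unmount and deactivate everything cleanly before powering off (standard commands, in reverse order of assembly):

Code: Select all

sudo umount /mnt/raid
sudo vgchange -an vg1
sudo mdadm --stop /dev/md1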
That's it. Hope it saves you the few hours I had to spend scanning through various posts on multiple sites!
Last edited by NASdaq on Thu Feb 18, 2021 5:06 am, edited 1 time in total.

OneCD
Ask me anything
Posts: 8980
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: Recover data from QNAP raid 5 with Ubuntu

Post by OneCD » Wed Feb 17, 2021 2:37 pm

* made topic sticky *

Nice work! :geek:


Mousetick
Been there, done that
Posts: 949
Joined: Thu Aug 24, 2017 10:28 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Mousetick » Wed Feb 17, 2021 3:50 pm

Nice work indeed. You may want to clarify that you were using a Storage Pool rather than a Static Volume on the RAID group, and that there was only 1 Volume in the Storage Pool. The procedure would need to be (slightly) adapted to account for other scenarios.
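For example, with a Static Volume there is no LVM layer at all: the filesystem sits directly on the RAID device, so it should mount without the vgscan/lvscan steps (untested sketch, read-only for safety):

Code: Select all

sudo mount -o ro /dev/md1 /mnt/raid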

Moogle Stiltzkin
Ask me anything
Posts: 9924
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Moogle Stiltzkin » Thu Feb 18, 2021 8:45 am

https://www.reddit.com/r/qnap/comments/ ... _a_backup/


ty for the tutorial, this is helpful. also sorry this happened to you; i know you did your best with the backup, so kudos. but now that you've put your backup to the test, you can see what you could have done better.

i'm unsure how you missed backing up folders that were least used.... were these folders located on the nas?

for my own backup, i back up the shares on my nas that i clearly know should be backed up (i use hybrid backup sync, which saves these jobs for the share locations to be backed up). infrequently accessed or not, these would never get omitted.

i only occasionally re-review this infrequently accessed content to decide whether it qualifies to be retained or should be deleted (while at it, i'll also use dupeguru to check whether i mistakenly kept multiple copies of content, which can easily happen, and thus save on space). but as long as files are in the share, they are still flagged for backup. that's my approach; this way you won't accidentally miss anything.
NAS
[Main Server] QNAP TS-877 w. 4tb [ 3x HGST Deskstar NAS & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A w. 5x 2TB Samsung F3 (HD203WI) EXT4 Raid5
[Backup] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) 2x 4TB Seagate Ironwolf, Raid5
[^] QNAP TS-659 Pro II
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-253D
[^] QNAP TS-228
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Qotom Pfsense|100dl/50ul MBPS FTTH Internet | Win10, WC PC-Intel i7 920 Ivy bridge desktop (1x 512gb Samsung 850 Pro SSD + 1x 4tb HGST Ultrastar 7K4000)


Guides & articles
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin

ITConsultingBroll
First post
Posts: 1
Joined: Wed Mar 10, 2021 5:29 am

Re: Recover data from QNAP raid 5 with Ubuntu

Post by ITConsultingBroll » Thu Mar 11, 2021 6:37 am

Hi. That's a great guide which has helped me a lot so far. Thanks a lot for posting it.
Your instructions helped me get access to the disks and read the RAID information, but now I face a problem.

When I run vgscan and lvscan I receive the following error messages:

Code: Select all

sudo vgscan
  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found
  /dev/sdf: open failed: No medium found
  /dev/sdg: open failed: No medium found
  Error reading device /dev/sda2 at 542769152 length 4.
  bcache_invalidate: block (10, 0) still held
  bcache_abort: block (10, 0) still held
  Error reading device /dev/sda2 at 542855168 length 4.
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1

Code: Select all

sudo lvscan
  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found
  /dev/sdf: open failed: No medium found
  /dev/sdg: open failed: No medium found
  Error reading device /dev/sda2 at 542769152 length 4.
  bcache_invalidate: block (10, 0) still held
  bcache_abort: block (10, 0) still held
  Error reading device /dev/sda2 at 542855168 length 4.
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
Trying to mount md1, I receive this error:

Code: Select all

sudo mount /dev/md1 /mnt/raid/
mount: /mnt/raid: unknown filesystem type 'drbd'.
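In case it helps diagnosis: as far as I understand, the LVM metadata is stored as plain text near the start of the PV, so it can be read without changing anything (read-only dump; the 4 MiB count is a guess that should cover the default metadata area):

Code: Select all

sudo dd if=/dev/md1 bs=1M count=4 2>/dev/null | strings | less

The "tier-thin-pool" and "thick" segment types named in the warnings appear to be QNAP-specific LVM extensions that the stock Ubuntu lvm2 does not recognise.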

I would be very grateful if someone could help me with this error.
Thank you very much, Martin
