Recover data from QNAP raid 5 with Ubuntu

Questions about SNMP, Power, System, Logs, disk, & RAID.
NASdaq
New here
Posts: 3
Joined: Wed Jan 27, 2016 1:33 am

Recover data from QNAP raid 5 with Ubuntu

Post by NASdaq »

My QNAP TS-451+ died. No status LED, flashing LAN LED, solid blue USB LED, various HDD LEDs solid red or green. The only thing I could easily establish is that it is dead, really dead. I had to scan through countless threads to find a way to recover my data, and none gave a complete procedure.

Before the usual "RAID is not backup" comments, let me say I do have backups. However, when I defined my backup policy, there were two things I did not plan for:
  • I decided not to back up lossy music files when I have the same files lossless. Today, I realise that I don't want to go through recreating the lossy files.
  • My backup strategy missed a few less-used folders, the kind you never access but would hate to lose (e.g. my past resume, which I only use when I am looking for a job but would be angry to have to recreate from scratch, archives from past jobs, etc.).
So here it is. Disclaimer: tested on the disks of a dead QNAP TS-451+ running QTS 4.5+, with a sane 4-HDD RAID 5 array containing a Storage Pool with one volume. For the OS, I used an Ubuntu Desktop 20.04 LTS USB boot drive.
  1. Create an Ubuntu bootable USB stick. I used Rufus to create a bootable USB stick with persistent storage.
  2. Find a PC with at least as many SATA ports as there are HDDs in your RAID 5 array.
  3. Disconnect all existing HDDs from the PC and connect the NAS HDDs.
  4. Boot from the USB stick and configure as required.
  5. Open a terminal. Note: all commands are run with sudo.
Install mdadm and lvm2.

Code: Select all

sudo apt install mdadm lvm2
...

Code: Select all

sudo mdadm --assemble --scan
mdadm: /dev/md/9 has been started with 4 drives (out of 25).
mdadm: /dev/md/256 assembled from 0 drives and 2 spares - not enough to start the array.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/13 has been started with 4 drives (out of 24).
mdadm: /dev/md/322 assembled from 0 drives and 1 spare - not enough to start the array.
mdadm: /dev/md/256 assembled from 0 drives and 2 spares - not enough to start the array.
mdadm: /dev/md/322 assembled from 0 drives and 1 spare - not enough to start the array.
We have a hint that /dev/md/1 is what we're looking for, since it's the only one started with 4 drives.
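As a quick sanity filter (purely illustrative, not part of the original procedure), you can grep the assemble output for arrays that started cleanly with all member drives, i.e. lines with no "(out of N)" tail. The sample text below mirrors the output above; the drive count of 4 is an assumption you would adjust for your own array.

```shell
# Sample lines mirroring the mdadm output above; on a live system you would
# pipe the real output of `sudo mdadm --assemble --scan` instead.
mdadm_log='mdadm: /dev/md/9 has been started with 4 drives (out of 25).
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/13 has been started with 4 drives (out of 24).'

# Keep only arrays started with exactly 4 drives and no "(out of N)" suffix;
# adjust "4" to the number of disks in your array.
printf '%s\n' "$mdadm_log" | grep 'with 4 drives\.$'
```

Only the /dev/md/1 line survives the filter, matching the reading above.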

Code: Select all

sudo cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] 
md322 : inactive sdc5[3](S)
      8353780 blocks super 1.0
       
md13 : active raid1 sdb4[0] sdc4[25] sdd4[24] sda4[1]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid5 sdb3[0] sdc3[3] sdd3[2] sda3[1]
      11691190848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md256 : inactive sdd2[2](S) sdc2[3](S)
      1060248 blocks super 1.0
       
md9 : active raid1 sdb1[0] sdc1[25] sdd1[24] sda1[1]
      530048 blocks super 1.0 [25/4] [UUUU_____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
md1 is the only one in RAID 5, and it is the biggest.
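The same conclusion can be pulled out mechanically. This is only an illustrative awk sketch over mdstat-style text; the sample mirrors the output above, and on a live system you would run the awk line against /proc/mdstat itself.

```shell
# Sample mirroring the /proc/mdstat output above; on a live system use:
#   awk '/ : active raid5 /{print $1}' /proc/mdstat
mdstat='md13 : active raid1 sdb4[0] sdc4[25] sdd4[24] sda4[1]
md1 : active raid5 sdb3[0] sdc3[3] sdd3[2] sda3[1]
md256 : inactive sdd2[2](S) sdc2[3](S)'

# Print the name of every array that is active in raid5 mode.
printf '%s\n' "$mdstat" | awk '/ : active raid5 /{print $1}'
```

Here that prints only md1, the data array.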

Code: Select all

sudo fdisk -l /dev/md1
Disk /dev/md1: 10.91 TiB, 11971779428352 bytes, 23382381696 sectors
Units: sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
10.91 TiB is pretty much what I expect from my 4 x 4 TB array.
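The expectation itself is simple arithmetic: RAID 5 keeps one drive's worth of parity, so usable space is (drives − 1) × drive size. A minimal sketch of the check, using this array's numbers (4 × 4 TB):

```shell
# RAID 5 usable capacity: one drive's worth of space is consumed by parity.
drives=4
size_tb=4
usable=$(( (drives - 1) * size_tb ))
echo "expected usable: ${usable} TB"
# 12 decimal TB is roughly 12e12 / 2^40 bytes = ~10.91 TiB,
# matching what fdisk reports for /dev/md1.
```

If fdisk had reported something far smaller, that would suggest the wrong md device or a degraded assembly.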

Code: Select all

sudo mkdir /mnt/raid
sudo mount /dev/md1 /mnt/raid
mount: /mnt/raid: unknown filesystem type 'LVM2_member'.
Bugger, it won't mount. Using vgscan:

Code: Select all

sudo vgscan
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  Found volume group "vg1" using metadata type lvm2
Then using lvscan:

Code: Select all

sudo lvscan
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  ACTIVE            '/dev/vg1/lv544' [<111,59 GiB] inherit
  ACTIVE            '/dev/vg1/lv1' [<10,78 TiB] inherit
It looks like /dev/vg1/lv1 is what we're looking for, after following the trail and given its size.

Code: Select all

sudo mount /dev/vg1/lv1 /mnt/raid
sudo ls /mnt/raid
...
...
Success!!!

To retrieve the data, you can use the Ubuntu Desktop file manager to copy files to an external HDD. However, rsync is faster.

Code: Select all

sudo rsync -av --stats --exclude '.Qsync' --exclude '@Recycle' --exclude '.streams/' --exclude '.@__thumb/' /mnt/raid/<folder to recover> /media/ubuntu/<your external hdd>/<destination folder>
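If you run several rsync passes, it can help to keep the QNAP housekeeping folders in a single exclude list so no pass forgets one. A small illustrative sketch (the folder names are the same ones excluded above; `--exclude=` is standard rsync syntax):

```shell
# Build the --exclude arguments once from a single list of QNAP housekeeping
# folders, then reuse them for every rsync pass.
excludes='.Qsync @Recycle .streams .@__thumb'
args=''
for e in $excludes; do
  args="$args --exclude=$e"
done
printf '%s\n' "$args"
# then: sudo rsync -av --stats $args /mnt/raid/<folder to recover> <destination>
```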
That's it. Hope it saves you the few hours I had to spend scanning through various posts on multiple sites!
Last edited by NASdaq on Thu Feb 18, 2021 5:06 am, edited 1 time in total.
OneCD
Guru
Posts: 12010
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: Recover data from QNAP raid 5 with Ubuntu

Post by OneCD »

* topic made sticky *

Nice work! :geek:

Mousetick
Experience counts
Posts: 1081
Joined: Thu Aug 24, 2017 10:28 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Mousetick »

Nice work indeed. You may want to clarify that you were using a Storage Pool rather than a Static Volume on the RAID group, and that there was only 1 Volume in the Storage Pool. The procedure would need to be (slightly) adapted to account for other scenarios.
Moogle Stiltzkin
Guru
Posts: 11448
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....
Contact:

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Moogle Stiltzkin »

https://www.reddit.com/r/qnap/comments/ ... _a_backup/


ty for the tutorial, this is helpful. Also, sorry this happened to you; I know you did your best with the backup, so kudos. But now that you've put your backup to the test, you can see what you could have done better.

I'm unsure how you missed backing up the folders that were least used... were these folders located on the NAS?

For my own backup, I back up the shares on my NAS that I clearly know should be backed up (I use Hybrid Backup Sync, which saves these jobs for the share locations to be backed up), infrequently accessed or not, so these never get omitted.

I only occasionally re-review that infrequently accessed content to decide whether it qualifies to be retained or should be deleted (while at it, I'll also use dupeGuru to check whether I mistakenly kept multiple copies of content, which can easily happen, and thus save on space). But as long as files are in the share, they are still flagged for backup. That's my approach; this way you won't accidentally miss anything.
NAS
[Main Server] QNAP TS-877 (QTS) w. 4tb [ 3x HGST Deskstar NAS & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A (Truenas Core) w. 4x 2TB Samsung F3 (HD203WI) RaidZ1 ZFS + 8gb ddr3 Crucial
[^] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) 2x 4TB Seagate Ironwolf, Raid5
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-253D (Truenas Scale)
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Qotom Pfsense|100mbps FTTH | Win11, Ryzen 5600X Desktop (1x2tb Crucial P50 Plus M.2 SSD, 1x 8tb seagate Ironwolf,1x 4tb HGST Ultrastar 7K4000)


Resources
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin
ITConsultingBroll
First post
Posts: 1
Joined: Wed Mar 10, 2021 5:29 am

Re: Recover data from QNAP raid 5 with Ubuntu

Post by ITConsultingBroll »

Hi. That's a great guide which has helped me a lot so far. Thanks for posting it.
Your instructions helped me get access to the disks and read the RAID information, but I face a problem now.

When I run vgscan and lvscan, I receive the following error messages:

Code: Select all

sudo vgscan
  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found
  /dev/sdf: open failed: No medium found
  /dev/sdg: open failed: No medium found
  Error reading device /dev/sda2 at 542769152 length 4.
  bcache_invalidate: block (10, 0) still held
  bcache_abort: block (10, 0) still held
  Error reading device /dev/sda2 at 542855168 length 4.
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1

Code: Select all

sudo lvscan
  /dev/sdd: open failed: No medium found
  /dev/sde: open failed: No medium found
  /dev/sdf: open failed: No medium found
  /dev/sdg: open failed: No medium found
  Error reading device /dev/sda2 at 542769152 length 4.
  bcache_invalidate: block (10, 0) still held
  bcache_abort: block (10, 0) still held
  Error reading device /dev/sda2 at 542855168 length 4.
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
  LV tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
Trying to mount md1, I receive this error:

Code: Select all

sudo mount /dev/md1 /mnt/raid/
mount: /mnt/raid: unknown filesystem type 'drbd'.

I would be very grateful if someone could help me with this error.
Thank you very much, Martin
acmor
New here
Posts: 2
Joined: Sat Dec 26, 2009 7:02 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by acmor »

Hi NASdaq, thousand thanks for your writing!

My TS-453A motherboard died a week ago :-( I have a backup of everything, but it is ~1 month old.

I tried all 4 HDDs (2x RAID 1) on a PC running Manjaro. All were spinning up and I could see the partitions on them, but I couldn't mount them: unknown filesystem type 'drbd'.

Then I found and followed your description and was able to mount my two RAID1s under Debian and rsynced the missing files (mostly photos of my 9 year old girl).

Once again: many thanks for sharing your wisdom!

UPDATE:
one thing to extend your tutorial: I had to activate my volume before I could mount it:

Code: Select all

root@carpe-fractal:~# lvscan
   inactive          '/dev/vg289/lv545' [<74,54 GiB] inherit
   inactive          '/dev/vg289/lv2' [<7,20 TiB] inherit
root@carpe-fractal:~# lvchange -a y /dev/vg289/lv2
folaht
New here
Posts: 7
Joined: Sat Mar 13, 2021 6:08 am

Re: Recover data from QNAP raid 5 with Ubuntu

Post by folaht »

I would like to note that I'm having the same issue here as ITConsultingBroll:

Code: Select all

[folaht@Stohrje-uq ~]$ sudo mdadm --assemble --scan
mdadm: No arrays found in config file or automatically
[folaht@Stohrje-uq ~]$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   1,8T  0 disk
├─sda1        8:1    0 517,7M  0 part
│ └─md127     9:127  0 517,6M  0 raid1
├─sda2        8:2    0 517,7M  0 part
│ └─md123     9:123  0 517,7M  0 raid1
├─sda3        8:3    0   1,8T  0 part
│ └─md125     9:125  0   1,8T  0 raid1
├─sda4        8:4    0 517,7M  0 part
│ └─md124     9:124  0 448,1M  0 raid1
└─sda5        8:5    0     8G  0 part
  └─md126     9:126  0   6,9G  0 raid1
mmcblk0     179:0    0  29,8G  0 disk
├─mmcblk0p1 179:1    0 213,6M  0 part  /boot
└─mmcblk0p2 179:2    0  29,6G  0 part  /home
                                       /
zram0       253:0    0   2,7G  0 disk  [SWAP]
[folaht@Stohrje-uq ~]$ sudo cat /proc/mdstat
Personalities : [raid1]
md123 : active (auto-read-only) raid1 sda2[0]
      530112 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active (auto-read-only) raid1 sda4[0]
      458880 blocks super 1.0 [64/1] [U_______________________________________________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active (auto-read-only) raid1 sda3[0]
      1943559616 blocks super 1.0 [1/1] [U]

md126 : active (auto-read-only) raid1 sda5[0]
      7235136 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sda1[0]
      530048 blocks super 1.0 [64/1] [U_______________________________________________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[folaht@Stohrje-uq ~]$ sudo fdisk -l /dev/md125
Disk /dev/md125: 1.81 TiB, 1990205046784 bytes, 3887119232 sectors
Units: sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
[folaht@Stohrje-uq ~]$ sudo mkdir /mnt/raid
[folaht@Stohrje-uq ~]$ sudo mount /dev/md125 /mnt/raid
mount: /mnt/raid: unknown filesystem type 'LVM2_member'.

Code: Select all

[folaht@Stohrje-uq ~]$ sudo vgscan
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: Unrecognised segment type flashcache
  WARNING: PV /dev/md125 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
 

Code: Select all

[folaht@Stohrje-uq ~]$ sudo lvscan
  WARNING: Unrecognised segment type tier-thin-pool
  WARNING: Unrecognised segment type thick
  WARNING: Unrecognised segment type flashcache
  WARNING: PV /dev/md125 in VG vg1 is using an old PV header, modify the VG to update.
  LV vg1/tp1, segment 1 invalid: does not support flag ERROR_WHEN_FULL. for tier-thin-pool segment.
  Internal error: LV segments corrupted in tp1.
  Cannot process volume group vg1
[folaht@Stohrje-uq ~]$ sudo mount /dev/vg1/lv1 /mnt/raid
mount: /mnt/raid: special device /dev/vg1/lv1 does not exist.
Antjac
New here
Posts: 9
Joined: Sat Oct 23, 2021 9:03 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Antjac »

Hi, bit late to the party, but do you think this may work with a JBOD array?
Thanks...
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Recover data from QNAP raid 5 with Ubuntu

Post by dolbyman »

A spanning JBOD would be linear md mode instead of RAID 5... easy to adapt the guide (with the --assemble --scan option it's a 1:1 adoption anyway).

if you ever get your data back, start making backups
Antjac
New here
Posts: 9
Joined: Sat Oct 23, 2021 9:03 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Antjac »

Hello,
Many thanks for your response.
I've been through loads of threads; none read as well as yours.
I would really appreciate some assistance with my JBOD array, if you can.
I have 4 drives, 4 storage pools, thin provisioned, 10 TB; none have had anything written to them since my issue
(which resulted in all my drives showing as empty when QNAP support logged in via PuTTY / TeamViewer; they couldn't find / see the headers, if that's what they're called).
This all started when I tried to interrogate the first drive with an HDD docking station to see if I could recover my files and accidentally formatted it to GPT, so it now shows up as an empty NTFS drive. (This was after my issue first started, so not the original cause, but I fear it is now.)
I have tried TestDisk, which I think shows the original partitions, but am too scared to do anything else for fear of making things worse.
Is there a way of converting this back to the original Ubuntu/QNAP partitioning so I can start to recover my files? I know they are still on there, as I have seen evidence of their presence using some free recovery software, but I need the whole array connected together for this to be truly effective.
I have a PC loaded with all the drives in the correct order, dual boot (Windows 10 or Ubuntu), and have created the boot disk on USB.
I'm just not sure what you mean by "spanning jbod would be linear md mode instead of raid5 ...easy to adapt the guide (with the --assemble --scan option a 1:1 adoption anyways)".
Anything you can advise would be great...
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Recover data from QNAP raid 5 with Ubuntu

Post by dolbyman »

Antjac wrote: Wed Dec 08, 2021 3:58 am I'm just not sure what you mean by "spanning jbod would be linear md mode instead of raid5 ...easy to adapt the guide (with the --assemble --scan option a 1:1 adoption anyways)".
Try the guide with the same option flags (--assemble --scan); the RAID metadata should be found no matter what mode (linear, raid0, raid5, etc.) the disks were in.
Antjac
New here
Posts: 9
Joined: Sat Oct 23, 2021 9:03 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Antjac »

Hi, just tried this now, and got this:
mdadm: No arrays found in config file or automatically
ubuntu@ubuntu:~$


My first disk is /dev/sdc1 (the one formatted as NTFS) - 1 TB
Second - /dev/sdb - 1 TB
Third - /dev/sda - 4 TB
Fourth - /dev/sde - 4 TB
All of them are recognised in the Disks app as Unknown, except the first, which says NTFS - Not Mounted.
Any ideas what I can try now, please?
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Recover data from QNAP raid 5 with Ubuntu

Post by dolbyman »

So where is sdc from? (It can't be from a QNAP, as internal disks cannot be formatted NTFS.)
Antjac
New here
Posts: 9
Joined: Sat Oct 23, 2021 9:03 pm

Re: Recover data from QNAP raid 5 with Ubuntu

Post by Antjac »

Sorry, just trying to log in on the other PC so I can send a screenshot of the information from lsblk.
That drive is the one I accidentally formatted to GPT when I tried to interrogate it with the desktop HDD dock; it did not come from the QNAP, you are correct.
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Recover data from QNAP raid 5 with Ubuntu

Post by dolbyman »

So what now?

"The drive did NOT come from a QNAP"
or
"That drive is the one I accidentally formatted to GPT when I tried to interrogate it with the desktop HDD dock"

So it's either a disk that has nothing to do with the NAS, or a disk that was accidentally formatted and used to be in the NAS.