Mount QNAP Drives to Linux

dolbyman
Guru
Posts: 20497
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Mount QNAP Drives to Linux

Post by dolbyman » Thu Jan 31, 2019 4:31 am

According to Google, Amazon Glacier saves and restores the data in standard hierarchy buckets; you just need a client that can handle those.

e.g.
http://s3browser.com/

martijnatlico
New here
Posts: 6
Joined: Thu May 26, 2016 4:17 pm

Re: Mount QNAP Drives to Linux

Post by martijnatlico » Tue Feb 05, 2019 2:47 am

Thanks, I found a client that handles the QNAP Glacier format, and that sort of worked, though performance is horrible and the restore is taking forever, so I'm not going to recommend it here.

As for restoring the data of a modern QNAP RAID array the way you could with older NASes: that's just not possible at this time. Here are three observations from trying to access the RAID10 array out of my TS-453A:

1. The md RAID assembles just fine, and the two system partitions are accessible.
2. For the data volumes, QNAP uses a drbd fork (rqdrbd) to provide the RTRR functionality (even on a standalone NAS), which is incompatible with the default Linux setup. See http://lists.linbit.com/pipermail/drbd- ... 24106.html for an example. This is where I got stuck.
3. Even if you get past the drbd issue, chances are you'll run into the next problem: the drbd volume contains an LVM PV holding a volume group that can contain volumes using a proprietary QNAP extension, namely thick volumes. See viewtopic.php?t=93862 for an example.
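The layering described above can at least be probed read-only with standard tools. A minimal sketch, assuming the data partition is sda3 and using md100 as the array name (both are examples, not QNAP defaults):

```shell
# Probe each layer without writing anything; device names are examples.
sudo mdadm --examine /dev/sda3                    # md superblock on the data partition
sudo mdadm --assemble --run /dev/md100 /dev/sda3  # bring up the md layer
sudo file -s /dev/md100                           # may report a drbd or LVM2 PV signature
sudo pvscan && sudo lvscan                        # LVM layers, if the PV is visible
```

If `file -s` reports a drbd signature rather than an LVM2 PV, that is the qdrbd wall described above.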

The LVM fork is contained in the GPL sources, and one user reported success in building it, but qdrbd is not, so you're out of luck there.

Just leaving this here for any hopeful googlers. If anyone figures out the qdrbd thing I'd love to hear about it!

jronpaul
New here
Posts: 5
Joined: Tue Apr 23, 2019 9:52 am

Re: Mount QNAP Drives to Linux

Post by jronpaul » Thu Apr 25, 2019 11:05 am

Some people can't afford to get another type of disk array or setup to back up the QNAP. I used QNAP for my backups, and I back up from the QNAP to another simple DNS-323 NAS, but that NAS doesn't have the same space, so I can only back up the really important data.
That said, my 3-year-old QNAP died and I'm in the same boat: I'm trying to mount the RAID1 drives, and it seems they have a proprietary config.
Meanwhile, my old D-Link DNS-323 no-frills NAS is still alive, and I can easily mount its drives on a Linux server.
I won't be buying QNAP again either, as their answer was the same: so sorry, buy another QNAP... yeah, no thanks.

cyclofosfamide
New here
Posts: 2
Joined: Tue Oct 27, 2015 4:07 am

Re: Mount QNAP Drives to Linux

Post by cyclofosfamide » Thu Jun 06, 2019 3:02 am

philippelt wrote:
Tue Jan 22, 2019 12:42 am
Hello,

Considering the price range, device kind, and target customers of QNAP (mostly home customers), I am not sure it can compare to enterprise datacenters with multiple sites, SAN, backup systems, 20G networks, and so on...
Thank you, THANK YOU, so much! You helped me get my data back. The worst-case scenario almost happened: losing ALL home videos (kids and stuff) from 2003 till today, and losing photos from April 2017 up to today (yes, the last time I backed those up was in 2017!).

What happened: my TS-251-4G had a hardware error; SATA port 2 was down. No big deal, I used RAID1. The TS-251 was on the latest beta firmware.
I got a new TS-251+, put in both drives, and did the firmware update as requested (but not the newest beta!). *ouch* The disks were not recognized, so no prompt about restoring the RAID1/disks.
Then I put the disks one by one back in the old TS-251-4G and installed the same firmware as on the new TS-251+. Guess what... nothing happened; the disks were all unreachable.

So I installed Debian on some other PC and retrieved ALL my data, thanks to your guide, philippelt. :D

dolbyman
Guru
Posts: 20497
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Mount QNAP Drives to Linux

Post by dolbyman » Thu Jun 06, 2019 3:07 am

Bookmarked the guide, to maybe help other people in the future.

But still... get in the habit of making regular backups. If the NAS had killed your drives, no amount of shell trickery would get your data back.

cyclofosfamide
New here
Posts: 2
Joined: Tue Oct 27, 2015 4:07 am

Re: Mount QNAP Drives to Linux

Post by cyclofosfamide » Thu Jun 06, 2019 3:18 am

Yup dolbyman, so true. I don't care about all the downloads, movies, apps, etc., but the personal stuff is worth the backup. Gonna buy a new external disk and schedule the backups instead of backing up manually and infrequently.

Niemand_01
First post
Posts: 1
Joined: Wed Jun 19, 2019 2:51 am

Re: Mount QNAP Drives to Linux

Post by Niemand_01 » Wed Jun 19, 2019 3:12 am

I also had the problem that my NAS died with a hardware failure.

What worked for me was using the program r-explorer: https://www.r-explorer.com/#ourproducts

This program manages to read through the full QNAP software stack (mdadm -> drbd -> LVM -> ext4, in my case).

Since I did not want to pay the licence fee, I used the following workaround: the program displays the sector offset and the sector size of the device; use these to do a mount command:

Code: Select all

sudo mount -o offset=109710872576 /dev/sda /mnt/
where you need to change the offset to the value you computed, and the device to that of your HDD.

A different option to get the correct offset is the program 'testdisk', but that would have taken quite long, so I have not been able to test it yet.
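For clarity, the byte offset passed to `mount -o offset=` is just the sector offset multiplied by the sector size. A quick sketch, assuming the tool reports the offset in 512-byte sectors (the sector offset below is a hypothetical example that happens to yield the value used in the command above):

```shell
# byte offset = sector offset * sector size; values here are examples
SECTOR_OFFSET=214279048   # hypothetical value as reported by the recovery tool
SECTOR_SIZE=512           # common logical sector size; check your tool's report
echo $((SECTOR_OFFSET * SECTOR_SIZE))   # byte offset to pass to mount -o offset=...
```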

mickwood
First post
Posts: 1
Joined: Tue Jul 09, 2019 8:17 pm

Re: Mount QNAP Drives to Linux

Post by mickwood » Tue Jul 09, 2019 11:51 pm

Good to know I'm not alone trying to mount a QNAP drive ..

I have a TS-251+ with 2 disks configured as RAID1. The hardware has failed, and I want to mount one of the drives on an Ubuntu machine. I have followed philippelt's procedure, and all looks good until I run pvscan, when I get the error message "No matching physical volume found".
I get the same result with Niemand_01's method. Incidentally, I can see the data using r-explorer, but I don't like the idea of paying for software to get to data that I should have access to.

Any help would be most welcome.

qianfulong
New here
Posts: 2
Joined: Sat Nov 09, 2019 5:48 pm

Re: Mount QNAP Drives to Linux

Post by qianfulong » Sun Nov 10, 2019 8:03 am

@Niemand_01: Would you be so kind as to explain how exactly you used R-Explorer (we are talking about the "Recovery Explorer RAID" version of it, right?) to get to your data? Did you manage to have R-Explorer reassemble your RAID so that you could actually access the original file system and ALL your data, or did you use the deep-scanning part of R-Explorer? The latter allowed me to recover some of my data, but by far not all of it. Also, e.g., m2ts videos could be recovered, but when saved back to another HDD they were not readable by any video player.

So I'd be interested in whether you really had access to all of your original data or just some of it. If you managed to reassemble your RAID, I would be grateful if you could let me know how you achieved that. Thanks in advance!

eveares
Starting out
Posts: 23
Joined: Fri Apr 22, 2016 10:46 am

Re: Mount QNAP Drives to Linux

Post by eveares » Sun Dec 08, 2019 2:08 pm

I am curious, how would you recover and assemble a RAID 6 QNAP array in Ubuntu?
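In principle the md step generalizes: RAID 6 assembles the same way as RAID 1, just with more member partitions; the drbd/LVM layers described earlier in the thread still sit on top. A hedged sketch, assuming four members named sd[a-d]3 (names are examples):

```shell
# Assemble a RAID 6 from its member partitions; device names are examples.
sudo mdadm --assemble --run /dev/md100 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
cat /proc/mdstat   # verify the array came up (RAID 6 tolerates up to 2 missing members)
```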

Undesirable
Starting out
Posts: 14
Joined: Sat Jan 14, 2017 12:37 pm

Re: Mount QNAP Drives to Linux

Post by Undesirable » Sun Dec 08, 2019 5:24 pm

philippelt wrote:
Tue Jan 22, 2019 12:42 am
If you mount the removed drive on a regular Linux system (I used an Ubuntu 14.04), you can retrieve the content with the following procedure:
  • You should retrieve, under /mnt/anywhere, the data that was previously located on the QNAP physical drives in one or more shared folders.
Hi, thanks very much for your guide on attempting to retrieve data from RAID1 on a QNAP.
I almost reached the end of the procedure, but then failed with:

Code: Select all

root@odroid:~# sudo mount /mnt/old_hdd /dev/vg1/lv1
mount: /dev/vg1/lv1: mount point does not exist.
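Worth noting: mount(8) takes the device first and the mount point second, so the "mount point does not exist" error above is exactly what swapped arguments would produce (mount is treating /dev/vg1/lv1 as the mount point). A corrected sketch:

```shell
sudo mkdir -p /mnt/old_hdd            # make sure the mount point exists
sudo mount /dev/vg1/lv1 /mnt/old_hdd  # device first, mount point second
```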
I hoped to solve it by rebooting, after which I attempted to re-assemble the array using:

Code: Select all

root@odroid:~# sudo mdadm -A -R /dev/md100 /dev/sda3
mdadm: Fail create md100 when using /sys/module/md_mod/parameters/new_array
mdadm: /dev/md100 has been started with 1 drive (out of 2).
That error message is always displayed both before and after the reboot; I couldn't find a solution but md100 is created regardless. Anyway, prior to the reboot I could get past this point, but now pvscan and vgdisplay don't detect anything.
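One step that is easy to miss after a reboot: the md array may assemble, but the volume group can stay inactive until LVM rescans and activates it. A hedged sketch that may bring the LVs back (vg1 is the group name shown later in this post):

```shell
sudo pvscan --cache     # refresh LVM's device cache
sudo vgscan             # rescan for volume groups
sudo vgchange -ay vg1   # activate all logical volumes in vg1
sudo lvdisplay          # lv1 should now show "available" if this worked
```

If pvscan still reports nothing, the PV metadata may be hidden behind the drbd layer discussed earlier in the thread.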

Here are the results of mdadm examining the partition with the data required to be restored. This still works after a reboot:

Code: Select all

root@odroid:~# mdadm -E /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 657638bf:8d6e777e:6b909172:6ae270ca
           Name : 1
  Creation Time : Fri Jan 13 18:54:29 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 11701135240 (5579.54 GiB 5990.98 GB)
     Array Size : 5850567616 (5579.54 GiB 5990.98 GB)
  Used Dev Size : 11701135232 (5579.54 GiB 5990.98 GB)
   Super Offset : 11701135504 sectors
   Unused Space : before=0 sectors, after=240 sectors
          State : clean
    Device UUID : 3b2a0a8a:ecb0044d:aa4c3856:6f9db46c

Internal Bitmap : -32 sectors from superblock
    Update Time : Fri Sep  6 21:24:42 2019
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : a021d627 - correct
         Events : 239


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
Here is the result of pvscan prior to the reboot; md100 isn't displayed after the reboot:

Code: Select all

root@odroid:~# pvscan
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 0: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4096: Input/output error
  PV /dev/md100   VG vg1             lvm2 [<5.45 TiB / 0    free]
  Total: 1 [<5.45 TiB] / in use: 1 [<5.45 TiB] / in no VG: 0 [0   ]
Now:

Code: Select all

root@odroid:~# pvscan
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 0: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/mmcblk0rpmb: read failed after 0 of 4096 at 4096: Input/output error
  No matching physical volumes found
And lvdisplay prior to the reboot; this does nothing at all now either:

Code: Select all

root@odroid:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/lv544
  LV Name                lv544
  VG Name                vg1
  LV UUID                qchzxH-uBtZ-4XyR-TTjo-na3D-2NO5-BnVD25
  LV Write Access        read/write
  LV Creation host, time NAS0ADD56, 2017-01-13 18:54:33 +0000
  LV Status              NOT available
  LV Size                55.79 GiB
  Current LE             14283
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192

  --- Logical volume ---
  LV Path                /dev/vg1/lv1
  LV Name                lv1
  VG Name                vg1
  LV UUID                uXJnYO-Kvp8-I9ZW-wctW-1HV3-soIW-odGOiY
  LV Write Access        read/write
  LV Creation host, time NAS0ADD56, 2017-01-13 18:54:40 +0000
  LV Status              NOT available
  LV Size                5.39 TiB
  Current LE             1414077
  Segments               1
  Allocation             inherit
  Read ahead sectors     8192
Now:

Code: Select all

root@odroid:~# lvdisplay
root@odroid:~#
Here's a screenshot of disk management in Linux MATE showing the assembled partition as md100 (I can still get this far after a reboot, but no pvscan or lvdisplay results):
Disks_Raid1_Recovery_Attempt.png
To recap, I got as far as attempting to mount the drive before a reboot, but it wouldn't mount because it said the mount point doesn't exist, even though I created the directory in /mnt/.

After the reboot I can get as far as assembling sda3 as md100, but now pvscan and lvdisplay show nothing. mdadm --examine /dev/sda3 still works, however.

S.Haran
Getting the hang of things
Posts: 59
Joined: Sun Dec 16, 2018 12:17 am
Contact:

Re: Mount QNAP Drives to Linux

Post by S.Haran » Mon Dec 09, 2019 11:45 pm

You were close. But it seems your /dev/md100 did not assemble after the reboot. What is the output from...

Code: Select all

cat /proc/mdstat
On-Line Data Recovery Consultant. RAID / NAS / Linux Specialist.
Serving clients worldwide since 2011. Complex cases welcome.
https://FreeDataRecovery.us

Undesirable
Starting out
Posts: 14
Joined: Sat Jan 14, 2017 12:37 pm

Re: Mount QNAP Drives to Linux

Post by Undesirable » Tue Dec 10, 2019 10:08 am

S.Haran wrote:
Mon Dec 09, 2019 11:45 pm
You were close. But it seems your /dev/md100 did not assemble after the reboot. What is the output from...

Code: Select all

cat /proc/mdstat
Hi, thanks for looking into it. The output is below. pvscan and vgdisplay are still not working on it; not sure why they did work when I first installed lvm2.

Code: Select all

root@odroid:~# cat /proc/mdstat
Personalities : [raid1]
md100 : active (auto-read-only) raid1 sda3[1]
      5850567616 blocks super 1.0 [2/1] [_U]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>
Also some dmesg tail here:

Code: Select all

root@odroid:~# dmesg | tail
[  612.484221] fb: osd[0] enable: 1 (Xorg)
[  612.488339] fb: osd[0] enable: 0 (Xorg)
[  612.505102] fb: osd[0] enable: 0 (Xorg)
[  615.409049] md: md100 stopped.
[  615.411148] md: bind<sda3>
[  615.422836] md: raid1 personality registered for level 1
[  615.431205] md/raid1:md100: active with 1 out of 2 mirrors
[  615.431445] created bitmap (44 pages) for device md100
[  615.434459] md100: bitmap initialized from disk: read 3 pages, set 0 of 89273 bits
[  615.468744] md100: detected capacity change from 0 to 5990981238784

S.Haran
Getting the hang of things
Posts: 59
Joined: Sun Dec 16, 2018 12:17 am
Contact:

Re: Mount QNAP Drives to Linux

Post by S.Haran » Tue Dec 10, 2019 11:36 am

So md100 exists, but its logical volumes are not visible. You can confirm what md100 contains with...

Code: Select all

file -s /dev/md100
But since this is a RAID1 there are other options to get at the data. If you run testdisk from cgsecurity.org it should find the data partition and let you access and copy the data. The data partition should be in a Linux ext3 filesystem format. So look for that in the testdisk scan results.
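A sketch of that approach (read-only until you explicitly choose to write anything; device names are examples):

```shell
sudo testdisk /log /dev/sda   # interactive: pick the disk, then Analyse -> Quick Search
# or probe for filesystem/container signatures non-interactively:
sudo blkid -p /dev/md100      # low-level superblock probe, no cache
```

testdisk's scan can take hours on a multi-TB disk, so expect to leave it running.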

Undesirable
Starting out
Posts: 14
Joined: Sat Jan 14, 2017 12:37 pm

Re: Mount QNAP Drives to Linux

Post by Undesirable » Tue Dec 10, 2019 11:51 am

S.Haran wrote:
Tue Dec 10, 2019 11:36 am
So md100 exists but is not a LV. You can confirm with...

Code: Select all

file -s /dev/md100
But since this is a RAID1 there are other options to get at the data. If you run testdisk from cgsecurity.org it should find the data partition and let you access and copy the data. The data partition should be in a Linux ext3 filesystem format. So look for that in the testdisk scan results.
Oh, I thought they must have been, because there were volumes displayed by pvscan and lvdisplay the first time I tried them. All the instructions on page 2 worked up until the final step of mounting. Then I rebooted in the hope it would fix the mounting issue, and pvscan & lvdisplay stopped showing anything.

Currently, the physical disk partition shows up as a Linux RAID member when I examine it with a standard disk-management tool, and when I assemble it with mdadm, it shows as a "drbd" partition. Anyway, the confirmation output is:

Code: Select all

root@odroid:~# file -s /dev/md100
/dev/md100: LVM2 PV (Linux Logical Volume Manager), UUID: fwMlxr-BIs0-0rkF-gOtR-4s06-aGmf-ntfxw8, size: 5990979629056
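Given that the LVM2 PV signature is visible on md100, the usual next steps would be to activate the volume group and mount read-only. A hedged sketch, reusing the vg1/lv1 names seen earlier in the thread:

```shell
sudo vgscan                                 # rescan for volume groups on md100
sudo vgchange -ay vg1                       # activate the logical volumes
sudo mkdir -p /mnt/old_hdd
sudo mount -o ro /dev/vg1/lv1 /mnt/old_hdd  # read-only, to avoid touching the data
```

If vgscan still finds nothing despite the PV signature, the metadata may sit behind the qdrbd layer discussed earlier, and a deeper-scan tool like testdisk remains the fallback.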
