Hello,
considering QNAP's price range, device class and target customers (mostly home users), I am not sure it compares to enterprise datacenters with multiple sites, SANs, backup systems, 20G networks and so on...
Most consumers will put their data in the NAS expecting that it survives problems. I am quite sure that 99% of the customer base does not realize that they should buy a second NAS to back up the first in order to ensure that they will never lose data.
Unfortunately, the QNAP team made design choices that create complexity where it is not required. Complexity is the worst enemy of security and reliability.
I own a QNAP NAS (TVS-472XT) with 4x6TB of data; only two drives are in RAID, as the others are backups/archives. A drive can fail, the system/backplane can fail, but in 40 years of working in IT I have never seen a system where the system AND the drives all failed simultaneously. It did happen in one company I worked for 30 years ago, when a water circuit failure created a "data lake" in the data center and all servers were under water, but if such a scenario happens to a consumer, NAS recovery will not be the biggest issue they have to deal with.
In normal life, you lose one drive or your server, and you are usually interested in recovering whatever is on the surviving drives. In case of disaster, anything that can be done to recover data must not only be facilitated but also properly documented, because when you need it you will have neither the time to do research nor the mindset to experiment to try to save things.
This is why such a procedure SHOULD be properly documented by QNAP, to maximise data recovery in emergency conditions.
This is also why, despite my NAS working without trouble, I ran some experiments to prepare myself for such a situation.
I extracted a drive from my NAS using the Storage service and the option "Safely remove drive".
In fact, "safely" means that the NAS will remove ALL shared folder definitions and ANY service using the drive (like NFS), and will even CRASH (not shut down) any KVM virtual machine that may be using the drive as an NFS client.
It is a one-way trip: once you have removed the drive, you will
NEVER be able to plug it back in and recover its content from your NAS.
The drives are mounted in your NAS through a heavy software stack: 1) soft MD RAID, 2) LVM, 3) DRBD, even if you just requested that the disk be used standalone with no bells and whistles, so mounting it elsewhere will require some steps. Don't think too much about cache management with such a pile of software, or you will start having nightmares about the reliability of any database working directly on the drives: there is a chance that buffered I/O will make your database engine consider a transaction committed even though the journal has not yet really been written to disk.
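As a quick sanity check, each layer of that stack can be inspected from any Linux shell with the standard tools (assuming mdadm and LVM userland are installed):

```shell
# Block devices and how the MD/LVM layers stack on top of each partition
lsblk

# MD RAID arrays currently assembled by the kernel
cat /proc/mdstat

# LVM view of the same stack: physical volumes, volume groups, logical volumes
sudo pvs
sudo vgs
sudo lvs
```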
If you mount the removed drive on a regular Linux system (I used an Ubuntu 14.04), you can retrieve the content with the following procedure:
- you will need mdadm (RAID tools) and lvm2 (logical volume management); if you don't have them, install them first
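On a Debian/Ubuntu system the two packages can be installed as follows (package names assumed from the Ubuntu archives; adapt to your distribution's package manager):

```shell
# Install the RAID and LVM userland tools
sudo apt-get update
sudo apt-get install mdadm lvm2
```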
- plug the drive in, locate the device created
Code: Select all
$ dmesg | tail
...
[5443747.331855] sdb: sdb1 sdb2 sdb3 sdb4 sdb5
[5443747.332777] sd 1:0:0:0: [sdb] Attached SCSI disk
...
You should find lines like the ones above, showing that a new drive /dev/sdb has been registered
- the device usually has 5 partitions; the one that matters to you is partition 3 -> /dev/sdb3
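If the dmesg lines have already scrolled away, the partition layout can also be checked directly (device name assumed to be /dev/sdb as above):

```shell
# List the partitions of the newly attached drive
lsblk /dev/sdb

# Same information with partition types and sizes
sudo fdisk -l /dev/sdb
```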
- Check that the raid metadata is accessible
Code: Select all
$ mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : aba5df7b:d6d628c0:551ab9e9:e7315cbc
Name : 4
Creation Time : Sat Jan 5 12:00:13 2019
Raid Level : raid1
Raid Devices : 1
Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
Array Size : 2920311616 (2785.03 GiB 2990.40 GB)
Used Dev Size : 5840623232 (2785.03 GiB 2990.40 GB)
Super Offset : 5840623504 sectors
State : clean
Device UUID : edea2d79:79cef3ba:8d5454c8:d1a9013e
Update Time : Mon Jan 21 15:27:39 2019
Checksum : 92a0b9ec - correct
Events : 4
Device Role : Active device 0
Array State : A ('A' == active, '.' == missing)
- if the metadata was found, assemble the RAID array, with the volume as its single member, on an unused array device, here /dev/md100
Code: Select all
$ mdadm -A -R /dev/md100 /dev/sdb3
mdadm: /dev/md100 has been started with 1 drive.
- Scan the drives for physical volumes with pvscan. This should display a volume on your recently created array (/dev/md100)
Code: Select all
$ pvscan
PV /dev/md100 VG vg290 lvm2 [2,72 TiB / 0 free]
Total: 1 [2,72 TiB] / in use: 1 [2,72 TiB] / in no VG: 0 [0 ]
- Check volume group and available volumes
Code: Select all
$ vgdisplay
--- Volume group ---
VG Name vg290
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 15
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2,72 TiB
PE Size 4,00 MiB
Total PE 712966
Alloc PE / Size 712966 / 2,72 TiB
Free PE / Size 0 / 0
VG UUID ntiXEb-nXWZ-2hY7-cQLX-yHDg-hqRi-mXHWAz
$ lvdisplay
--- Logical volume ---
LV Path /dev/vg290/lv547
LV Name lv547
VG Name vg290
LV UUID P0PakB-I7XU-OYzG-DQYw-nQNE-YJNO-oXYKuc
LV Write Access read/write
LV Creation host, time NASBox, 2019-01-05 12:00:16 +0100
LV Status available
# open 0
LV Size 27,90 GiB
Current LE 7142
Segments 2
Allocation inherit
Read ahead sectors 8192
Block device 252:0
--- Logical volume ---
LV Path /dev/vg290/lv6
LV Name lv6
VG Name vg290
LV UUID r8FDzu-7qwf-eSFi-Vq1c-e6dm-8Vqf-UGviWj
LV Write Access read/write
LV Creation host, time NASBox, 2019-01-05 12:00:21 +0100
LV Status available
# open 0
LV Size 2,69 TiB
Current LE 705824
Segments 1
Allocation inherit
Read ahead sectors 8192
Block device 252:1
- You are almost done; you just have to mount the logical volume (here /dev/vg290/lv6 is the candidate, given its size)
Code: Select all
$ mkdir -p /mnt/anywhere
$ mount /dev/vg290/lv6 /mnt/anywhere
$ ls /mnt/anywhere
You should find, under /mnt/anywhere, the data that was previously located on the QNAP physical drives, in one or more shared folders.
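For safety, especially if you suspect any filesystem damage, it may be worth mounting read-only so the recovery itself cannot modify the volume, and tearing the stack down cleanly when you are done; a sketch, with the device names taken from the steps above:

```shell
# Mount read-only: the recovery cannot write to the volume
mount -o ro /dev/vg290/lv6 /mnt/anywhere

# ... copy your data out ...

# Clean teardown, in reverse order of assembly
umount /mnt/anywhere
vgchange -an vg290       # deactivate the volume group
mdadm --stop /dev/md100  # stop the RAID array
```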
Hope this may help anyone who is desperately looking for a way to recover data from a failed NAS. Any feedback would be appreciated, particularly if you have units for which this procedure does not work.
I should add that none of these operations can be done as an external volume on a QNAP NAS. You will have to find a regular Linux system to do the job (you can just boot a laptop from a Linux USB key and plug your drive in using a dock station).
And finally, since this procedure is not documented by QNAP and relies heavily on QNAP design choices that can change at any moment between firmware upgrades, I have no guarantee that what works TODAY will still work the day I need it. Not a consumer-friendly attitude.