Recovered RAID1 drives from dead TS453 min: Cannot mount
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Would still like to manually recover the files. I hooked the drives up to a PC running Linux, installed mdadm and lvm2, and:
1-mdadm has assembled my RAID1 arrays, and I can see md123-md127 (see below for lsblk output)
2-The LVM manager shows my LVM volume on /dev/md125
{output of pvs}
[sudo] password for steve:
WARNING: Unrecognised segment type thick
WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
PV VG Fmt Attr PSize PFree
/dev/md1 vg1 lvm2 a-- <3.63t 0
*** Are these warnings anything to be concerned about?
3-I seem to have three volumes; lvdisplay shows them (see below for the full output):
lv544 : available under /dev/vg1/lv544
tp1 : not available: why???
lv1 : not available: why???
4-I tried to mount lv544, but it failed:
sudo mount /dev/vg1/lv544 ./mt1
mount: /home/steve/mt1: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-lv544, missing codepage or helper program, or other error.
-Are the warnings from the LVM tools any concern?
-Why are the second two LVM volumes "not available"?
-Why can't I mount the "available" LVM volume? (See the activation sketch after the lsblk output.)
Any help greatly appreciated
{output of lvdisplay}
steve@XPS-8300-Desktop:~$ sudo lvdisplay
WARNING: Unrecognised segment type thick
WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
--- Logical volume ---
LV Path /dev/vg1/lv544
LV Name lv544
VG Name vg1
LV UUID Mr8Ykd-3VeA-I9DI-S17l-S2ew-bj6u-wlnvK2
LV Write Access read/write
LV Creation host, time NASEA7857, 2015-12-18 22:55:40 -0500
LV Status available
# open 0
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors 8192
Block device 253:0
--- Logical volume ---
LV Name tp1
VG Name vg1
LV UUID 2AVhgl-GzUQ-J0RU-KuJL-wt00-SroE-mkKZOY
LV Write Access read/write
LV Creation host, time NASEA7857, 2015-12-18 22:55:49 -0500
LV Pool metadata tp1_tmeta
LV Pool data tp1_tdata
LV Status NOT available
LV Size 3.59 TiB
Current LE 942215
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/vg1/lv1
LV Name lv1
VG Name vg1
LV UUID BoKAYK-71zH-w3zk-dpoh-JiiS-VfvG-up8d1V
LV Write Access read/write
LV Creation host, time NASEA7857, 2015-12-18 22:55:50 -0500
LV Status NOT available
LV Size 3.59 TiB
Current LE 941959
Segments 1
Allocation inherit
Read ahead sectors 8192
{LSBLK output}
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 4K 1 loop /snap/bare/5
loop1 7:1 0 61.9M 1 loop /snap/core20/1405
loop2 7:2 0 133.2M 1 loop /snap/chromium/2020
loop3 7:3 0 55.5M 1 loop /snap/core18/2409
loop4 7:4 0 155.6M 1 loop /snap/firefox/1232
loop5 7:5 0 162.3M 1 loop /snap/firefox/1443
loop6 7:6 0 220.5M 1 loop /snap/code/99
loop7 7:7 0 254.1M 1 loop /snap/gnome-3-38-2004/106
loop8 7:8 0 113.9M 1 loop /snap/core/13308
loop9 7:9 0 248.8M 1 loop /snap/gnome-3-38-2004/99
loop10 7:10 0 61.9M 1 loop /snap/core20/1518
loop11 7:11 0 81.3M 1 loop /snap/gtk-common-themes/1534
loop12 7:12 0 176.9M 1 loop /snap/krita/64
loop13 7:13 0 260.7M 1 loop /snap/kde-frameworks-5-core18/32
loop14 7:14 0 45.9M 1 loop /snap/snap-store/575
loop15 7:15 0 43.6M 1 loop /snap/snapd/15177
loop16 7:16 0 47M 1 loop /snap/snapd/16010
loop17 7:17 0 284K 1 loop /snap/snapd-desktop-integration/10
loop18 7:18 0 284K 1 loop /snap/snapd-desktop-integration/14
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/efi
└─sda3 8:3 0 931G 0 part /
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sdb2 8:18 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1
├─sdb3 8:19 0 3.6T 0 part
│ └─md1 9:1 0 3.6T 0 raid1
│ └─vg1-lv544 253:0 0 20G 0 lvm
├─sdb4 8:20 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1
└─sdb5 8:21 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 517.7M 0 part
│ └─md9 9:9 0 517.6M 0 raid1
├─sdd2 8:50 0 517.7M 0 part
│ └─md256 9:256 0 517.7M 0 raid1
├─sdd3 8:51 0 3.6T 0 part
│ └─md1 9:1 0 3.6T 0 raid1
│ └─vg1-lv544 253:0 0 20G 0 lvm
├─sdd4 8:52 0 517.7M 0 part
│ └─md13 9:13 0 448.1M 0 raid1
└─sdd5 8:53 0 8G 0 part
└─md322 9:322 0 6.9G 0 raid1
sde 8:64 1 0B 0 disk
sdf 8:80 1 0B 0 disk
sdg 8:96 1 0B 0 disk
sdh 8:112 1 0B 0 disk
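A note for anyone attempting the same recovery: the "Unrecognised segment type thick" warning comes from QNAP shipping a patched LVM whose extra segment types stock Linux tools don't recognise, and the "old PV header" warning is harmless on its own. lv544 appears to be QNAP's small reserved system volume rather than the data volume; the user data is most likely lv1, a thin volume inside pool tp1, which would explain both the failed mount of lv544 and why tp1 and lv1 must be activated before anything can be mounted. A minimal sketch of an activation attempt, assuming a Debian/Ubuntu recovery PC and that the pool behaves as a standard LVM thin pool despite QNAP's metadata (untested against that metadata, so it may still fail with the same "thick" warning):

Code: Select all

# Thin pools need thin_check from thin-provisioning-tools; without it,
# vgchange/lvchange refuse to activate the pool (package name assumed
# for Debian/Ubuntu).
sudo apt install thin-provisioning-tools

# Try to activate every LV in vg1; if it works, lv1 appears as /dev/vg1/lv1.
sudo vgchange -ay vg1

# Mount read-only first so the attempt cannot modify the data.
mkdir -p ./mt1
sudo mount -o ro /dev/vg1/lv1 ./mt1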
- dolbyman
- Guru
- Posts: 35231
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
There should be several RAID arrays on each drive:
md1 = Data
md9 = Configuration, update
md13 = Web interface engine and other OS extensions
md256 = Swap
md322 = Swap
By choosing recovery without a new QNAP device, you chose the most difficult path; search for the complicated ways to recover your data (there are plenty of forum threads here).
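For reference, the layout described above can be confirmed from any Linux shell with plain mdadm; this is a read-only check and nothing about it is QNAP-specific:

Code: Select all

# Print the md superblock of the data partition (partition 3 on each disk).
sudo mdadm --examine /dev/sdb3

# List every md array mdadm can find across all visible partitions.
sudo mdadm --examine --scan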
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Right... I'm by no means an IT expert, but it seems what QNAP has done here is use LVM with some proprietary "special sauce", making it impossible for regular users like myself to recover their data. I'm sure their motivation here is lock-in and making money, but this scheme makes their equipment LESS reliable than an ad-hoc network made from an old computer using stock filesystems without this LVM trash, and this practice should be illegal IMO.
That, in addition to using junk processors which randomly break down.
So hopefully you can understand my hesitancy to give this evil company another cent of my money, but in the end I'll probably do that.
Last edited by sgordon777 on Thu Jun 23, 2022 5:32 am, edited 1 time in total.
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
So, against my better judgement, I forked out $600 and bought a new one, which was supposedly "migratable" with the TS453 mini...
Booted it up; it then tells me the volume is corrupted and has a red "Error" status, so I guess I'm f*cked and my data is lost. The drives themselves are in perfect shape, not a single error in SMART tests.
Sorry, but using the fragile POS "LVM", which is apparently super-easy to corrupt, is just idiotic, and relying on it is stupid @#$#$^%$#$@#$@#$
- dolbyman
- Guru
- Posts: 35231
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
What unit did you buy?
Swearing will not help at all.
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
TS-453D-4G
I tried both drives together and each separately, with the same results: the drives are detected and show up as healthy, but the volume and pool both show "Error" in red.
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Ok, I think I may have made a big mistake in doing the migration... Here's what happened:
1-I bought the new NAS + 2 new 4TB drives. I have two 4TB drives from the old system.
2-At first, I plugged only the two old drives into slots 1 & 2. I'm pretty sure I saw the old system come up, but I didn't back up then... (in hindsight I should have; kicking myself now)
3-Next, I took the two old 4TB drives out, put the two new 4TB drives in slots 1 & 2, formatted the two new drives with the new system, and gave the system a new network name.
4-Next, I inserted the two old drives in slots 3 & 4 (keeping the new drives in slots 1 & 2), hoping to do an in-system copy... Nothing happened, so I shut down the system. **This is where I think things went terribly wrong**
5-Next, I took all the drives out and put the old drives back into slots 1 & 2, hoping I could be back at step #2. But I find that the new "system" (new network name, GUI, OS) has now been put onto the old drives, probably in step #4, which is probably why the LVM isn't mounting.
So I think in step #2 I should have backed up the data from one of my networked PCs.
Am I screwed? Is there any way to recover the LVM from the old drives when the new system got copied to them?
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Here's the info you requested above:
[steve_admin@NAS7771 ~]$
[steve_admin@NAS7771 ~]$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sda4[129] sdb4[128]
458880 blocks super 1.0 [128/2] [U_U_____________________________________________________________________________________________________________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sda1[129] sdb1[128]
530048 blocks super 1.0 [128/2] [U_U_____________________________________________________________________________________________________________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
[steve_admin@NAS7771 ~]$ sudo md_checker
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
Password:
Welcome to MD superblock checker (v2.0) - have a nice day~
Scanning system...
RAID metadata found!
UUID: 0f780a2a:b06428f4:2b5d0700:def9bbce
Level: raid1
Devices: 2
Name: md1
Chunk Size: -
md Version: 1.0
Creation Time: Dec 18 16:55:39 2015
Status: OFFLINE
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sda3 0 Active Jun 24 01:45:34 2022 6993195 AA
NAS_HOST 2 /dev/sdb3 1 Active Jun 24 01:45:34 2022 6993195 AA
===============================================================================================
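Worth noting: md_checker reports md1 as OFFLINE even though both members are Active with matching event counts, which usually just means the array has not been assembled. A speculative next step over SSH on the NAS, assembling only (no rebuild, nothing written to the data area):

Code: Select all

# md_checker says md1's members are sda3 and sdb3; try assembling them.
sudo mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3

# If assembly succeeds, rescan LVM and try to activate the volume group.
sudo vgscan
sudo vgchange -ay vg1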
- dolbyman
- Guru
- Posts: 35231
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Did you upgrade the NAS diskless to the latest firmware first (before inserting the drives)?
Try:
Code: Select all
mdadm --assemble --scan
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
I've now tried this configuration:
slot 1: new 4TB #1 - new pool
slot 2: new 4TB #2 - new pool
slot 3: old 4TB #1 - drive detected, want to re-assemble old pool
slot 4: old 4TB #2 - drive detected, want to re-assemble old pool
This configuration detects the new pool and volumes on the new drives in slots 1 & 2, and it also detects the old drives in 3 & 4, but it isn't re-assembling the old pool... **Is there a way I can tell the system (preferably through the GUI) to re-assemble the pool on the drives in slots 3 & 4? (see the sketch below)**
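There does not appear to be a GUI control for this; if it can be done at all, it would likely be over SSH. A hypothetical read-mostly check, where the device names sdc and sdd for the old drives in slots 3 & 4 are an assumption to verify first:

Code: Select all

# ASSUMPTION: the old drives enumerated as sdc/sdd -- confirm with lsblk or
# md_checker before running anything.
sudo mdadm --examine /dev/sdc3 /dev/sdd3

# Assemble the old data array under an unused md name so it cannot collide
# with the new pool's arrays.
sudo mdadm --assemble /dev/md2 /dev/sdc3 /dev/sdd3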
- dolbyman
- Guru
- Posts: 35231
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
You cannot do a migration with new drives, only with the old drives.
Best to open a ticket now.
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
The computer gods have finally shown me some mercy... I found a backup from 2021 that I'd forgotten about.
- sgordon777
- Starting out
- Posts: 21
- Joined: Mon Dec 21, 2015 2:25 pm
Re: Recovered RAID1 drives from dead TS453 min: Cannot mount
Thanks to everybody who helped, especially dolbyman; I appreciate your patience. I don't do this IT stuff much, but I've learned quite a bit from the ordeal, mostly to make regular backups.