Raid 1: Volume/storage pool is unmounted / shows error


Post by charly_k »

I have a TS-228 and recently upgraded to firmware 4.3.6.1831.
Yesterday it indicated some problem with a flashing red light, so I shut it down manually, only to find that after powering it on again it had been reset to factory settings.
I could restore the settings from a backup file (interestingly, the latest backup did not work, but the one before it did).
But the data were not accessible. I use a RAID 1 with two disks. The Storage Manager displayed "Legacy Volume: unmounted" and "Static Volume DataVol1: Error".

I have almost no Linux knowledge, but after reading through some of the posts here I connected to the NAS via SSH and tried

[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 0)...
Detect disk(8, 16)...
sys_startup_p2:got called count = -1
Done
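
For anyone following along: a quick way to see what init_lvm.sh actually brought back is to look at the md layer and the mounts directly. This is just a sketch using the device names from my box (adjust to yours):

cat /proc/mdstat           # which md arrays the kernel has assembled
mdadm --detail /dev/md1    # both mirror members should show up as "active sync"
df -h                      # is the data volume actually mounted?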


After that, all shares on 'DataVol1' were visible again in the Storage Manager. But the volume still had the status "Ready (Checking Filesystem); Alarm: Deactivated", and checking the file system produced the error "Failed to check file system. Volume: DataVol1, Volume could not be unmounted".
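
As far as I understand, the GUI check first unmounts the volume and then runs e2fsck on it, so the manual equivalent would be roughly the sketch below (paths as on my box, and assuming the filesystem really sits directly on /dev/md1, which, as it turned out further down in this thread, may not be the whole story):

umount /share/MD1_DATA                  # the check cannot run while the volume is mounted
e2fsck -f /dev/md1                      # force a full filesystem check
mount -t ext4 /dev/md1 /share/MD1_DATA  # remount afterwards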

So I rebooted and was back at the initial status "Legacy Volume: unmounted" and "Static Volume DataVol1: Error".

My next try was along the lines of viewtopic.php?p=224731, and that was probably a mistake. After

[~] # swapoff /dev/md1
swapoff: /dev/md1: Invalid argument
[~] # swapoff -a
[~] # mdadm -S /dev/md1
mdadm: stopped /dev/md1
[~] # mkswap /dev/sda2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=c4d7f6ea-99c2-4a96-af7d-3e0d240a318b
[~] # mkswap /dev/sdb2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=07cd3493-3bbf-4e31-abcc-16d5704c9826
[~] # swapon /dev/sda2
swapon: /dev/sda2: Invalid argument
[~] # swapon -a
[~] # e2fsck -f /dev/md0
e2fsck 1.42.13 (17-May-2015)
e2fsck: No such file or directory while trying to open /dev/md0
Possibly non-existent device?
[~] # e2fsck -f /dev/md1
e2fsck 1.42.13 (17-May-2015)
e2fsck: No such file or directory while trying to open /dev/md1
Possibly non-existent device?
[~] # mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[~] # mdadm --detail /dev/md1
mdadm: cannot open /dev/md1: No such file or directory


(note: /dev/sda3 and /dev/sdb3 are the partitions used in my RAID 1, but here I used /dev/sda2 and /dev/sdb2 because that is what the linked post said; see the partition-check sketch a bit further below) and another

[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 0)...
Detect disk(8, 16)...
sys_startup_p2:got called count = -1
Done


the RAID 1 still exists:

[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID: 9dab847e:5472491d:865f367b:2e9e2a87
Level: raid1
Devices: 2
Name: md1
Chunk Size: -
md Version: 1.0
Creation Time: Dec 5 20:41:19 2020
Status: ONLINE (md1) [UU]
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
1 /dev/sda3 0 Active Nov 20 15:37:40 2021 30292 AA
2 /dev/sdb3 1 Active Nov 20 15:37:40 2021 30292 AA


but now "DataVol1" has disappeared in the storage manager, only the "Legacy Volume: unmounted" is visible.

Moreover, it seems the partition table of md1 is invalid. I collected the following information:

[~] # mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Sat Dec 5 20:41:19 2020
Raid Level : raid1
Array Size : 1943559616 (1853.52 GiB 1990.21 GB)
Used Dev Size : 1943559616 (1853.52 GiB 1990.21 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Sat Nov 20 11:43:48 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : 1
UUID : 9dab847e:5472491d:865f367b:2e9e2a87
Events : 30292

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
2 8 19 1 active sync /dev/sdb3

[~] # e2fsck -f /dev/md1
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/md1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>

[~] # mount /dev/md1 /share/MD1_DATA/ -t ext4
mount: wrong fs type, bad option, bad superblock on /dev/md1,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

[~] # mke2fs -n /dev/md1
mke2fs 1.42.13 (17-May-2015)
/dev/md1 contains a lvm2pv file system
Proceed anyway? (y,n) y
Creating filesystem with 485889904 4k blocks and 30369792 inodes
Filesystem UUID: fa2f945b-2302-43c7-b79d-cd7214aae0ea
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848


Relevant output from dmesg, showing the swapoff/swapon actions, is:

[ 1161.820204] md1: detected capacity change from 1990205046784 to 0
[ 1161.820225] md: md1 stopped.
[ 1161.820240] md: unbind<sda3>
[ 1161.860565] md: export_rdev(sda3)
[ 1161.860605] md: unbind<sdb3>
[ 1161.900471] md: export_rdev(sdb3)
[ 2172.537280] [mux_irq_handle] irq(72) status is not change. clear it! (st:0x00 000008 en:0x0cb8dc38)
[ 2175.540150] [mux_irq_handle] irq(72) status is not change. clear it! (st:0x00 000008 en:0x0cb8dc38)
[ 2181.153116] md: md1 stopped.
[ 2181.171418] md: bind<sdb3>
[ 2181.171823] md: bind<sda3>
[ 2181.174700] md/raid1:md1: active with 2 out of 2 mirrors
[ 2181.174788] md1: detected capacity change from 0 to 1990205046784
[ 2181.184858] md1: unknown partition table


I tried e2fsck with the -b option and some of the backup superblock locations listed above, but always got the same "superblock invalid..." result as before.
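
Looking back, two details in the output above probably explain why every e2fsck attempt on /dev/md1 failed: the dmesg line "md1: unknown partition table" is harmless (an md device used directly simply has no partition table), and mke2fs -n reported that /dev/md1 "contains a lvm2pv file system", so the data filesystem apparently lives inside LVM on top of the RAID rather than directly on /dev/md1. In that case the check would have to run on the logical volume instead. A sketch of what one could look at (assuming the LVM tools are available in the shell; <vg>/<lv> are placeholders):

pvdisplay /dev/md1        # is md1 really an LVM physical volume?
vgscan                    # search for the volume group on top of it
vgchange -ay              # activate any volume groups that were found
lvdisplay                 # list the logical volumes; the data filesystem should sit on one of them
# e2fsck -f /dev/<vg>/<lv>   # then check the filesystem on the logical volume, not on /dev/md1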

Is there anyone who can help me to recover this DataVolume?
Last edited by charly_k on Mon Nov 22, 2021 12:37 am, edited 2 times in total.

Re: [SOLVED] Raid 1: Volume/storage pool is unmounted / shows error

Post by charly_k »

After a reboot I was back in the original error state of the system: "Legacy Volume: unmounted" and "Static Volume DataVol1: Error"

Here is how I solved the problem:

Using PuTTY, I connected to the NAS via SSH and entered


/etc/init.d/init_lvm.sh
This resulted in the output

[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: unable to rename `/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 0)...
Detect disk(8, 16)...
sys_startup_p2:got called count = -1
Done

So, the same as the day before. Again, this brought "DataVol1" back to the status "Ready" in the Storage Manager. I started "Check File System" in the Storage Manager. It took a while, but this time it finished successfully at 100%. After that, all share folders were accessible in the File Manager again and I could make a fresh backup of the data.
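
If you only have shell access at this point, a backup over the command line works as well. Just a sketch (assuming rsync is available on the box; the target path is only an example for an external USB disk, yours will be different):

df -h                                                            # find the mount points of the data volume and the USB disk
rsync -av /share/MD1_DATA/ /share/external/<usb_share>/backup/   # example target path only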

The extra "Legacy Volume: unloaded/unmounted" still existed, however, and after a restart of the NAS the original error state was back.

So I saw no other way than to choose "Initialize NAS" in the restore menu, which deletes all data and volumes (and after the reboot it also formatted the two disks!), then create a new volume "DataVol1" with all the previous share folders and reload all the data from the backup. Possibly there is a way to keep the original data by initializing (and formatting) with only one disk inserted (the NAS requires at least one) and later adding the disk with the existing data, but I was too tired to keep trying...

So my summary: running /etc/init.d/init_lvm.sh helped me get back into a state where I could save the data. And you have to be stubborn about having the file system checked, even if it does not succeed the first time.
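
Condensed into steps (no guarantee this applies to other models or firmware versions):

/etc/init.d/init_lvm.sh   # re-initialise the volume layer; afterwards DataVol1 showed up as "Ready"
# then in the web UI: Storage Manager -> Check File System, repeat until it finishes at 100%
# then take a fresh backup of everything before rebooting or re-initialising the NAS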

Re: Raid 1: Volume/storage pool is unmounted / shows error

Post by tiziano.sartori »

Thank you charly_k, I got my data back with "/etc/init.d/init_lvm.sh" and then reset my QNAP to factory defaults!

Re: Raid 1: Volume/storage pool is unmounted / shows error

Post by SNoof »

I had to register just to say: THANK YOU SO MUCH, charly_k!

You saved my data!