Please help with Pool Error


Please help with Pool Error

Post by ozdarchangel »

Hi guys,

I currently have a "POOL ERROR" message on the LCD of my TVS-671 after an unplanned power outage (some idiot pulled the cable). I can't access the GUI (via browser or the Qmanager app), nor can I access the shares I had set up.

I followed the commands from a similar post fairly closely (an older thread - viewtopic.php?t=142179), though I don't fully understand all of them; the results are below. Can anyone please assist further? Running md_checker after init_lvm shows a status of ONLINE (instead of OFFLINE at the start), but I still can't access the GUI or any of the shares. Thanks in advance for any help you can provide.
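
For reference, these are roughly the commands I ran over SSH, in this order; their outputs are pasted below:

# check the RAID superblocks of the data array
md_checker
# summary of disks, pools and volumes
qcli_storage
qcli_storage -d
# raw kernel view of the md arrays
cat /proc/mdstat
# kernel / firmware build
uname -a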

Initial result from md_checker:
Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID: e0b51acd:d885f26e:541c9aee:8e3a2bcb
Level: raid6
Devices: 6
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Dec 22 03:48:22 2015
Status: OFFLINE
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sda3 0 Active Jan 24 07:33:51 2022 5246 AAAAAA
NAS_HOST 2 /dev/sdb3 1 Active Jan 24 07:33:51 2022 5246 AAAAAA
NAS_HOST 3 /dev/sdc3 2 Active Jan 24 07:33:51 2022 5246 AAAAAA
NAS_HOST 4 /dev/sdd3 3 Active Jan 24 07:33:51 2022 5246 AAAAAA
NAS_HOST 5 /dev/sde3 4 Active Jan 24 07:33:51 2022 5246 AAAAAA
NAS_HOST 6 /dev/sdf3 5 Active Jan 24 07:33:51 2022 5246 AAAAAA
===============================================================================================
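
All six members show the same event count (5246) and an AAAAAA array state, so the individual superblocks look consistent even though the array as a whole is flagged OFFLINE. For anyone wanting to double-check that with plain mdadm rather than the QNAP md_checker wrapper, something like this should show the same fields per member (read-only inspection only):

# read each member's superblock and pull out the fields md_checker summarises
mdadm --examine /dev/sd[a-f]3 | grep -E 'Events|Array State|Device Role'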

Result from qcli_storage:
Enclosure Port Sys_Name Size Type RAID RAID_Type Pool TMeta VolType VolName
NAS_HOST 1 /dev/sda 2.73 TB free -- -- -- -- -- --
NAS_HOST 2 /dev/sdb 3.64 TB free -- -- -- -- -- --
NAS_HOST 3 /dev/sdc 3.64 TB free -- -- -- -- -- --
NAS_HOST 4 /dev/sdd 2.73 TB free -- -- -- -- -- --
NAS_HOST 5 /dev/sde 2.73 TB free -- -- -- -- -- --
NAS_HOST 6 /dev/sdf 2.73 TB free -- -- -- -- -- --
md13 mount failed!
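
I'm not sure how significant the "md13 mount failed!" line is. As far as I can tell, md13 is one of the small QNAP system arrays (firmware/apps area), not the data array, and it does show as active in /proc/mdstat further down. If anyone wants to look at it directly, a read-only check would be something like:

# inspect the md13 system array and see whether anything is mounted from it
mdadm --detail /dev/md13
mount | grep md13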

Result from qcli_storage -d:
Enclosure Port Sys_Name Type Size Alias Signature Partitions Model
NAS_HOST 1 /dev/sda HDD:free 2.73 TB -- QNAP FLEX 5 WDC WD30EFRX-68EUZN0
NAS_HOST 2 /dev/sdb HDD:free 3.64 TB -- QNAP FLEX 5 WDC WD40EFRX-68N32N0
NAS_HOST 3 /dev/sdc HDD:free 3.64 TB -- QNAP FLEX 5 WDC WD40EFRX-68N32N0
NAS_HOST 4 /dev/sdd HDD:free 2.73 TB -- QNAP FLEX 5 WDC WD30EFRX-68EUZN0
NAS_HOST 5 /dev/sde HDD:free 2.73 TB -- QNAP FLEX 5 WDC WD30EFRX-68EUZN0
NAS_HOST 6 /dev/sdf HDD:free 2.73 TB -- QNAP FLEX 5 WDC WD30EFRX-68EUZN0

Result from cat /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sdf5[5](S) sde5[4](S) sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdf2[5](S) sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdf4[5] sde4[4] sdd4[3] sdc4[25] sdb4[24]
458880 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[0] sdf1[5] sde1[4] sdd1[3] sdc1[25] sdb1[24]
530048 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
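
Note that the data array (md1) is missing from that list entirely; only the small system arrays are assembled. From what I've read, it could also be brought back by hand with plain mdadm instead of the QNAP init script, roughly as below. I didn't do this myself; I went with init_lvm.sh as in the other thread:

# hypothetical manual reassembly of the data array from its six members
# (not what I actually ran; init_lvm.sh below did the assembly for me)
mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3
cat /proc/mdstat   # md1 should come up as [6/6] [UUUUUU]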

Result from uname -a:
Linux NASF225E0 4.14.24-qnap #1 SMP Fri Dec 6 16:28:01 CST 2019 x86_64 GNU/Linux

My output looked practically identical to that in the thread, so I ran /etc/init.d/init_lvm.sh:
Changing old config name...
mv: can't rename '/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(8, 64)...
dev_count ++ = 3Detect disk(8, 32)...
dev_count ++ = 4Detect disk(8, 0)...
dev_count ++ = 5Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(8, 64)...
Detect disk(8, 32)...
Detect disk(8, 0)...
sys_startup_p2:got called count = -1
LV Status NOT available
Done
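
The "LV Status NOT available" line worried me a bit. The standard LVM tools seem to be present on the NAS, so if anyone wants to check that side by hand after the script, something like the following should show whether the pool's volume group and logical volumes were detected and activated (vg1 is just the usual QNAP naming, so treat the name as an assumption and go by what the tools actually report):

pvs         # physical volumes - /dev/md1 should be listed here
vgs         # volume groups    - typically a single vg1 on a one-pool setup
lvs -a      # logical volumes  - the data LV plus the thin-pool internals
# if the LVs are listed but inactive, activation would be roughly:
vgchange -ay vg1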

Then re-ran md_checker:
Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID: e0b51acd:d885f26e:541c9aee:8e3a2bcb
Level: raid6
Devices: 6
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Dec 22 03:48:22 2015
Status: ONLINE (md1) [UUUUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sda3 0 Active Jan 24 14:52:47 2022 5246 AAAAAA
NAS_HOST 2 /dev/sdb3 1 Active Jan 24 14:52:47 2022 5246 AAAAAA
NAS_HOST 3 /dev/sdc3 2 Active Jan 24 14:52:47 2022 5246 AAAAAA
NAS_HOST 4 /dev/sdd3 3 Active Jan 24 14:52:47 2022 5246 AAAAAA
NAS_HOST 5 /dev/sde3 4 Active Jan 24 14:52:47 2022 5246 AAAAAA
NAS_HOST 6 /dev/sdf3 5 Active Jan 24 14:52:47 2022 5246 AAAAAA

Re-ran cat /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid6 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[7] sdb3[6]
11681246464 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md322 : active raid1 sdf5[5](S) sde5[4](S) sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdf2[5](S) sde2[4](S) sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdf4[5] sde4[4] sdd4[3] sdc4[25] sdb4[24]
458880 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[0] sdf1[5] sde1[4] sdd1[3] sdc1[25] sdb1[24]
530048 blocks super 1.0 [24/6] [UUUUUU__________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
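
So md1 is back and clean, but the GUI and shares were still unreachable, which I take to mean the filesystem on the pool never got remounted. A cautious way to check (and, if needed, to mount it read-only just to reach the data) would be roughly the following; the device name under /dev/mapper and the mount point are assumptions based on the usual QNAP layout, so go by what lvs actually reports:

lvs -a                      # find the data LV (often vg1/lv1 on QNAP)
mount | grep -i cachedev    # QTS normally mounts the data volume via a cachedev mapper device
# hypothetical read-only mount straight from the LV, purely for data access:
mkdir -p /mnt/recovery
mount -o ro /dev/mapper/vg1-lv1 /mnt/recovery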

Re: Please help with Pool Error

Post by ozdarchangel »

FWIW, after I got in touch with QNAP support, they confirmed the volume was mounted, told me to use WinSCP to copy all my data off the NAS, and then upgrade the firmware.

No guarantee the firmware upgrade will return me to a fully operational state, but at least I should be able to access the GUI again and run diagnostics.

It'll take about another five days, though: WinSCP is slow as anything, copying at around 1 MB/s for large files even over a 1 Gbps LAN, and I have about another TB to go. The data is intact at least; that's the main thing.
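
For anyone else stuck doing the same copy: the bottleneck seems to be WinSCP's SFTP overhead rather than the NAS or the network. An alternative I've seen suggested (not what QNAP support told me to use, so treat it as a sketch) is rsync over SSH from a Linux box or WSL, which usually gets much closer to wire speed:

# hypothetical pull of the data to a local folder; adjust the user, IP and
# source path (the /share/CACHEDEV1_DATA path is an assumption about the layout)
rsync -avh --progress admin@192.168.1.50:/share/CACHEDEV1_DATA/ /local/backup/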