After Firmware Update Volume gone

Questions about SNMP, Power, System, Logs, disk, & RAID.
alain.
New here
Posts: 2
Joined: Wed May 11, 2022 6:30 am

After Firmware Update Volume gone

Post by alain. »

Hi guys

After a firmware update to 5.0.0.1986 on a TS-851, my 16 TB storage pool is still visible in the web GUI, but the volume on it is gone. Shared folders therefore point to nothing: no SMB, no containers, no nothing.
A brief cardiac arrest later: the RAID 6 thick volume is mountable by hand if I also create the mount-point directory, and that gives me SMB back. SMART is green on all disks, and a filesystem check comes back clean. QNAP does not use fstab the way I'm used to, so after a reboot all of this has to be repeated.
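
For reference, here is roughly what "mounting by hand" looks like. This is only a sketch assuming the usual QTS layout, where the thick volume is an LVM logical volume on top of /dev/md1 exposed through a device-mapper node; the names vg1 and cachedev1 are assumptions based on typical QTS naming (my shares live under /share/CACHEDEV1_DATA), so take the real names from lvscan/pvs on your own box.

Code:

# check that the RAID 6 array is assembled and clean
mdadm --detail /dev/md1
# find the volume group / logical volume that carries the data
pvs
lvscan
# activate the volume group (vg1 is an assumption; use the name lvscan reports)
vgchange -ay vg1
# recreate the mount-point directory and mount the data volume
mkdir -p /share/CACHEDEV1_DATA
mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA
# or mount the logical volume directly, e.g. /dev/vg1/lv1, if there is no cachedev node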

You guys in this forum have solved all my issues in the past year, so I looked through everything from "site:forum.qnap.com volume gone". No direct solutions...
Best fit among many:
viewtopic.php?t=138492

Obviously I also opened a ticket with QNAP.

I tried, to no avail:
sh init_lvm.sh
[/etc/init.d] # ./mountall
adding the mount to /etc/fstab (which doesn't survive a reboot, since the root filesystem is a ramdisk; see the autorun sketch below)
services.sh restart (after mounting by hand)
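
Since / on QTS is a tmpfs ramdisk (visible in the mount output further down), anything written to /etc/fstab is rebuilt at boot. The supported hook for running commands at startup is autorun.sh. A sketch of how that could carry the manual mount, assuming the TS-851 keeps autorun.sh on the sixth partition of its USB DOM (/dev/sdh6 here; the location varies by model, and newer QTS versions additionally require enabling autorun.sh under Control Panel > Hardware):

Code:

# mount the DOM partition that carries autorun.sh (/dev/sdh6 is an assumption for this model)
mkdir -p /tmp/config
mount /dev/sdh6 /tmp/config
# append the manual mount so it runs on every boot (names as in the sketch above)
cat >> /tmp/config/autorun.sh <<'EOF'
vgchange -ay vg1
mkdir -p /share/CACHEDEV1_DATA
mount -t ext4 /dev/mapper/cachedev1 /share/CACHEDEV1_DATA
EOF
chmod +x /tmp/config/autorun.sh
umount /tmp/config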

QNAP support asked me to:
update the same firmware again via Qfinder
umount /dev/sdh5
e2fsck_64 -fp /dev/sdh5
e2fsck_64 -fp /dev/sdh6
Then I was asked to disable the cache on the volumes, which does not work since I have no volumes; that suggestion did irritate me.
Then he walked me through my own suggestion of pulling in old config files from .@backup_config (uLinux.conf, smb.conf, qpkg.conf), which did nothing besides adding some errors in the web GUI.
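
Roughly what that restore attempt looked like, as a sketch (I'm not certain the backup directory sits in the same place on every box, so locate .@backup_config first):

Code:

# find the config backup directory (path varies; search for it)
find /mnt/HDA_ROOT /share -maxdepth 3 -name '.@backup_config' 2>/dev/null
# copy the old configs over the live ones (BACKUP is whatever find returned)
BACKUP=/mnt/HDA_ROOT/.@backup_config   # assumption; use your find result
cp "$BACKUP"/uLinux.conf /etc/config/uLinux.conf
cp "$BACKUP"/smb.conf    /etc/config/smb.conf
cp "$BACKUP"/qpkg.conf   /etc/config/qpkg.conf
# restart services so the files get picked up
/etc/init.d/services.sh restart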
Then he suggested that I buy a second QNAP and some new HDDs to copy my 16 TB out, reset the box, and copy everything back ;)

So far zero progress. I'm aware of my options for replicating the data back in, but I would rather understand QNAP volumes: where are the config files, what does the mounting, what creates the volumes? Everything is still there; I would rather just put the pointer back on it and be done.

Does downgrading the firmware make any sense?

Is there a paid QNAP support option? I've been working on this with QNAP support for 10 days now.

Any suggestions appreciated.

PS:

The QNAP support guy didn't flinch at this; I'm a Windows guy.
[/etc/init.d] # sh init_lvm.sh
Changing old config name...
mv: can't rename '/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(8, 96)...
dev_count ++ = 3Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
dev_count ++ = 4Detect disk(8, 32)...
dev_count ++ = 5Detect disk(8, 0)...
dev_count ++ = 6Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(8, 96)...
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
Detect disk(8, 32)...
Detect disk(8, 0)...
Detect disk(8, 112)...
ignore non-root enclosure disk(8, 112).
sys_startup_p2:got called count = -1
Done

Also, I found a good system-info script and ran it. In addition to the output written to a text file, I got these two errors on stdout (the first matches /dev/sdf, which the report below shows with an unknown partition table):

Error: /dev/sdf: unrecognised disk label
 HDIO_GET_IDENTITY failed: Invalid argument



*********************
** QNAP NAS Report **
*********************
 
NAS Model:      TS-851
Firmware:       5.0.0 Build 20211221
System Name:    zhs2
Workgroup:      NAS
Base Directory: /share/CACHEDEV1_DATA
NAS IP address: 192.168.1.39
 
Default Gateway Device: eth0
 
          inet addr:192.168.1.39  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2244 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2318 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1042593 (1018.1 KiB)  TX bytes:1402085 (1.3 MiB)
          Memory:d0700000-d077ffff 

 
DNS Nameserver(s):127.0.1.1
 
 
HDD Information:
 
HDD1 - Model=WDC WD60EFRX-68MYMN1                    , FwRev=82.00A82, SerialNo=     WD-WX21DC4D58FT
 
Model: WDC WD60EFRX-68MYMN1 (scsi)
Disk /dev/sda: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  543MB   543MB   ext3            primary
 2      543MB   1086MB  543MB   linux-swap(v1)  primary
 3      1086MB  5992GB  5991GB                  primary
 4      5992GB  5993GB  543MB   ext3            primary
 5      5993GB  6001GB  8554MB  linux-swap(v1)  primary

 
 
HDD2 - Model=WDC WD60EFRX-68MYMN1                    , FwRev=82.00A82, SerialNo=     WD-WX11D153XKPF
 
Model: WDC WD60EFRX-68MYMN1 (scsi)
Disk /dev/sdb: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  543MB   543MB   ext3            primary
 2      543MB   1086MB  543MB                   primary
 3      1086MB  5992GB  5991GB                  primary
 4      5992GB  5993GB  543MB   ext3            primary
 5      5993GB  6001GB  8554MB  linux-swap(v1)  primary

 
 
HDD3 - Model=WDC WD60EFRX-68MYMN1                    , FwRev=82.00A82, SerialNo=     WD-WX11D84JN3VJ
 
Model: WDC WD60EFRX-68MYMN1 (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  543MB   543MB   ext3            primary
 2      543MB   1086MB  543MB   linux-swap(v1)  primary
 3      1086MB  5992GB  5991GB                  primary
 4      5992GB  5993GB  543MB   ext3            primary
 5      5993GB  6001GB  8554MB  linux-swap(v1)  primary

 
 
HDD4 - Model=WDC WD60EFRX-68MYMN1                    , FwRev=82.00A82, SerialNo=     WD-WX31D15FYKH6
 
Model: WDC WD60EFRX-68MYMN1 (scsi)
Disk /dev/sdd: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  543MB   543MB   ext3            primary
 2      543MB   1086MB  543MB   linux-swap(v1)  primary
 3      1086MB  5992GB  5991GB                  primary
 4      5992GB  5993GB  543MB   ext3            primary
 5      5993GB  6001GB  8554MB  linux-swap(v1)  primary

 
 
HDD5 - Model=OCZ-VERTEX3                             , FwRev=2.15    , SerialNo=OCZ-2SRH402Z5936Z0B9
 
Model: ATA OCZ-VERTEX3 (scsi)
Disk /dev/sde: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name     Flags
 1      20.5kB  543MB   543MB   ext3            primary
 2      543MB   1086MB  543MB                   primary
 3      1086MB  111GB   110GB                   primary
 4      111GB   111GB   543MB   ext3            primary
 5      111GB   120GB   8554MB  linux-swap(v1)  primary

 
 
HDD6 - Model=WDC WD60EFZX-68B3FN0                    , FwRev=81.00A81, SerialNo=         WD-CA0GZ6EK
 
Model: WDC WD60EFZX-68B3FN0 (scsi)
Disk /dev/sdf: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags: 
 
 
HDD7 - Model=WDC WD60EFAX-68JH4N1                    , FwRev=83.00A83, SerialNo=     WD-WX42D41H3LK5
 
Model: WDC WD60EFAX-68JH4N1 (scsi)
Disk /dev/sdg: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      20.5kB  543MB   543MB   ext3         primary
 2      543MB   1086MB  543MB                primary
 3      1086MB  5992GB  5991GB               primary
 4      5992GB  5993GB  543MB   ext3         primary
 5      5993GB  6001GB  8554MB               primary

 
Open device fail
 
HDD8 - 
Model:  USB DISK MODULE (scsi)
Disk /dev/sdh: 516MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type      File system  Flags
 1      4096B   5374kB  5370kB  primary   ext2
 2      5374kB  252MB   247MB   primary   ext2         boot
 3      252MB   498MB   247MB   primary   ext2
 4      498MB   516MB   17.4MB  extended
 5      498MB   507MB   8503kB  logical   ext2
 6      507MB   516MB   8897kB  logical   ext2

 
 
HDD9 -Drive absent
HDD10 -Drive absent
HDD11 -Drive absent
HDD12 -Drive absent
Volume Status
 
/dev/md1:
        Version : 1.0
  Creation Time : Wed Jul 22 16:58:29 2015
     Raid Level : raid6
     Array Size : 17551702848 (16738.61 GiB 17972.94 GB)
  Used Dev Size : 5850567616 (5579.54 GiB 5990.98 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Fri May 20 00:40:14 2022
          State : clean 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 1
           UUID : 2db0dbc2:2ae5fe5d:af739465:fa639ad2
         Events : 110435

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       35        1      active sync   /dev/sdc3
       2       8       19        2      active sync   /dev/sdb3
       4       8        3        3      active sync   /dev/sda3
       5       8       99        4      active sync   /dev/sdg3
 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md2 : active raid1 sde3[0]
      91175936 blocks super 1.0 [1/1] [U]
      
md1 : active raid6 sdd3[0] sdg3[5] sda3[4] sdb3[2] sdc3[1]
      17551702848 blocks super 1.0 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md322 : active raid1 sdg5[4](S) sda5[3](S) sdb5[2](S) sdc5[1] sdd5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdg2[4](S) sda2[3](S) sdb2[2](S) sdc2[1] sdd2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md321 : active raid1 sde5[0]
      8283712 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md13 : active raid1 sdd4[0] sdg4[28] sde4[27] sda4[26] sdb4[25] sdc4[24]
      458880 blocks super 1.0 [24/6] [UUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdd1[0] sdg1[28] sde1[27] sda1[26] sdb1[25] sdc1[24]
      530048 blocks super 1.0 [24/6] [UUUUUU__________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
 
Disk Space:
 
Filesystem                Size      Used Available Use% Mounted on
none                    400.0M    291.3M    108.7M  73% /
devtmpfs                  3.8G      4.0K      3.8G   0% /dev
tmpfs                    64.0M    312.0K     63.7M   0% /tmp
tmpfs                     3.8G    136.0K      3.8G   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
/dev/sdh5                 7.8M     28.0K      7.8M   0% /mnt/boot_config
tmpfs                    16.0M         0     16.0M   0% /mnt/snapshot/export
/dev/md9                493.5M    140.3M    353.1M  28% /mnt/HDA_ROOT
cgroup_root               3.8G         0      3.8G   0% /sys/fs/cgroup
/dev/md13               417.0M    401.0M     16.0M  96% /mnt/ext
tmpfs                    32.0M     27.2M      4.8M  85% /samba_third_party
/dev/ram2               433.9M      2.3M    431.6M   1% /mnt/update
tmpfs                    64.0M      3.1M     60.9M   5% /samba
tmpfs                    48.0M     56.0K     47.9M   0% /samba/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock
 
Mount Status:
 
none on /new_root type tmpfs (rw,mode=0755,size=409600k)
/proc on /proc type proc (rw)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
/dev/sdh5 on /mnt/boot_config type ext2 (rw)
tmpfs on /mnt/snapshot/export type tmpfs (rw,size=16M)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
cgroup_root on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/cgroup/memory type cgroup (rw,memory)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,cpu)
/dev/md13 on /mnt/ext type ext4 (rw,data=ordered,barrier=1,nodelalloc)
tmpfs on /samba_third_party type tmpfs (rw,size=32M)
/dev/ram2 on /mnt/update type ext2 (rw)
tmpfs on /samba type tmpfs (rw,size=64M)
tmpfs on /samba/.samba/lock/msg.lock type tmpfs (rw,size=48M)
tmpfs on /mnt/ext/opt/samba/private/msg.sock type tmpfs (rw,size=16M)
 
 
Memory Information:
 
             total       used       free     shared    buffers     cached
Mem:       8043688    1657264    6386424     333600      22912     841420
Swap:     16048948          0   16048948
 
NASReport completed on 2022-05-20 00:49:55 (/tmp/nasreport) 
alain.
New here
Posts: 2
Joined: Wed May 11, 2022 6:30 am

[SOLVED] Re: After Firmware Update Volume gone

Post by alain. »

Shame on me: despite installing an x.0.0 firmware, I didn't read the release notes, not even after the trouble started.
There is a breaking change in caching from version 4 to 5.

Brought to my attention by QNAP support (after 11 days of support ping-pong about whether downgrading the firmware would make sense):
https://www.qnap.com/de-de/how-to/faq/a ... what-to-do (that article describes the opposite of my problem).

The fix: disable the cache in the web GUI and reboot. (I needed two reboots. I have read that removing the SSD would be necessary, but I didn't do that; I rebooted via the web GUI, which has failed me at times.) **

After disabling the cache, run the LVM init again: sh /etc/init.d/init_lvm.sh
You will see new lines at the end of the output: your volumes. That gets Container Station working again, no more mounting by hand, etc. Glory, glory, hallelujah ;)
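
Condensed, the part that happens on the shell (the cache itself gets disabled in the web GUI; on my firmware that lives in Storage & Snapshots):

Code:

# after disabling the SSD cache in the web GUI and rebooting:
sh /etc/init.d/init_lvm.sh
# the tail of the output should now list the volumes; verify they are mounted
df -h | grep CACHEDEV
mount | grep cachedev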

Side note: when I went to reactivate the cache after the deactivate-cache / reboot / init_lvm sequence, it was already active again.

PS: ** Oh no, my bad. Support told me to deactivate the cache, reboot, and check whether it was fixed. The second reboot is not needed, but init_lvm was needed in my case.