QNAP controlled filesystem degraded

ascl00
First post
Posts: 1
Joined: Sat May 31, 2014 10:30 am

QNAP controlled filesystem degraded

Post by ascl00 »

I have a QNAP 670 which has been restarting approximately once every 7-10 days, for reasons I have yet to track down. However, I have noticed that some of the RAID arrays are running in a degraded state:

Code:

 # mdadm --detail /dev/md* | egrep -e '(State :|Array Size|md)'
/dev/md1:
     Array Size : 8760934848 (8355.08 GiB 8971.20 GB)
          State : active 
/dev/md13:
     Array Size : 458880 (448.20 MiB 469.89 MB)
          State : clean, degraded 
/dev/md256:
     Array Size : 530112 (517.77 MiB 542.83 MB)
          State : clean 
/dev/md322:
     Array Size : 7235136 (6.90 GiB 7.41 GB)
          State : clean 
/dev/md9:
     Array Size : 530112 (517.77 MiB 542.83 MB)
          State : clean, degraded 
Device info for the two degraded arrays (the first block is /dev/md13, the second /dev/md9):

Code:

        Version : 1.0
  Creation Time : Fri May 30 17:05:53 2014
     Raid Level : raid1
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 458880 (448.20 MiB 469.89 MB)
   Raid Devices : 24
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jan 26 13:08:30 2019
          State : clean, degraded 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           Name : 13
           UUID : f07f15e0:e4d2c7e2:933e93a9:fda7025e
         Events : 73852

    Number   Major   Minor   RaidDevice State
       0       8       68        0      active sync   /dev/sde4
       1       8       52        1      active sync   /dev/sdd4
      26       8       36        2      active sync   /dev/sdc4
      25       8       20        3      active sync   /dev/sdb4
      24       8        4        4      active sync   /dev/sda4
      10       0        0       10      removed
      12       0        0       12      removed
      14       0        0       14      removed
      16       0        0       16      removed
      18       0        0       18      removed
      20       0        0       20      removed
      22       0        0       22      removed
      24       0        0       24      removed
      26       0        0       26      removed
      28       0        0       28      removed
      30       0        0       30      removed
      32       0        0       32      removed
      34       0        0       34      removed
      36       0        0       36      removed
      38       0        0       38      removed
      40       0        0       40      removed
      42       0        0       42      removed
      44       0        0       44      removed
      46       0        0       46      removed

Code:

/dev/md9:
        Version : 1.0
  Creation Time : Fri May 30 17:05:50 2014
     Raid Level : raid1
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 530112 (517.77 MiB 542.83 MB)
   Raid Devices : 24
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Jan 26 13:20:11 2019
          State : clean, degraded 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           Name : 9
           UUID : 38dc6073:1fd6eae9:0c93cbee:e9ff0aa2
         Events : 1974999

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       49        1      active sync   /dev/sdd1
      26       8       33        2      active sync   /dev/sdc1
      25       8       17        3      active sync   /dev/sdb1
      24       8        1        4      active sync   /dev/sda1
      10       0        0       10      removed
      12       0        0       12      removed
      14       0        0       14      removed
      16       0        0       16      removed
      18       0        0       18      removed
      20       0        0       20      removed
      22       0        0       22      removed
      24       0        0       24      removed
      26       0        0       26      removed
      28       0        0       28      removed
      30       0        0       30      removed
      32       0        0       32      removed
      34       0        0       34      removed
      36       0        0       36      removed
      38       0        0       38      removed
      40       0        0       40      removed
      42       0        0       42      removed
      44       0        0       44      removed
      46       0        0       46      removed
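The degraded flag on md13 and md9 seems to come purely from the slot count: both arrays are provisioned with Raid Devices : 24 while only 5 disks are installed, so the remaining 19 slots show as removed. A quick way to see just those fields for both arrays (same devices as above):

Code:

# Compare provisioned slots vs. disks actually present on the two system arrays
mdadm --detail /dev/md9 /dev/md13 | egrep '(/dev/md|Raid Devices|Total Devices|State :)'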

Code:

 # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md1 : active raid6 sde3[0] sda3[5] sdb3[6] sdc3[7] sdd3[1]
      8760934848 blocks super 1.0 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md322 : active raid1 sda5[4](S) sdb5[3](S) sdc5[2](S) sdd5[1] sde5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sda2[4](S) sdb2[3](S) sdc2[2](S) sdd2[1] sde2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sde4[0] sda4[24] sdb4[25] sdc4[26] sdd4[1]
      458880 blocks super 1.0 [24/5] [UUUUU___________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sde1[0] sda1[24] sdb1[25] sdc1[26] sdd1[1]
      530112 blocks super 1.0 [24/5] [UUUUU___________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
Note that my data volume (md1) is fine (thankfully), and the remainder are not volumes I have actively created.
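For what it's worth, the md1 numbers also add up for a healthy 5-disk RAID 6: usable size is (members - 2) x per-member size, so 8760934848 KiB over 3 data members works out to about 2.72 TiB per member (which would fit 3 TB drives, my guess at the disk size):

Code:

# RAID 6 usable size = (members - 2) x per-member size
# 8760934848 KiB / (5 - 2) = 2920311616 KiB per member (~2.72 TiB)
echo $(( 8760934848 / (5 - 2) ))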

Code:

# df -h | grep '/dev/md'
/dev/md9                493.5M    151.3M    342.2M  31% /mnt/HDA_ROOT
/dev/md13               417.0M    375.9M     41.1M  90% /mnt/ext
But this means they do not appear in the UI at all; I only see md1 there. So... what should I do?

EDIT: None of the disks have any errors reported in the UI; in fact, I cannot see any issues at all via the UI.


EDIT2: I am beginning to think this is just the way those volumes are configured, as it would mean (I think) that any disks added automatically get a mirrored copy of the OS. So maybe this is normal?
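One way to test that theory (a sketch, assuming the disks really are sda-sde as in the mdstat output above) is to examine the small partitions on each disk and check that they all carry the same md9/md13 superblocks:

Code:

# Every installed disk should report the same Array UUID for md9 (partition 1) and md13 (partition 4)
mdadm --examine /dev/sd[a-e]1 | egrep '(^/dev/|Array UUID)'
mdadm --examine /dev/sd[a-e]4 | egrep '(^/dev/|Array UUID)'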
dolbyman
Guru
Posts: 35273
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: QNAP controlled filesystem degraded

Post by dolbyman »

bingo .. those are system partitions .. they won't show up as shares
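if you want to double check (assuming the stock QTS layout, where user shares live under /share) .. those arrays are only mounted at internal system paths, e.g.:

Code:

# md9/md13 are mounted as internal system paths, not under /share
mount | egrep '/dev/md(9|13)'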