Volume won't mount after firmware recovery

dpgator33
New here
Posts: 3
Joined: Tue Aug 07, 2018 10:14 pm

Volume won't mount after firmware recovery

Post by dpgator33 »

Let me preface this by saying I'm not a Linux "expert" - I know a bit but it's not a primary skill...
A couple of days ago, one of our QNAPs got a little sluggish - long story short, I ultimately had to manually power down the device and bring it back up. Also, I'm not on site; I work remotely (and have since before COVID) and just have a couple of help desk guys on site, so when I need a button pushed or something else I can't do remotely, I enlist them to help out.
Moving on... the QNAP wouldn't boot up after the manual reboot. I do have IPMI access to the console, and I could see the kernel wasn't coming up, so I ended up doing a firmware recovery.

The firmware recovery requires removing all the drives, so I asked the help desk guys to do that, making it crystal clear that the drives had to be labeled and kept in order so that after the update each one could be identified and returned to the slot it came from. I'm starting to think that didn't happen.

I am logged back in, but the RAID6 volume is failing to mount. It consists of sixteen 10 TB drives, FWIW.

The reason I think the drives are out of order is the output of these commands. Note the "Number" column at the very bottom - everything is in order except the last three, 15, 14, 13, which seems like it should be 13, 14, 15. The other two outliers (16 and 17) are replacements for failed drives, and they are in order; for example, 17 is where 8 used to be, and that was the last drive that was replaced.

Code:

[~] # mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Fri Mar 16 12:36:42 2018
     Raid Level : raid6
     Array Size : 136590763008 (130263.11 GiB 139868.94 GB)
  Used Dev Size : 9756483072 (9304.51 GiB 9990.64 GB)
   Raid Devices : 16
  Total Devices : 16
    Persistence : Superblock is persistent

    Update Time : Fri Dec  3 09:38:36 2021
          State : clean
 Active Devices : 16
Working Devices : 16
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : 1
           UUID : ea7157b0:9ccbb4bf:0fbe52b0:7c1e0e9d
         Events : 1837816

    Number   Major   Minor   RaidDevice State
       0      65       83        0      active sync   /dev/sdv3
       1       8      243        1      active sync   /dev/sdp3
       2       8      131        2      active sync   /dev/sdi3
       3       8       51        3      active sync   /dev/sdd3
      16      65       67        4      active sync   /dev/sdu3
       5      65       35        5      active sync   /dev/sds3
       6       8      211        6      active sync   /dev/sdn3
       7       8       35        7      active sync   /dev/sdc3
      17      65       51        8      active sync   /dev/sdt3
       9      65       19        9      active sync   /dev/sdr3
      10       8      147       10      active sync   /dev/sdj3
      11       8       99       11      active sync   /dev/sdg3
      12      65        3       12      active sync   /dev/sdq3
      15       8       83       13      active sync   /dev/sdf3
      14       8      115       14      active sync   /dev/sdh3
      13       8      227       15      active sync   /dev/sdo3
Similarly here - the numbering is sort of reversed, but it seems to indicate these drives are out of order: for md1, I think sdo3, sdh3 and sdf3 are the reverse of what they should be. Am I correct in thinking that? Should I try just putting them back in the "correct" order and seeing what happens?

Code:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid1 sdb3[0] sde3[1]
      185406080 blocks super 1.0 [2/2] [UU]

md1 : active raid6 sdv3[0] sdo3[13] sdh3[14] sdf3[15] sdq3[12] sdg3[11] sdj3[10] sdr3[9] sdt3[17] sdc3[7] sdn3[6] sds3[5] sdu3[16] sdd3[3] sdi3[2] sdp3[1]
      136590763008 blocks super 1.0 level 6, 512k chunk, algorithm 2 [16/16] [UUUUUUUUUUUUUUUU]
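
From what I've read, mdadm identifies each member by the superblock written on the partition itself rather than by which bay it sits in, so in the --detail output above the RaidDevice column (0 through 15, all active sync) is the role that matters, and the Number column is just an internal counter that gets bumped when a drive is replaced. If that's right, a quick sanity check would be something like this (just a sketch of what I'd try, using three of the partitions from the mdstat output above):

Code:

[~] # mdadm --examine /dev/sdf3 | grep -E 'Array UUID|Device Role'
[~] # mdadm --examine /dev/sdh3 | grep -E 'Array UUID|Device Role'
[~] # mdadm --examine /dev/sdo3 | grep -E 'Array UUID|Device Role'

If the Array UUID on each one matches ea7157b0:9ccbb4bf:0fbe52b0:7c1e0e9d and the Device Role lines up with the RaidDevice column above, then the physical slot order probably isn't what is stopping the volume from mounting.
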
JWIL79
Starting out
Posts: 17
Joined: Fri Oct 22, 2010 2:32 am

Re: Volume won't mount after firmware recovery

Post by JWIL79 »

Let me start by saying that I don't know much about Linux either, and less about the issue you have.
The reason I saw your post is that I have a sorta similar issue.

If placing the drives in the "correct" order does not work, then "mdadm --assemble --scan" might help.
I found a post on another site that goes into moving a RAID to a new PC / recovering a RAID, and since your RAID is still on the original machine, it should work for you.
Links to what I found below.
https://askubuntu.com/questions/944564/ ... r-computer
That one also links to the following post:
https://serverfault.com/questions/32709 ... ew-machine
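
For what it's worth, a bare-bones version of that would look something like this (just a sketch - I have not tried it on a QNAP, and the array would need to be stopped first, since mdadm won't reassemble one that is already running):

Code:

[~] # mdadm --stop /dev/md1
[~] # mdadm --assemble --scan

I believe you can also list the member partitions explicitly instead of using --scan; mdadm reads each member's superblock, so the order you list them in shouldn't matter.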
