No rebuild after migration of degraded RAID5

Questions about SNMP, Power, System, Logs, disk, & RAID.
oyvindo
Experience counts
Posts: 1010
Joined: Tue May 19, 2009 2:08 am
Location: Norway, Oslo

No rebuild after migration of degraded RAID5

Post by oyvindo » Fri Aug 30, 2019 4:02 pm

Hi All,

My 4-bay TS-453Mini NAS one day suffered a motherboard hardware failure that cut power to HDD slot 4. As a result, my RAID 5 was degraded. Hot-swapping disk 4 for a new disk obviously did not help, as there was no power in slot 4.
I decided to buy a new QNAP NAS, a TS-453Be, migrate the remaining 3 disks over, and then rebuild the RAID 5 in the new NAS. I opened a ticket with QNAP, and they confirmed that a degraded RAID could indeed be migrated and then rebuilt.

After successfully migrating the remaining 3 disks to the new NAS, the NAS booted properly and the RAID 5 came up fully working (read/write), though of course still in degraded mode.
Hot-plugging a fourth disk, however, did not trigger the expected automatic rebuild.

I have now been online with remote support from QNAP for almost a week, and there is still no progress; we just seem to be going in circles.
Here are some details:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sda3[1] sdb3[4] sdc3[2]
      8760933888 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]

md322 : active raid1 sdb5[2](S) sda5[1] sdc5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdb2[2](S) sda2[1] sdc2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[0] sdb4[32] sda4[1]
      458880 blocks super 1.0 [32/3] [UUU_____________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdc1[0] sdb1[32] sda1[1]
      530048 blocks super 1.0 [32/3] [UUU_____________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
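The key detail in the output above is the "[4/3] [_UUU]" on md1: four member slots, only three active, with slot 0 missing. As a generic illustration (not QNAP-specific), a short script can pick degraded arrays out of /proc/mdstat-style text; the sample below is copied from the output above:

```python
# Sketch: detect degraded md arrays from /proc/mdstat-style output.
# "[4/3]" means 4 member slots with 3 active; fewer active than slots = degraded.
import re

MDSTAT_SAMPLE = """\
md1 : active raid5 sda3[1] sdb3[4] sdc3[2]
      8760933888 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
md322 : active raid1 sdb5[2](S) sda5[1] sdc5[0]
      7235136 blocks super 1.0 [2/2] [UU]
"""

def degraded_arrays(mdstat_text):
    """Return names of arrays whose active-device count is below the slot count."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        header = re.match(r"^(md\d+) :", line)
        if header:
            current = header.group(1)  # remember which array this stanza describes
            continue
        counts = re.search(r"\[(\d+)/(\d+)\]", line)
        if current and counts:
            slots, active = map(int, counts.groups())
            if active < slots:
                degraded.append(current)
            current = None
    return degraded

print(degraded_arrays(MDSTAT_SAMPLE))  # → ['md1']
```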
And here:

Code: Select all

[~] # mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Mon Jun 19 19:08:05 2017
     Raid Level : raid5
     Array Size : 8760933888 (8355.08 GiB 8971.20 GB)
  Used Dev Size : 2920311296 (2785.03 GiB 2990.40 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug 30 09:54:28 2019
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : 1
           UUID : bbd27436:4c8d36ed:f62bc19a:6239b280
         Events : 311341

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        3        1      active sync   /dev/sda3
       2       8       35        2      active sync   /dev/sdc3
       4       8       19        3      active sync   /dev/sdb3
[~] #
Can any of you knowledgeable folks give me some advice on how to fix this?
Why isn't the RAID rebuild process starting? What does it take to make it start? How?
(P.S. I do have a complete backup of all my data, but a lot of NAS configuration, tuning and setup parameters would have to be redone, and I'd rather not do all that again, if at all possible.)
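For comparison, on a plain mdadm system (outside QNAP's management layer) a rebuild on a degraded array is normally started by adding the replacement's data partition back by hand. This is a hedged sketch, not an official QNAP procedure; the device names (/dev/sdd for the new disk, partition 3 as the data partition, as on the existing members) are assumptions taken from the output above, and the commands are only printed here, not executed:

```shell
# Dry-run sketch: the usual mdadm step to kick off a rebuild on a degraded
# array. Verify device names before running anything for real.
ARRAY=/dev/md1
NEWDISK=/dev/sdd   # hypothetical name for the replacement disk
CMD="mdadm $ARRAY --add ${NEWDISK}3"
echo "$CMD"              # run this by hand once the names are confirmed
echo "cat /proc/mdstat"  # then watch for a 'recovery' progress line
```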

Rgds
Viking
NAS:
QNAP TS-453Be 16Gb
4x3TB RAID5
QTS 4.3.6
Madsonic 6.1 + LMS 7.9.2
Plex 1.03

QNAP HS-251 2G
2x2TB RAID0
QTS 4.3.6
Kodi 16.1
Rainloop 1.12

QNAP TS-119
Single Disk 1Tb
QTS 4.3.3

OneCD
Ask me anything
Posts: 6404
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: No rebuild after migration of degraded RAID5

Post by OneCD » Sun Sep 01, 2019 5:33 am

@Viking, did you solve this?

production NAS: TS-569 Pro with Debian 9.9 'Stretch' (power on/off times are < 1 minute)
backup NAS: TS-559 Pro+ with QTS 4.2.6 #20190921

one.cd.only@gmail.com


oyvindo
Experience counts
Posts: 1010
Joined: Tue May 19, 2009 2:08 am
Location: Norway, Oslo

Re: No rebuild after migration of degraded RAID5

Post by oyvindo » Sun Sep 01, 2019 6:08 am

@OneCD, Hi mate - good to hear from you.
And no, it hasn't been solved. The guy from QNAP was finally able to force a rebuild, and he told me to update the firmware and run a file scan after the rebuild finished.
The rebuild did eventually finish without any error message, but after the firmware update and automatic reboot, the RAID is still degraded. :-(

OneCD
Ask me anything
Posts: 6404
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: No rebuild after migration of degraded RAID5

Post by OneCD » Sun Sep 01, 2019 6:17 am

So, you have a RAID 5 that’s still degraded after a rebuild?

I’m amazed at QNAP’s ability to take something developed by others over many years, something known to work well, and then break it. :(


oyvindo
Experience counts
Posts: 1010
Joined: Tue May 19, 2009 2:08 am
Location: Norway, Oslo

Re: No rebuild after migration of degraded RAID5

Post by oyvindo » Sun Sep 01, 2019 3:54 pm

Well, to be honest, I'm not impressed by QNAP support. For almost a full week, the support guy insisted that the replacement drive I had inserted in the new NAS was "used", that it had partitions and data on it, and that this was why the RAID rebuild wouldn't start. He told me to wipe the disk, or insert another brand-new disk, over and over again, with no improvement or progress.
Then it dawned on me that the disk he was referring to was an external USB-connected Seagate backup disk! He mistakenly believed an external USB backup disk to be part of an internal RAID! Not only that, he reformatted my external backup disk himself without telling me. I'm not a Linux guru, but I know enough to suspect that this was his mistake. So I removed the external disk, and he then admitted that "the problem was gone".

Still, rebuilding the RAID, which he finally forced manually (he could have done this a week ago), did not solve the problem. I'm back to square one! I have been assisted by a service guy who hardly meets level 1 support qualifications!

What can I do?
I do not know how to fix a degraded RAID and make it rebuild. Every time I reboot the NAS, it tells me that the RAID is in degraded mode and asks me to hot-plug a new disk. But when I do that, nothing happens. Absolutely nothing.
I wonder why there isn't a simple menu choice under Disk Management that lets an admin user force a RAID rebuild?

I have bought a brand-new QNAP NAS, so I've proven myself a loyal customer, but what level of service do they offer? At best, it takes hours between every time support comes back to the ticket with an answer (and now it's the weekend). And I am waiting, and waiting, and waiting. Maybe I'm waiting for a miracle?
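One generic thing worth checking when a hot-plugged disk is silently ignored on an mdadm-based system is stale RAID metadata left on the replacement disk. Another hedged sketch, with assumed device names (/dev/sdd3 for the new disk's data partition) and the commands only printed, since zeroing a superblock is destructive:

```shell
# Dry-run sketch: diagnose why mdadm ignores a replacement partition.
PART=/dev/sdd3
EXAMINE="mdadm --examine $PART"       # shows any old array metadata on the partition
WIPE="mdadm --zero-superblock $PART"  # clears that metadata (destructive!)
echo "$EXAMINE"
echo "$WIPE"
```

If --examine reports a superblock from a previous array, that can explain why the disk is not picked up as a fresh spare; clearing it and re-adding is the usual next step on plain mdadm, though on a QNAP it may be safer to let support confirm first.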
