Raid array shows ERROR but not what

reading
Starting out
Posts: 10
Joined: Wed Feb 06, 2019 8:53 am

Raid array shows ERROR but not what

Post by reading » Tue Feb 12, 2019 6:59 am

I have a QNAP TS-453Be with a 4x8TB RAID 10 setup. I pulled a drive out and, after putting it back in, didn't wait long enough before pulling another (I was trying to pin a noise down to a particular drive with Seagate tech support).

In doing so my RAID failed, or so it reported. The log says two drives failed, but md_checker reports only one as failed, with a status of rebuild. I am not sure whether that means it is currently rebuilding or that it needs a rebuild. If it needs a rebuild, how do I force that?
(attachment: md_check output.jpg)
df -h does not show my data. Storage & Snapshots shows ERROR; Manage shows all 4 drives as good but Not Active.

I am currently scanning all disks for bad blocks. SMART status is reporting OK for all drives.
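On the "currently rebuilding vs. needs a rebuild" question in general: when md is actively rebuilding, /proc/mdstat shows a recovery progress line, and a degraded array with no such line is waiting. A minimal sketch against an illustrative (made-up) mdstat excerpt:

```shell
# Illustrative /proc/mdstat excerpt (made up for this sketch; not output
# from the NAS in this thread).
mdstat_sample='md1 : active raid10 sda3[0] sdb3[1] sdc3[2] sdd3[3]
      15608142848 blocks super 1.0 512K chunks 2 near-copies [4/3] [UU_U]
      [=====>...............]  recovery = 27.4% (2138353024/7804071424) finish=612.4min speed=154183K/sec'

# If this prints a line, the array is rebuilding right now. If a degraded
# array shows no recovery line, the rebuild has not started; it is
# normally triggered by re-adding a member, e.g.
#   mdadm --manage /dev/md1 --add /dev/sdb3
printf '%s\n' "$mdstat_sample" | grep -o 'recovery = [0-9.]*%'
```

For this sample text the grep prints `recovery = 27.4%`; on a degraded-but-idle array it prints nothing.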


Re: Raid array shows ERROR but not what

Post by reading » Wed Feb 13, 2019 7:10 am

Scanning all disks did nothing.

With 3 drives showing active and 1 showing "rebuild", I am not sure whether this is hosed or whether it can still be fixed.

Can anyone give me some pointers?


Re: Raid array shows ERROR but not what

Post by reading » Wed Feb 13, 2019 7:35 am

Found a little more info on this. Anyone see a glimmer of hope? The 0 pages for md322 and md256 look like bad news to me, but I am new to this (never had an issue with RAID; it has always been set-and-forget until now). And yes, I have a backup of most of the data, so while nuking everything and starting over would be immensely annoying, it can be done. I would love to try to fix this!

Output of: cat /proc/mdstat

Code:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sdb5[3](S) sda5[2](S) sdd5[1] sdc5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdb2[3](S) sda2[2](S) sdd2[1] sdc2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[3] sdd4[1] sdb4[2] sdc4[0]
      458880 blocks super 1.0 [32/4] [UUUU____________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[3] sdd1[1] sdb1[2] sdc1[0]
      530048 blocks super 1.0 [32/4] [UUUU____________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
Output of: for i in /dev/sd[a-z]*; do echo $i; mdadm -E $i | egrep "Name|Size|Raid"; done;

Code:

[~] # for i in /dev/sd[a-z]*; do echo $i; mdadm -E $i | egrep "Name|Size|Raid"; done;
/dev/sda
/dev/sda1
           Name : 9
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060216 (517.77 MiB 542.83 MB)
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 1060096 (517.71 MiB 542.77 MB)
/dev/sda2
           Name : 256
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1060248 (517.79 MiB 542.85 MB)
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 1060224 (517.77 MiB 542.83 MB)
/dev/sda3
           Name : 1
     Raid Level : raid10
   Raid Devices : 4
 Avail Dev Size : 15608143240 (7442.54 GiB 7991.37 GB)
     Array Size : 15608142848 (14885.09 GiB 15982.74 GB)
  Used Dev Size : 15608142848 (7442.54 GiB 7991.37 GB)
     Chunk Size : 512K
/dev/sda4
           Name : 13
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060256 (517.79 MiB 542.85 MB)
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 917760 (448.20 MiB 469.89 MB)
/dev/sda5
           Name : 322
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 16707560 (7.97 GiB 8.55 GB)
     Array Size : 7235136 (6.90 GiB 7.41 GB)
  Used Dev Size : 14470272 (6.90 GiB 7.41 GB)
/dev/sdb
/dev/sdb1
           Name : 9
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060216 (517.77 MiB 542.83 MB)
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 1060096 (517.71 MiB 542.77 MB)
/dev/sdb2
           Name : 256
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1060248 (517.79 MiB 542.85 MB)
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 1060224 (517.77 MiB 542.83 MB)
/dev/sdb3
           Name : 1
     Raid Level : raid10
   Raid Devices : 4
 Avail Dev Size : 15608143240 (7442.54 GiB 7991.37 GB)
     Array Size : 15608142848 (14885.09 GiB 15982.74 GB)
  Used Dev Size : 15608142848 (7442.54 GiB 7991.37 GB)
     Chunk Size : 512K
/dev/sdb4
           Name : 13
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060256 (517.79 MiB 542.85 MB)
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 917760 (448.20 MiB 469.89 MB)
/dev/sdb5
           Name : 322
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 16707560 (7.97 GiB 8.55 GB)
     Array Size : 7235136 (6.90 GiB 7.41 GB)
  Used Dev Size : 14470272 (6.90 GiB 7.41 GB)
/dev/sdc
/dev/sdc1
           Name : 9
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060216 (517.77 MiB 542.83 MB)
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 1060096 (517.71 MiB 542.77 MB)
/dev/sdc2
           Name : 256
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1060248 (517.79 MiB 542.85 MB)
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 1060224 (517.77 MiB 542.83 MB)
/dev/sdc3
           Name : 1
     Raid Level : raid10
   Raid Devices : 4
 Avail Dev Size : 15608143240 (7442.54 GiB 7991.37 GB)
     Array Size : 15608142848 (14885.09 GiB 15982.74 GB)
  Used Dev Size : 15608142848 (7442.54 GiB 7991.37 GB)
     Chunk Size : 512K
/dev/sdc4
           Name : 13
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060256 (517.79 MiB 542.85 MB)
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 917760 (448.20 MiB 469.89 MB)
/dev/sdc5
           Name : 322
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 16707560 (7.97 GiB 8.55 GB)
     Array Size : 7235136 (6.90 GiB 7.41 GB)
  Used Dev Size : 14470272 (6.90 GiB 7.41 GB)
/dev/sdd
/dev/sdd1
           Name : 9
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060216 (517.77 MiB 542.83 MB)
     Array Size : 530048 (517.71 MiB 542.77 MB)
  Used Dev Size : 1060096 (517.71 MiB 542.77 MB)
/dev/sdd2
           Name : 256
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 1060248 (517.79 MiB 542.85 MB)
     Array Size : 530112 (517.77 MiB 542.83 MB)
  Used Dev Size : 1060224 (517.77 MiB 542.83 MB)
/dev/sdd3
           Name : 1
     Raid Level : raid10
   Raid Devices : 4
 Avail Dev Size : 15608143240 (7442.54 GiB 7991.37 GB)
     Array Size : 15608142848 (14885.09 GiB 15982.74 GB)
  Used Dev Size : 15608142848 (7442.54 GiB 7991.37 GB)
     Chunk Size : 512K
/dev/sdd4
           Name : 13
     Raid Level : raid1
   Raid Devices : 32
 Avail Dev Size : 1060256 (517.79 MiB 542.85 MB)
     Array Size : 458880 (448.20 MiB 469.89 MB)
  Used Dev Size : 917760 (448.20 MiB 469.89 MB)
/dev/sdd5
           Name : 322
     Raid Level : raid1
   Raid Devices : 2
 Avail Dev Size : 16707560 (7.97 GiB 8.55 GB)
     Array Size : 7235136 (6.90 GiB 7.41 GB)
  Used Dev Size : 14470272 (6.90 GiB 7.41 GB)
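Two notes on the dump above, for anyone reading along. First, "bitmap: 0/1 pages [0KB]" only means no write-intent bitmap pages are dirty, so that part is not bad news; the (S) after the sda/sdb partitions marks them as spares in the small system arrays. The real problem is what is absent: the RAID10 data array on the sd?3 partitions does not appear in /proc/mdstat at all, i.e. it has not been assembled. Second, the [32/4] on md9/md13 reads as "4 of 32 possible member slots active" (QNAP system arrays reserve 32 slots but populate one per installed disk). A sketch of parsing that pair, with a hypothetical helper name:

```shell
# Hypothetical helper: extract "active of total" from an mdstat status
# line such as "... [32/4] [UUUU____...]".
parse_members() {
  sed -n 's/.*\[\([0-9]*\)\/\([0-9]*\)\] \[U.*/\2 of \1/p'
}

echo '      458880 blocks super 1.0 [32/4] [UUUU____________________________]' \
  | parse_members   # prints: 4 of 32
```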

storageman
Experience counts
Posts: 4796
Joined: Thu Sep 22, 2011 10:57 pm

Re: Raid array shows ERROR but not what

Post by storageman » Wed Feb 13, 2019 5:16 pm

Looks like a pending rebuild state.
Try
"/etc/init.d/init_lvm.sh"


Re: Raid array shows ERROR but not what

Post by reading » Wed Feb 13, 2019 9:35 pm

storageman wrote:
Wed Feb 13, 2019 5:16 pm
Looks like a pending rebuild state.
Try
"/etc/init.d/init_lvm.sh"

result:

Code:

[~] # /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 32)...
dev_count ++ = 2Detect disk(8, 48)...
dev_count ++ = 3Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 32)...
Detect disk(8, 48)...
sys_startup_p2:got called count = -1
Done

Now QTS shows no volume; it brought up the new-volume wizard when I opened it, though.

No change in the md_checker or cat /proc/mdstat output.


Re: Raid array shows ERROR but not what

Post by storageman » Wed Feb 13, 2019 11:13 pm

It's because the disk timestamps are too far apart. Try:
mdadm -CfR --assume-clean /dev/md1 -l 10 -n 4 -c 64 -e 1.0 /dev/sdc3 /dev/sdd3 missing /dev/sda3
mdadm --zero-superblock /dev/sdb3
/etc/init.d/init_lvm.sh
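For anyone following along, here is an annotated sketch of the recipe above (standard mdadm flag semantics; this is explanation, not a command to run blindly, since -C rewrites the array superblocks):

```shell
# Annotated sketch of the recovery recipe above.
# -C              create: writes fresh superblocks on the named members
# -f              force, even though the members look like part of an array
# -R              run: start the array immediately
# --assume-clean  skip the initial resync and trust the on-disk data
# -l 10 -n 4      RAID10 with 4 member slots
# -c 64           chunk size in KB; NOTE: the earlier mdadm -E output
#                 reported a 512K chunk, and a chunk size that differs
#                 from the original geometry leaves striped data unreadable
# -e 1.0          metadata 1.0 (superblock stored at the end of each member)
# The third slot is given as "missing" so the out-of-sync disk (sdb3)
# stays out; its stale superblock is wiped separately with
# --zero-superblock, after which it can be re-added as the rebuild target.
cmd='mdadm -CfR --assume-clean /dev/md1 -l 10 -n 4 -c 64 -e 1.0 /dev/sdc3 /dev/sdd3 missing /dev/sda3'
echo "$cmd"
```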


Re: Raid array shows ERROR but not what

Post by reading » Thu Feb 14, 2019 7:54 am

Making progress but QTS still sees no volume!

Output of your 3 commands:

Code:

[~] # mdadm -CfR --assume-clean /dev/md1 -l 10 -n 4 -c 64 -e 1.0 /dev/sdc3 /dev/sdd3 missing /dev/sda3
mdadm: /dev/sdc3 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Tue Feb  5 11:54:42 2019
mdadm: /dev/sdd3 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Tue Feb  5 11:54:42 2019
mdadm: /dev/sda3 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Tue Feb  5 11:54:42 2019
mdadm: array /dev/md1 started.
[~] # mdadm --zero-superblock /dev/sdb3
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
mv: can't rename '/etc/config/ssdcache.conf': No such file or directory
mv: can't rename '/etc/config/qlvm.conf': No such file or directory
mv: can't rename '/etc/config/qdrbd.conf': No such file or directory
Reinitialing...
Detect disk(8, 0)...
dev_count ++ = 0Detect disk(8, 16)...
dev_count ++ = 1Detect disk(8, 32)...
dev_count ++ = 2Detect disk(8, 48)...
dev_count ++ = 3Detect disk(8, 0)...
Detect disk(8, 16)...
Detect disk(8, 32)...
Detect disk(8, 48)...
sys_startup_p2:got called count = -1
Done
And the md_checker output now:

Code:

[~] # md_checker

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:           df1006fe:0f251a54:1fb86a7b:d5b2b8fe
Level:          raid10
Devices:        4
Name:           md1
Chunk Size:     64K
md Version:     1.0
Creation Time:  Feb 13 15:50:46 2019
Status:         ONLINE (md1) [UU_U]
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
   1  /dev/sdc3  0   Active   Feb 13 15:50:46 2019        0   AA.A              
   2  /dev/sdd3  1   Active   Feb 13 15:50:46 2019        0   AA.A              
 --------------  2  Missing   -------------------------------------------
   3  /dev/sda3  3   Active   Feb 13 15:50:46 2019        0   AA.A              
===============================================================================
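Why [UU_U] with a missing slot can still mean recoverable data: in a 4-member RAID10 with the default near layout (an assumption here; the thread does not state the layout), slots 0+1 and 2+3 are mirror pairs, so the array survives as long as each pair keeps at least one member. A sketch with a hypothetical pair_ok helper:

```shell
# Array state from md_checker above: slot 2 missing.
state='AA.A'

# Hypothetical helper: true if at least one member of a mirror pair is
# active ("A") in the state string. Slot numbers are 0-based.
pair_ok() {
  s=$1
  a=$(printf '%s\n' "$s" | cut -c$(( $2 + 1 )))
  b=$(printf '%s\n' "$s" | cut -c$(( $3 + 1 )))
  [ "$a" = A ] || [ "$b" = A ]
}

# With the default near layout, slots 0+1 and 2+3 mirror each other.
pair_ok "$state" 0 1 && pair_ok "$state" 2 3 && echo 'array readable (degraded)'
```

For "AA.A" both pairs still have an active member, so this prints the readable message; a state like "A..A" would fail the second check.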


Re: Raid array shows ERROR but not what

Post by storageman » Thu Feb 14, 2019 4:24 pm

Reseat that drive; hopefully it will trigger a rebuild.
The volume should have come back.
What does df say?


Re: Raid array shows ERROR but not what

Post by reading » Thu Feb 14, 2019 5:26 pm

Code:

[~] # df
Filesystem                Size      Used Available Use% Mounted on
none                    400.0M    301.3M     98.7M  75% /
devtmpfs                  7.7G      8.0K      7.7G   0% /dev
tmpfs                    64.0M    264.0K     63.7M   0% /tmp
tmpfs                     7.8G    132.0K      7.8G   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
tmpfs                    16.0M         0     16.0M   0% /mnt/snapshot/export
/dev/md9                493.5M    129.4M    364.1M  26% /mnt/HDA_ROOT
cgroup_root               7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/md13               417.0M    363.8M     53.2M  87% /mnt/ext
/dev/ram2               433.9M      2.3M    431.6M   1% /mnt/update
tmpfs                    64.0M      2.3M     61.7M   4% /samba
tmpfs                    16.0M     60.0K     15.9M   0% /samba/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock
tmpfs                     1.0M         0      1.0M   0% /mnt/rf/nd


Re: Raid array shows ERROR but not what

Post by storageman » Thu Feb 14, 2019 5:31 pm

Yes, it's not mounted. Try:

dumpe2fs_64 -h /dev/mapper/cachedev1


Re: Raid array shows ERROR but not what

Post by reading » Thu Feb 14, 2019 9:26 pm

Code:

[~] # dumpe2fs_64 -h /dev/mapper/cachedev1
dumpe2fs 1.42.13 (17-May-2015)
dumpe2fs_64: No such file or directory while trying to open /dev/mapper/cachedev1
Couldn't find valid filesystem superblock.
I pulled the drive and put it back in as well, and it didn't register. I will try a reboot.

(edit) After the reboot, still no change. File and Storage shows the drive as "free".


Re: Raid array shows ERROR but not what

Post by storageman » Thu Feb 14, 2019 11:06 pm

Hmm. Can you change it to a hot spare?


Re: Raid array shows ERROR but not what

Post by reading » Thu Feb 14, 2019 11:20 pm

I am unsure how to do that. Is that something done over SSH or in QTS?


Re: Raid array shows ERROR but not what

Post by storageman » Thu Feb 14, 2019 11:58 pm



Re: Raid array shows ERROR but not what

Post by reading » Fri Feb 15, 2019 8:17 am

Even though Storage & Snapshots sees no volume or storage pool of any kind? It will not let me create a hot spare; I can only assume that is because there is nothing for it to be a hot spare to, which stems from my earlier confusion. The link says to select a volume, but ever since you had me run "/etc/init.d/init_lvm.sh", Storage & Snapshots no longer displays any volumes, so when I try, Hot Spare is grayed out.
