Remount while Rebuilding?

Questions about SNMP, Power, System, Logs, disk, & RAID.
pix135
First post
Posts: 1
Joined: Sun Jun 06, 2021 9:30 am

Remount while Rebuilding?

Post by pix135 »

My QNAP TVS-872XT crashed. It came back up and began resyncing, and as the resync started, the volume became Unmounted.

I've had this happen before and followed the instructions in this article, which usually work well.

This time, however, I get the following, which I'm guessing might be because it's resyncing. How can I get past the "/dev/mapper/cachedev1 is in use" error that prevents me from checking and remounting the volume? Or is there some other way I should mount it?

Code:

[~] # e2fsck_64 -fp -C 0 /dev/mapper/cachedev1
/dev/mapper/cachedev1 is in use.
e2fsck: Cannot continue, aborting.
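Before forcing anything, it can help to see what is actually holding the device open: a filesystem that is still mounted, or an open device-mapper handle, will make e2fsck refuse to run. A minimal sketch (the device path is the one from the error above; the sample line is illustrative, not output from this NAS):

```shell
# Read-only checks you could run on the NAS itself (they change nothing):
#   grep cachedev1 /proc/mounts   # is the device still mounted somewhere?
#   lsof /dev/mapper/cachedev1    # any processes holding it open?
# Illustration: pull the mount point for a device out of /proc/mounts text.
sample='/dev/mapper/cachedev1 /share/CACHEDEV1_DATA ext4 rw 0 0'
printf '%s\n' "$sample" | awk '$1 == "/dev/mapper/cachedev1" { print $2 }'
# prints /share/CACHEDEV1_DATA
```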
Here's the other output I've gathered. I'm not very literate with the shell, though.

Code:

[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID:		61d00296:541f2495:b2b0cfb6:857b09f5
Level:		raid10
Devices:	8
Name:		md1
Chunk Size:	512K
md Version:	1.0
Creation Time:	May 12 18:49:27 2019
Status:         ONLINE (md1) [UUUUUUUU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1        /dev/sdf3   0   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       2        /dev/sdh3   1   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       3        /dev/sde3   2   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       4        /dev/sdg3   3   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       5        /dev/sdc3   4   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       6        /dev/sdb3   5   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       7        /dev/sdd3   6   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
 NAS_HOST       8        /dev/sda3   7   Active   Jan 21 10:11:31 2022 13685687   AAAAAAAA                 
===============================================================================================

Code:

[~] # mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Sun May 12 18:49:27 2019
     Raid Level : raid10
     Array Size : 54649690112 (52118.01 GiB 55961.28 GB)
  Used Dev Size : 13662422528 (13029.50 GiB 13990.32 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Fri Jan 21 10:16:31 2022
          State : clean, resyncing 
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

  Resync Status : 0% complete

           Name : 1
           UUID : 61d00296:541f2495:b2b0cfb6:857b09f5
         Events : 13685688

    Number   Major   Minor   RaidDevice State
       0       8       83        0      active sync set-A   /dev/sdf3
       8       8      115        1      active sync set-B   /dev/sdh3
       2       8       67        2      active sync set-A   /dev/sde3
      10       8       99        3      active sync set-B   /dev/sdg3
       4       8       35        4      active sync set-A   /dev/sdc3
       9       8       19        5      active sync set-B   /dev/sdb3
       6       8       51        6      active sync set-A   /dev/sdd3
       7       8        3        7      active sync set-B   /dev/sda3
[~] # df -h
Filesystem                Size      Used Available Use% Mounted on
none                    400.0M    325.1M     74.9M  81% /
devtmpfs                  7.7G      8.0K      7.7G   0% /dev
tmpfs                    64.0M    780.0K     63.2M   1% /tmp
tmpfs                     7.8G    156.0K      7.8G   0% /dev/shm
tmpfs                    16.0M      4.0K     16.0M   0% /share
/dev/sdi5                 7.7M     47.0K      7.3M   1% /mnt/boot_config
tmpfs                    16.0M         0     16.0M   0% /mnt/snapshot/export
/dev/md9                493.5M    215.2M    278.2M  44% /mnt/HDA_ROOT
cgroup_root               7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/md13               417.0M    393.5M     23.5M  94% /mnt/ext
tmpfs                    32.0M     27.2M      4.8M  85% /samba_third_party
tmpfs                     1.0M         0      1.0M   0% /share/external/.nd
tmpfs                     1.0M         0      1.0M   0% /share/external/.cm
tmpfs                     1.0M         0      1.0M   0% /mnt/hm/temp_mount
tmpfs                     8.0M         0      8.0M   0% /var/syslog_maildir
/dev/mapper/vg1-snap10001
                         40.3T     31.8T      8.6T  79% /mnt/snapshot/1/10001
/dev/mapper/vg1-snap10002
                         40.3T     31.8T      8.5T  79% /mnt/snapshot/1/10002
/dev/mapper/vg1-snap10003
                         40.3T     31.8T      8.5T  79% /mnt/snapshot/1/10003
/dev/mapper/vg1-snap10004
                         40.3T     31.8T      8.5T  79% /mnt/snapshot/1/10004
/dev/mapper/vg1-snap10005
                         40.3T     31.9T      8.5T  79% /mnt/snapshot/1/10005
/dev/mapper/vg1-snap10007
                         40.3T     32.0T      8.3T  79% /mnt/snapshot/1/10007
/dev/mapper/vg1-snap10008
                         40.3T     31.9T      8.5T  79% /mnt/snapshot/1/10008
/dev/mapper/cachedev2
                         19.2G     84.4M     18.7G   0% /share/CACHEDEV2_DATA
tmpfs                    64.0M      3.1M     60.9M   5% /samba
tmpfs                    48.0M     88.0K     47.9M   0% /samba/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock

Code:

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md1 : active raid10 sdf3[0] sda3[7] sdd3[6] sdb3[9] sdc3[4] sdg3[10] sde3[2] sdh3[8]
      54649690112 blocks super 1.0 512K chunks 2 near-copies [8/8] [UUUUUUUU]
      [>....................]  resync =  0.3% (191738944/54649690112) finish=32732.8min speed=27728K/sec
      
md322 : active raid1 sda5[7](S) sdd5[6](S) sdb5[5](S) sdc5[4](S) sdg5[3](S) sde5[2](S) sdh5[1] sdf5[0]
      7235136 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sda2[7](S) sdd2[6](S) sdb2[5](S) sdc2[4](S) sdg2[3](S) sde2[2](S) sdh2[1] sdf2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdf4[0] sda4[7] sdd4[6] sdh4[32] sdc4[4] sdg4[34] sde4[2] sdb4[33]
      458880 blocks super 1.0 [32/8] [UUUUUUUU________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdf1[0] sda1[7] sdd1[6] sdb1[33] sdc1[4] sdg1[34] sde1[2] sdh1[32]
      530048 blocks super 1.0 [32/8] [UUUUUUUU________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
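For what it's worth, the finish= estimate in the md1 resync line follows directly from the other numbers on it: the remaining blocks (1 KiB each) divided by the reported speed, converted to minutes.

```shell
# Resync ETA from the mdstat line above: (total - done) blocks of 1 KiB,
# divided by the reported speed in KiB/s, then converted to minutes.
awk 'BEGIN {
    done  = 191738944        # blocks resynced so far
    total = 54649690112      # total blocks
    speed = 27728            # KiB/s
    printf "%.1f min\n", (total - done) / speed / 60
}'
# prints 32733.4 min, close to the finish=32732.8min shown
# (the reported speed is rounded, so the figures differ slightly)
```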
