Here are the steps I followed to replace the disk:
1. Went to Storage & Snapshots and clicked Manage.
2. Clicked "Replace Disks One by One" and selected Disk 1 to replace.
3. Removed Disk 1 from my QNAP.
4. Inserted the new drive into the Disk 1 slot.
5. The new drive shows up in the QNAP user interface, but the rebuild did not start automatically:
- RAID Group 1 status: Degraded
- NAS Host Disk 1: Not Member
- NAS Host Disk 2: Warning
6. I tried to rebuild manually, but the new drive does not appear in the list, as shown in these screenshots:
https://gcdn.pbrd.co/images/RBzmEAXhV5Jf.png
https://gcdn.pbrd.co/images/9lPCis8cAFdp.png?o=1
7. SSH'd into the NAS and ran cat /proc/mdstat:
Code:
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb3[1]
1943559616 blocks super 1.0 [2/1] [_U]
md322 : active raid1 sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sda4[33] sdb4[32]
458880 blocks super 1.0 [32/2] [UU______________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sda1[33] sdb1[32]
530048 blocks super 1.0 [32/2] [UU______________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
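For anyone skimming the mdstat output above: the "_" inside the bracketed status (here "[_U]" on md1) marks a missing member. A minimal sketch that flags degraded arrays, run against a saved sample copy of the output above rather than the live /proc/mdstat (the /tmp path is just a placeholder for this illustration; on the NAS you would point awk at /proc/mdstat directly):

```shell
#!/bin/sh
# Save a sample of the mdstat output shown above (on the NAS, use /proc/mdstat itself).
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 sdb3[1]
      1943559616 blocks super 1.0 [2/1] [_U]
md322 : active raid1 sdb5[1] sda5[0]
      7235136 blocks super 1.0 [2/2] [UU]
EOF
# Remember the current array name on "md..." lines; on status lines,
# a "_" inside the trailing [..] brackets means a member is missing.
awk '/^md/ { dev=$1 }
     /\[[U_]+\]/ { if ($NF ~ /_/) print dev " : degraded (" $NF ")" }' /tmp/mdstat.sample
```

On my output this prints only md1 as degraded, which matches the "Degraded" status in the GUI.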
Code:
[~] # md_checker
Welcome to MD superblock checker (v2.0) - have a nice day~
Scanning system...
RAID metadata found!
UUID: a0b9594b:0e16a270:31375d09:c07cc21d
Level: raid1
Devices: 2
Name: md1
Chunk Size: -
md Version: 1.0
Creation Time: Sep 2 21:20:44 2018
Status: ONLINE (md1) [_U]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
---------------------------------- 0 Missing -------------------------------------------
NAS_HOST 2 /dev/sdb3 1 Active Nov 28 10:33:50 2021 287774 .A
===============================================================================================
Code:
[~] # mdadm --add /dev/md1 /dev/sda3
mdadm: add new device failed for /dev/sda3 as 2: Invalid argument
Code:
[~] # mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x8
Array UUID : a0b9594b:0e16a270:31375d09:c07cc21d
Name : 1
Creation Time : Sun Sep 2 21:20:44 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
Array Size : 1943559616 (1853.52 GiB 1990.21 GB)
Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
Super Offset : 3887119504 sectors
Unused Space : before=0 sectors, after=272 sectors
State : clean
Device UUID : 863853ee:be4ef8b3:72947679:0602c8c9
Update Time : Sun Nov 28 10:35:29 2021
Checksum : e6140f71 - correct
Events : 0
Device Role : spare
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
Code:
[~] # mdadm --examine /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x8
Array UUID : a0b9594b:0e16a270:31375d09:c07cc21d
Name : 1
Creation Time : Sun Sep 2 21:20:44 2018
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3887119240 (1853.52 GiB 1990.21 GB)
Array Size : 1943559616 (1853.52 GiB 1990.21 GB)
Used Dev Size : 3887119232 (1853.52 GiB 1990.21 GB)
Super Offset : 3887119504 sectors
Unused Space : before=0 sectors, after=264 sectors
State : clean
Device UUID : 752d1fe6:f36c33f4:9d109171:36de5357
Update Time : Sun Nov 28 10:36:35 2021
Bad Block Log : 512 entries available at offset -8 sectors - bad blocks present.
Checksum : a3cddfae - correct
Events : 287876
Device Role : Active device 1
Array State : .A ('A' == active, '.' == missing, 'R' == replacing)
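The two mdadm --examine outputs above differ in the fields that seem to matter: sda3 reports "Device Role : spare" with "Events : 0", while sdb3 reports "Active device 1" with "Events : 287876". A small sketch that pulls those fields out of saved copies of the examine output, so the mismatch is easy to spot (the /tmp paths and the two-line samples are placeholders for this illustration, not the full dumps):

```shell
#!/bin/sh
# Saved excerpts of the two `mdadm --examine` outputs shown above.
cat > /tmp/sda3.examine <<'EOF'
     Device Role : spare
          Events : 0
EOF
cat > /tmp/sdb3.examine <<'EOF'
     Device Role : Active device 1
          Events : 287876
EOF
# Extract the role and event counter from each dump; a member whose
# Events counter lags far behind the active member carries stale metadata.
for f in /tmp/sda3.examine /tmp/sdb3.examine; do
    ev=$(awk -F': ' '/Events/ {print $2}' "$f")
    role=$(awk -F': ' '/Device Role/ {print $2}' "$f")
    echo "$f -> role=$role events=$ev"
done
```

This is why I suspect the new drive's old/stale superblock is what mdadm is rejecting with "Invalid argument", but I have not found a safe way past it.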
I have tried everything I can think of, but nothing has worked. If anyone has any suggestions, please let me know and I will try them.
Thank you in advance.