Hello,
I am adding HDDs as I go and testing the migration path RAID1 -> RAID5 -> RAID6 (6 HDDs in the end) on a TVS-872N NAS.
I was in the middle of migrating from RAID1 to RAID5 with three 16TB HDDs; the announced time was 56 hours.
After 8 hours (so 48 hours remaining), there was a power cut. After the restart, only 2 hours were left to complete the migration!
I am checking all the filesystems and so far nothing is wrong.
Why was 56 hours announced when, after the reboot, the whole thing finished in about 10 hours?
Can someone enlighten me on this situation?
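For anyone who wants to watch what the reshape is actually doing, here is a minimal sketch using the standard Linux md interfaces (md2 is the data array name shown in the outputs further down; whether QTS exposes these sysfs entries is an assumption on my part):

Code: Select all
# live progress: while a reshape runs, /proc/mdstat shows a progress bar,
# a finish= estimate and the current speed=
cat /proc/mdstat

# generic md sysfs entries for the data array (md2 here)
cat /sys/block/md2/md/sync_action      # "reshape" while migrating, "idle" when done
cat /sys/block/md2/md/sync_completed   # sectors done / total sectors

As far as I understand it, the finish= estimate is a simple extrapolation from the current speed, and md checkpoints reshape progress in the superblock, so a reboot resumes from the checkpoint rather than starting over; if the reshape runs faster after the restart (less competing I/O, for example), the remaining time drops accordingly.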
- New here
- Posts: 9
- Joined: Tue Jul 19, 2016 9:45 pm
RAID1 to RAID5 migration with power failure and drastically reduced reconstruction time
[NAS] TVS-872N i7-8700t, 32GB RAM (2xTranscend TS2GSH64V6B), FW latest [QTS/VMs SSD RAID1] QM2-2P-384 + 2xSamsung 970 EVO Plus 1TB [Data HDD RAID6] 6xWD Gold 16TB
[NAS] TS-453A, 8GB, FW latest [QTS/Data HDD RAID5] 4xWD Red Plus 6TB (Backup)
[NAS] TS-219P II, 512MB, FW latest [QTS/Data HDD RAID0] 2xHGST 4TB (Backup)
[UPS] Eaton Ellipse PRO 1200 FR
- dolbyman
- Guru
- Posts: 35275
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID1 to RAID5 migration with power failure and drastically reduced reconstruction time
via SSH issue a

Code: Select all
cat /proc/mdstat

and

Code: Select all
md_checker

to see the status of the RAID

post the results in code tags
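For a more detailed per-array view, the following should also work (assuming mdadm is on the PATH in the QTS shell, which I have not verified):

Code: Select all
# per-array detail: size, state, reshape/rebuild progress and the
# slot + state of every member device
mdadm --detail /dev/md2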
- New here
- Posts: 9
- Joined: Tue Jul 19, 2016 9:45 pm
Re: RAID1 to RAID5 migration with power failure and drastically reduced reconstruction time
Here is the result; everything seems correct.

[~] # cat /proc/mdstat

Code: Select all
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid5 sdb3[0] sda3[2] sdc3[1]
31231837696 blocks super 1.0 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 nvme1n1p3[0] nvme0n1p3[1]
870126080 blocks super 1.0 [2/2] [UU]
md322 : active raid1 sda5[2](S) sdc5[1] sdb5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sda2[2](S) sdc2[1] sdb2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md321 : active raid1 nvme0n1p5[2] nvme1n1p5[0]
8283712 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 nvme1n1p4[0] sdc4[39] sdb4[34] sda4[35] nvme0n1p4[1]
458880 blocks super 1.0 [34/5] [UUU_U____U________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 nvme1n1p1[0] sdc1[40] sdb1[39] sda1[36] nvme0n1p1[1]
530048 blocks super 1.0 [36/5] [UUU_U____U__________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
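For anyone reading the md2 line above: [3/3] [UUU] means 3 of 3 members are present and all of them are up; a failed or missing member would show as "_" in the brackets, and an ongoing reshape would add a progress line under the array. A quick check (assuming the QTS grep supports -A, which I have not verified):

Code: Select all
# print the md2 status line plus the line after it; expect "[3/3] [UUU]"
grep -A 1 '^md2' /proc/mdstat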
[~] # md_checker

Code: Select all
Welcome to MD superblock checker (v2.0) - have a nice day~
Scanning system...
RAID metadata found!
UUID: c0c8c229:6b46cea1:3a1cf3c6:93f61f09
Level: raid5
Devices: 3
Name: md2
Chunk Size: 64K
md Version: 1.0
Creation Time: Feb 6 11:28:34 2021
Status: ONLINE (md2) [UUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sdb3 0 Active Feb 22 19:00:15 2021 16625 AAA
NAS_HOST 2 /dev/sdc3 1 Active Feb 22 19:00:15 2021 16625 AAA
NAS_HOST 3 /dev/sda3 2 Active Feb 22 19:00:15 2021 16625 AAA
===============================================================================================
RAID metadata found!
UUID: 987a236c:5697423f:907232b3:fa47e493
Level: raid1
Devices: 2
Name: md1
Chunk Size: -
md Version: 1.0
Creation Time: Jan 28 16:22:09 2021
Status: ONLINE (md1) [UU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST P1-1 /dev/nvme1n1p3 0 Active Feb 22 20:14:40 2021 12 AA
NAS_HOST P1-2 /dev/nvme0n1p3 1 Active Feb 22 20:14:40 2021 12 AA
===============================================================================================
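A cross-check on the md_checker table above: all three members of md2 report the same Events count (16625) and the same Array State (AAA), which is what a consistent, fully assembled array should look like. The same superblock fields can be read directly from a member device (device name taken from the table; again assuming mdadm is available):

Code: Select all
# read one member's md superblock; Events and Array State should match
# across all three members
mdadm --examine /dev/sdb3 | grep -E 'Events|Array State'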
[NAS] TVS-872N i7-8700t, 32GB RAM (2xTranscend TS2GSH64V6B), FW latest [QTS/VMs SSD RAID1] QM2-2P-384 + 2xSamsung 970 EVO Plus 1TB [Data HDD RAID6] 6xWD Gold 16TB
[NAS] TS-453A, 8GB, FW latest [QTS/Data HDD RAID5] 4xWD Red Plus 6TB (Backup)
[NAS] TS-219P II, 512MB, FW latest [QTS/Data HDD RAID0] 2xHGST 4TB (Backup)
[UPS] Eaton Ellipse PRO 1200 FR
- dolbyman
- Guru
- Posts: 35275
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID1 to RAID5 migration with power failure and drastically reduced reconstruction time
seems fine .. the disks on md13 and md9 look a bit weird [UUU_U____U__________________________] .. but as long as there are no issues, all should be good
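For context: md9 and md13 are the QTS system partitions that get mirrored across the installed disks, so the "_" slots are most likely member positions remembered from disks that were inserted in the past and since removed. If you want to confirm that, something like this should show each slot's state (again assuming mdadm is available):

Code: Select all
# per-slot state of the system arrays; slots from removed test disks
# should show as "removed", not as failed members
mdadm --detail /dev/md9
mdadm --detail /dev/md13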
- New here
- Posts: 9
- Joined: Tue Jul 19, 2016 9:45 pm
Re: RAID1 to RAID5 migration with power failure and drastically reduced reconstruction time
Thank you dolbyman for this analysis.
There are currently only 3 HDDs in the NAS.
I think the other slots are left over from tests I did with other disks, some of which were faulty.
[NAS] TVS-872N i7-8700t, 32GB RAM (2xTranscend TS2GSH64V6B), FW latest [QTS/VMs SSD RAID1] QM2-2P-384 + 2xSamsung 970 EVO Plus 1TB [Data HDD RAID6] 6xWD Gold 16TB
[NAS] TS-453A, 8GB, FW latest [QTS/Data HDD RAID5] 4xWD Red Plus 6TB (Backup)
[NAS] TS-219P II, 512MB, FW latest [QTS/Data HDD RAID0] 2xHGST 4TB (Backup)
[UPS] Eaton Ellipse PRO 1200 FR