Disk Fail during Raid migration

Questions about SNMP, Power, System, Logs, disk, & RAID.
mctangus
New here
Posts: 3
Joined: Sat Jul 31, 2021 9:05 pm

Disk Fail during Raid migration

Post by mctangus »

Hi,

My TVS-671 had 2 x WD Red 3TB in RAID 1. I added a 3rd WD Red 3TB and converted to RAID 5, and all was well for a few days. Two more disks arrived today, 2 x 3TB IronWolf drives. I added both disks to the NAS and, under Manage RAID, selected Add Drives. This seemed to trigger a RAID migration automatically, I'm assuming from RAID 5 to RAID 6 as it is now 5 disks in total. That would not be a problem, except at about 4% into this the 3rd WD Red drive I added started throwing up the following every 30 minutes.

[Hardware Status] "Host: Disk 3": Medium error. Run a bad block scan on the drive. Replace the drive if the error persists.

I tried a quick SMART test and it would not even complete, stopping at 90%. Eek.

With every error, the SMART information increments Raw_Read_Error_Rate: the raw value was 2, then over the last hour went up to 15.

IDs 197 and 198 are, from my understanding, the more critical ones. It did show 1 error there, but it has since vanished. I'm guessing this drive is an RMA for sure, but what do I do? Let the migration finish, if that is the best idea, then safely remove the drive?

It seems strange that it only happened during the RAID migration and did not show up a few days ago when I ran long and short SMART tests.
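
For reference, the attributes in question can also be read over SSH with smartmontools, which is generally present on QTS. The sketch below is a minimal Python example, not QNAP's own tooling; the device path /dev/sdc is an assumption and will differ per system.

# Minimal sketch (not QNAP tooling): print the SMART attributes discussed above
# for one drive, using smartctl. The device path /dev/sdc is an assumption --
# QTS maps disks to different device names per system.
import subprocess

WATCHED_IDS = {1, 5, 197, 198}  # read errors, reallocated, pending, uncorrectable sectors

def smart_attributes(device):
    """Return {attribute_id: (name, raw_value)} parsed from 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():       # attribute rows start with the numeric ID
            attrs[int(fields[0])] = (fields[1], fields[-1])  # last column = raw value
    return attrs

for attr_id, (name, raw) in sorted(smart_attributes("/dev/sdc").items()):
    if attr_id in WATCHED_IDS:
        print(attr_id, name, "raw =", raw)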
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Disk Fail during Raid migration

Post by dolbyman »

RAID rebuilds and migrations are the most vulnerable times for a RAID array.

So the RAID 5 to RAID 5 migration (RAID 6 would need to be specifically chosen) failed, and you need to start from scratch and restore the data from backups.
mctangus
New here
Posts: 3
Joined: Sat Jul 31, 2021 9:05 pm

Re: Disk Fail during Raid migration

Post by mctangus »

Hi, it's currently still migrating the RAID. Thanks for clarifying it's going from RAID 5 to RAID 5, much appreciated. It's at 35% so far and I can see no way to stop it at all. I have all the data backed up; since I knew I was playing around with it, I went with a 3-2-1 backup strategy.

The drive has thrown up a few more errors during the migration, some pretty bad ones: Current_Pending_Sector is now at 5, so the drive is definitely done for. Can I simply take the drive out, or should I let it try to finish the migration, then safely remove the drive and replace it?
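
As a side note, the migration progress shown in the UI comes from the Linux md driver and, with SSH access, can be watched from /proc/mdstat. A minimal sketch follows; which md device backs the data volume varies between QTS setups.

# Minimal sketch: poll /proc/mdstat and print the reshape/resync percentage.
# Assumes SSH access to the NAS; /proc/mdstat is the standard Linux md status
# file, but the md device that backs the data volume varies per system.
import re
import time

PROGRESS = re.compile(r"\[[=>.]*\]\s+(reshape|resync|recovery)\s*=\s*([\d.]+)%")

while True:
    with open("/proc/mdstat") as f:
        for action, pct in PROGRESS.findall(f.read()):
            print(action, pct + "%")
    time.sleep(60)  # check once a minute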
dolbyman
Guru
Posts: 34903
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Disk Fail during Raid migration

Post by dolbyman »

It should be able to finish the migration degraded (so taking a disk out is OK)... one more drive blip and it will fail, though.

I hope you have backups throughout all of this
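
On the degraded point, the remaining redundancy can be confirmed over SSH with mdadm. A minimal sketch, assuming the data array is /dev/md1; that name is a guess and differs between QTS setups.

# Minimal sketch: report array state and failed-device count via 'mdadm --detail'.
# /dev/md1 is an assumed name -- QTS creates several md devices internally and
# the one backing the data volume may differ.
import subprocess

out = subprocess.run(["mdadm", "--detail", "/dev/md1"],
                     capture_output=True, text=True, check=False).stdout
for line in out.splitlines():
    line = line.strip()
    if line.startswith(("State :", "Active Devices", "Failed Devices")):
        print(line)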
holger_kuehn
Easy as a breeze
Posts: 413
Joined: Sun Oct 20, 2013 11:45 pm
Location: Premnitz, Germany

Re: Disk Fail during Raid migration

Post by holger_kuehn »

dolbyman wrote: Sun Aug 01, 2021 5:15 am I hope you have backups throughout all of this
For a change, there are backups present :D , as was stated in the post before:
mctangus wrote: Sun Aug 01, 2021 3:15 am I have all the data backed up; since I knew I was playing around with it, I went with a 3-2-1 backup strategy.
NAS (production): TS-1635AX FW: QTS 5.1.4.2596 build 20231128
NAS (backup): TS-1635AX FW: QTS 5.1.4.2596 build 20231128
QTS (SSD): [RAID-1] 2 x 2TB Samsung Evo 860 M.2-Sata
Data (QTier): [RAID-6] 4 x 4TB Samsung 870 QVO Sata
Data (HDD): [RAID-6] 7 x 18TB Exos
RAM: 8 GB (QNAP shipped)
UPS: CyberPower CP900EPFCLCD
BACKUP: 10x4TB WD Red using a USB 3.0 Dock
Usage: SMB with rclone (encrypted)

NAS: TS-873U-RP FW: QTS 5.1.4.2596 build 20231128
Data (SSD): [RAID-10] 4 x 1TB Samsung Evo 860 Sata
RAM: 8 GB (QNAP shipped)
UPS: CyberPower PR2200ELCDRT2U
BACKUP: 4TB Synology DS214 FW: DSM 7.0.41890
Usage: SMB, Backup Domain Controller
mctangus
New here
Posts: 3
Joined: Sat Jul 31, 2021 9:05 pm

Re: Disk Fail during Raid migration

Post by mctangus »

I used to work in a datacentre, so I'm all about the backups 😂. Anyway, migrating onto these new disks was a blessing in disguise, I guess: it absolutely hammered the drives and showed up the weaknesses in some of my older ones. The WD Red 3TB had about 1,600 hours, hardly any, but that was the one that died first. The migration completed, but near the end it also flagged 1 of the 2 matching older IronWolf 3TBs I had (26,000 hours each), and it was even worse than the WD Red. Here are the errors. So today I'll go buy 3 new drives, might splash out on some 6TB ones, wipe the whole NAS and start a fresh RAID 5.

Thanks for all the help :)
also a dud.JPG
