Bob Zelin wrote:this is a btrfs RAID. It's from a well-known company in the video industry, but I can't "name names" here. I have been dealing with RAIDs, and particularly hardware RAIDs, for a long time, and I have never
seen such poor rebuilds - even from companies that I don't like. To me, if it is considered "acceptable" for a btrfs RAID rebuild of a 120 TB array to take 8 days (even if it is only 50% full) - that is unacceptable.
Again, I am very ignorant of the btrfs file system.
Bob Zelin
btrfs raid 5 and 6 are far from stable. we were led by the devs to believe it had reached stable status, and then months later serious issues were found with it....
Btrfs File-System Changes Submitted For Linux 4.10
16 December 2016
What makes the Btrfs updates for Linux 4.10 a bit less exciting is that they don't yet include a proper/full fix for the RAID 5/6 issue, nor for the long-standing possible data corruption bug - admittedly low-urgency - where reading a file with a hole positioned after an inline extent returns uninitialized memory when using Btrfs with zlib compression.
https://www.phoronix.com/scan.php?page= ... -56-Is-Bad
https://www.phoronix.com/scan.php?page= ... fs-Updates
some early adopters of btrfs run it on top of regular mdadm, to avoid the btrfs raid5/6 code, which is not ready for production use....
whether even then such a system is stable I don't know, because I don't personally have one to test myself
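for anyone curious what that layered setup looks like, here is a minimal sketch (the device names, array geometry, and mount point are assumptions for illustration, not from this thread): mdadm provides the parity RAID and rebuilds, and btrfs sits on top as a plain single-device filesystem, so the problematic btrfs raid5/6 code path is never used.

```shell
# Sketch: btrfs on top of an mdadm RAID6 (example devices /dev/sdb..sdg).
# mdadm handles parity and rebuilds; btrfs only ever sees one logical device.
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[bcdefg]

# "single" data profile, since mdadm already provides the redundancy;
# "dup" metadata so btrfs keeps two copies of its own metadata.
mkfs.btrfs -d single -m dup /dev/md0
mount /dev/md0 /mnt/storage
```

you still get btrfs checksumming and snapshots this way, but btrfs can no longer self-heal data from a second copy, since it has no view of the redundancy underneath it.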
120 TB array to take 8 days
not sure if there is any RAID solution that can do better than this -
or does anyone know of one? my particular setup atm is close to 10 TB, which takes around 1-3 days to rebuild from first init during setup.
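for a sense of scale: a conventional block-level rebuild runs the member disks in parallel, so its time is roughly per-disk capacity divided by sustained throughput, not total array size. a back-of-envelope estimate (the 10 TB disk size and 150 MB/s sustained speed are assumptions for illustration):

```shell
# back-of-envelope: block-level rebuild time for one replaced disk
# assumptions: 10 TB member disks, ~150 MB/s sustained rebuild speed
disk_tb=10
speed_mb_s=150
secs=$(( disk_tb * 1000000 / speed_mb_s ))   # 1 TB ~= 1,000,000 MB
hours=$(( secs / 3600 ))
echo "~${hours} hours per rebuilt disk"       # prints "~18 hours per rebuilt disk"
```

by that math a traditional block-level rebuild finishes in well under a day, which puts the 8-day figure in perspective - a filesystem-level rebuild that walks metadata instead of streaming sectors can end up far from throughput-bound.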
anyway, if you want to continue this further you might want to start a new thread, since we're going off topic