
Re: expand raid 10?

Posted: Mon Nov 25, 2019 3:49 am
by P3R
zmaho wrote:
Sun Nov 24, 2019 11:47 pm
your issue is that one different drive...
It's 5 slow disks, 2 medium and one fast. Please explain how the faster disks can be an issue? In my experience the only performance issue is that the RAID will be held back by the slower disks.

If some faster disks actually were a problem, I would expect both RAID 6 and RAID 10 to suffer pretty much equally from it. Data needs to be written to all disks (including the slow ones) in both cases before the test can finish.
can you try same test but only first 7 drives in RAID ...
You have those numbers and every other RAID 6 and RAID 0 configuration possible in an 8-bay in the post where I showed the results of my testing.
and if using ``dd`` please try ``bs=4k`` , 8k, 64k, and 128k, ... :)
No. I did those tests months ago and it took probably two weeks with all the rebuilds necessary. That system is now in production and I won't take it down again for even more testing.
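For anyone who does want to run the block-size sweep suggested above, a minimal sketch of what such a `dd` test could look like (the test-file path and size here are placeholders for illustration; on a real NAS you would point it at the RAID volume and use a file well above installed RAM to defeat caching):

```shell
#!/bin/sh
# Sequential-write sweep with dd at several block sizes (a sketch, not a
# rigorous benchmark). TESTFILE and SIZE_MB are placeholder values.
TESTFILE=/tmp/dd_testfile
SIZE_MB=64   # small on purpose; use a size larger than RAM on a real test

for BS in 4 8 64 128; do
    COUNT=$((SIZE_MB * 1024 / BS))
    printf 'bs=%sk: ' "$BS"
    # conv=fsync flushes data to disk before dd reports its throughput
    dd if=/dev/zero of="$TESTFILE" bs=${BS}k count=$COUNT conv=fsync 2>&1 | tail -n 1
    rm -f "$TESTFILE"
done
```

Each pass writes the same total amount of data, so only the block size varies between runs.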

I have testing that supports my position in this discussion: that at least with mechanical disks, RAID 6 is faster at sequential writes with a decent CPU from 6 disks and up. Now please show us testing that supports your position, that RAID 10 is always significantly faster.
the algebra should be the same ... :) ... the implementation can be wrong ... but from what I got on the TS-1231 and TS-1232, both with 12x 1TB SSD and 12x 8TB HDD ... RAID 10 is way faster :)
The ARM CPUs aren't exactly known for being the most powerful CPUs on the planet...

With SSDs in particular, I think it's logical that the CPU becomes much more of a factor than with mechanical disks. Maybe you need a Ryzen to handle SSDs in RAID 6 better, or maybe not even that is enough? I don't know, as I don't have the number of SSDs necessary for testing.

The fact that enterprise storage normally uses much faster disks (SSD and SAS) may also explain why enterprise storage truths don't always apply to home and SMB NAS usage with larger numbers of mechanical SATA disks.

As you can see in the other test I linked, with mechanical disks a Celeron G550 (Passmark 2290) is enough for RAID 6 to be faster from 6 mechanical disks and up.

Re: expand raid 10?

Posted: Mon Nov 25, 2019 7:03 am
by zmaho
TES-3085 on the way ... the first will arrive after New Year ... then we can try r/w tests ...

Re: expand raid 10?

Posted: Fri Nov 29, 2019 4:23 pm
by P3R

Re: expand raid 10?

Posted: Fri Nov 29, 2019 5:38 pm
by storageman
macsimcon wrote:
Mon Aug 26, 2019 4:29 am
RAID 10 offers no advantages over RAID 6? RAID 10 writes three times faster than RAID 6, and when you need a rebuild, RAID 10 will be much faster. In fact, a rebuild in RAID 6 requires reading the parity data from the other drives in the RAID, which could cause an additional failure depending on how long it takes.

Have you tried rebuilding a RAID 6 array containing 8 TB, 10 TB, or 12 TB drives? It's going to take a long time, and the risk of additional drive failure(s) during the rebuild is significant.
We've had this argument many times here.
For RAID 10 to be faster for both random and sequential writes you would need to use more disks.
So instead of comparing a 12-disk RAID 6 with a 12-disk RAID 10, you would need to compare with a 24-disk RAID 10.
Generally, with the same number of disks, RAID 10 gives higher IOPS than RAID 6, but not when it comes to sequential performance (certainly not in the Qnap single-controller world).

Rebuild times are an interesting question, because the quantity of megabytes a parity rebuild writes can be lower than what a mirror rebuild writes if the disks are full of data.
So mirrored RAID rebuild times could be slower in certain circumstances.

Re: expand raid 10?

Posted: Sat Nov 30, 2019 9:21 am
by zmaho
not before March, when the second TES-3085 arrives ... will test 24 or the full 30 disks in RAID 10 ... speed and rebuild ... all flash SSD ... some cheap ones like MX500 or WD Blue 1TB ...
I have them sleeping somewhere from another project ...

Re: expand raid 10?

Posted: Wed Apr 22, 2020 12:13 am
by i_love_lamp
Sorry to resurrect an old thread, but I stumbled upon it while researching RAID6 vs RAID10. The latest article on Ars Technica indicates that RAID10 performs significantly better than RAID6: ... -to-eight/

Their conclusion:
Assuming all reader eyes have not glossed over, we hope that you enjoyed this trip down the storage rabbit hole and learned something about what to expect from multiple disk arrays.

Veteran storage administrators generally know the overall takeaway here—RAID10 outperforms RAID5 or RAID6, particularly for synchronous operations—but we suspect few will have actually seen the results as presented. It's one thing to lean on theoretical rules of thumb; it's another thing entirely to empirically test and graph results.

We suspect that even many storage greybeards will be surprised at just how thoroughly RAID10 outperforms RAID6 across the board and without exception. Some may also be surprised at just how difficult it is to outperform—or even hold even with—single disk performance for the most challenging workloads, no matter what your topology.

After all this, we have several take-away recommendations borne out by our tests:
  • RAID10 for performance—including, but not limited to database performance. Yes, fileservers too!
  • RAID is not a backup. The parity/redundancy in the array mitigates downtime, not catastrophic failure. Back up your array!
  • Be extremely wary of RAID5—a single block of parity isn't sufficient for protection for any but the very smallest arrays.
  • RAID is not a backup. The parity/redundancy in the array mitigates downtime, not catastrophic failure. Back up your array!
  • At four disks wide, RAID6 vs RAID10 is a difficult decision—RAID6's performance hasn't dived into the toilet yet, and it can survive more guaranteed failures than RAID10. Choose wisely!
  • RAID is not a backup. The parity/redundancy in an array mitigates downtime, not catastrophic failure. Back up your array!
  • Wide RAID6 arrays are, for almost all realistic workloads, absolute hot garbage when it comes to performance. Be careful here—you get more effective storage than RAID10, but it may not be worth it.
  • Have we asked you to back up your RAID array yet? Please don't neglect your backups...

Re: expand raid 10?

Posted: Wed Apr 22, 2020 12:19 am
by dolbyman
This was a test with a hardware RAID controller. Go ahead, test it yourself with QNAP's software RAID and give us the numbers ..

mouth <-- money

Re: expand raid 10?

Posted: Wed Apr 22, 2020 2:45 am
by P3R
i_love_lamp wrote:
Wed Apr 22, 2020 12:13 am
Sorry to resurrect an old thread, but I stumbled upon it while researching RAID6 vs RAID10. The latest article on Ars Technica indicates that RAID10 performs significantly better than RAID6...
Let's look at what the article says:
"For the most part, kernel RAID significantly outperforms hardware RAID. This is due in part to vastly more active development and maintenance in the Linux kernel than you'll find in firmware for the cards. It's also worth noting that a typical modern server has tremendously faster CPU and more RAM available to it than a hardware RAID controller does."
That is one reason why testing done on Qnaps (as this is a Qnap forum, testing on Qnaps should be far more relevant here) comes to a different result in sequential testing for all but the low-end NASes.
"All tests performed here are random access..."
Yet the article claims the results are also relevant for sequential loads... :S
"...because nearly any real-world storage workload is random access."
That's how it's simply explained, but it's an extremely generalizing personal opinion, not a fact, and no testing at all was done to back that claim.

I would agree that no NAS usage is 100% sequential, but in my experience a very large majority of Qnaps are used with home and SMB workloads that are, for the most part, sequential.

Re: expand raid 10?

Posted: Mon Jul 06, 2020 8:54 am
by rohanh
RAID 6 gets faster than RAID 10 as you add more disks, because the number of disks being striped across goes up faster.

RAID 10 stripes across half the total number of disks.
RAID 6 stripes across the total number of disks less 2.

So at a 4-disk array they will be about the same, as both are striping across 2 disks.
At a 6-disk array RAID 10 is striping across 3 disks while RAID 6 is striping across 4.
At an 8-disk array RAID 10 is striping across 4 disks while RAID 6 is striping across 6.

If you look back at the test numbers, the speeds are similar when the same number of disks are striping,

i.e. RAID 6 with 6 disks is about the same as RAID 10 with 8 disks.
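The disk counts above can be sketched as a tiny model (an illustration of the post's own rule of thumb, not a benchmark; the function name is made up for this example):

```python
def data_disks(n, level):
    """Disks carrying unique data stripes under a simple model."""
    if level == "raid10":
        return n // 2      # half the disks mirror the other half
    if level == "raid6":
        return n - 2       # two disks' worth of capacity holds parity
    raise ValueError(level)

for n in (4, 6, 8, 12):
    print(f"{n} disks: RAID 10 stripes {data_disks(n, 'raid10')}, "
          f"RAID 6 stripes {data_disks(n, 'raid6')}")
```

This reproduces the comparison above: RAID 6 with 6 disks stripes across the same number of data disks (4) as RAID 10 with 8 disks.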

RAID 6 is more computationally intensive, and that may slow RAID 6 versus RAID 10 if the CPU that calculates the parity bits is the rate limit, but otherwise RAID 6 will be faster. I have a new TS-473 coming and I am planning to set it up with 4x 500GB WD Red SATA SSD and 2x 4TB WD Red. I will test the 4 SSDs in RAID 5, 6 and 10 to see what's fastest and the computational load for each ... which is why I came across this thread.

I read lots of articles that say RAID 6 is 2 or 3 times slower than RAID 10 because it has to read data before it writes. I don't understand that: if it's writing fresh data, it writes the stripe blocks, calculates and writes the parity bits, and does all of that in parallel to the multiple disks. So unless the parity computation slows the write down due to a CPU bottleneck, it should write faster, as it has more disks to write to per my notes above? Regardless of the theory, it's real-world testing of your equipment, installation and data environment that counts.
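The "read before write" claim in those articles comes from the textbook write-penalty model for small random writes, which is where the 2-3x figures originate. A minimal sketch, assuming the standard textbook penalty figures (function names are made up for this example; real arrays deviate due to caching and full-stripe batching):

```python
# Textbook write-penalty model -- an illustration, not a benchmark.

def random_write_iops(disk_iops, n, level):
    """Small random writes. RAID 10: each logical write costs 2 disk
    writes (both mirrors). RAID 6: a partial-stripe update costs
    3 reads + 3 writes = 6 disk I/Os (read old data/P/Q, write new
    data/P/Q) -- the read-before-write those articles refer to."""
    penalty = {"raid10": 2, "raid6": 6}[level]
    return disk_iops * n / penalty

def sequential_write_mbps(disk_mbps, n, level):
    """Full-stripe sequential writes need no reads: parity is computed
    directly from the new data, so throughput scales with data disks."""
    data = n // 2 if level == "raid10" else n - 2
    return disk_mbps * data

print(random_write_iops(100, 8, "raid10"))       # 400.0
print(round(random_write_iops(100, 8, "raid6"))) # 133
print(sequential_write_mbps(150, 8, "raid10"))   # 600
print(sequential_write_mbps(150, 8, "raid6"))    # 900
```

Under this model both statements in the thread can be true at once: RAID 10 wins small random writes by roughly 3x, while RAID 6 wins full-stripe sequential writes once it has more data disks, exactly as the fresh-data reasoning above suggests.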