
expand raid 10?

Posted: Sun Mar 18, 2018 9:55 am
by bhom920
So I currently have a TVS-1272 and was wondering: could I expand an existing RAID 10 (4 x 6 TB) for additional storage, without affecting the existing data, by installing an additional 2 x 6 TB?

Re: expand raid 10?

Posted: Sun Mar 18, 2018 10:11 am
by Don
I don’t think so but check the manual for your expansion options.

Re: expand raid 10?

Posted: Mon Mar 19, 2018 8:55 pm
by P3R
bhom920 wrote:So i currently have a TVS-1272...
I can't find that model. Maybe you mean a TVS-1282 or TVS-1271U?
...was wondering could i expand an existing raid 10 (4 x 6tb) for additional storage without affecting the existing data by installing an additional 2 x 6tb?
No, that's one of the disadvantages of RAID 10 compared to RAID 5 and RAID 6.

From the manual:
Note: New disks cannot be inserted into existing RAID groups for specific RAID types, such as RAID 0, RAID 10, Single, or JBOD. You must create additional RAID groups to expand these storage pools.

If using storage pools you could add the two additional disks as a RAID 1 group to the pool, but it would be more of a workaround than a technically nice solution.

The better option would be to take this opportunity to switch from RAID 10 to RAID 6. That would give you better reliability and the possibility to do future storage expansion by adding disks. With RAID 6, disks don't have to be added in pairs but could be added one at a time for greater flexibility. Depending on the specific usage, performance may also improve with a switch to RAID 6.
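To make the capacity side of that trade-off concrete, here is a quick back-of-envelope sketch in Python (the function name is made up for illustration, the 6 TB disk size is taken from this thread, and real volumes lose some space to filesystem overhead):

```python
def usable_tb(n_disks, disk_tb, level):
    """Rough usable capacity, ignoring filesystem/metadata overhead."""
    if level == "raid10":
        assert n_disks % 2 == 0, "RAID 10 needs disks in pairs"
        return (n_disks // 2) * disk_tb  # half the disks hold mirrors
    if level == "raid6":
        assert n_disks >= 4, "RAID 6 needs at least 4 disks"
        return (n_disks - 2) * disk_tb   # two disks' worth of parity
    raise ValueError(level)

# 6 x 6 TB, as the OP would have after adding two disks:
print(usable_tb(6, 6, "raid10"))  # 18
print(usable_tb(6, 6, "raid6"))   # 24
```

At 4 disks the two levels give the same usable space; from 6 disks and up RAID 6 pulls ahead.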

Re: expand raid 10?

Posted: Thu Mar 22, 2018 4:05 am
by bhom920
Sorry, yes, I have a TVS-1271U. I chose RAID 10 over RAID 6 for performance. I guess I will need to use up another 4 bays to expand. Thanks

Re: expand raid 10?

Posted: Thu Mar 22, 2018 4:56 am
by Don
Rebuild from scratch and use RAID 6. RAID 10 offers no advantages over RAID 6 in modern systems. RAID 6 can survive any 2 failed disks; RAID 10 might survive 2 failed disks, depending on which 2 fail. RAID 6 is expandable; RAID 10 is not. With newer systems RAID 10 offers little or no performance advantage.

Re: expand raid 10?

Posted: Mon Aug 26, 2019 4:29 am
by macsimcon
RAID 10 offers no advantages over RAID 6? RAID 10 writes three times faster than RAID 6, and when you need a rebuild, RAID 10 will be much faster. In fact, a rebuild in RAID 6 requires reading the parity data from the other drives in the RAID, which could cause an additional failure depending on how long it takes.

Have you tried rebuilding a RAID 6 array containing 8 TB, 10 TB, or 12 TB drives? It's going to take a long time, and the risk of additional drive failure(s) during the rebuild is significant.

Re: expand raid 10?

Posted: Mon Aug 26, 2019 5:28 am
by P3R
macsimcon wrote:
Mon Aug 26, 2019 4:29 am
RAID 10 writes three times faster than RAID 6...
That may have been true in NASes 10 years ago but I'm pretty sure it isn't today. Do you have any testing on a modern Qnap to back that bold statement up?

I'd say that, given a good CPU (as exists in the majority of NAS models today), RAID 6 actually writes faster than RAID 10 from 6 disks and up.

This is an example with a Qnap TS-1277 showing the best and worst results from 5 tests. It's a sequential write directly to the volume from within the NAS itself, to make sure it's the storage we're testing and that the network doesn't affect the numbers.

                     best         worst
4 disks in RAID 6    237.71 MB/s  229.19 MB/s
5 disks in RAID 6    339.16 MB/s  324.55 MB/s
6 disks in RAID 6    437.33 MB/s  422.79 MB/s
7 disks in RAID 6    525.60 MB/s  495.07 MB/s
8 disks in RAID 6    606.99 MB/s  589.82 MB/s

4 disks in RAID 10   247.34 MB/s  245.06 MB/s
6 disks in RAID 10   362.25 MB/s  356.38 MB/s
8 disks in RAID 10   478.31 MB/s  472.27 MB/s

The disks are very old (9+ years) and slow, so don't focus on the relatively low figures. Since the same disks were used in all tests, it's the comparison between RAID 6 and RAID 10 with the same number of disks that's interesting.
...and when you need a rebuild, RAID 10 will be much faster.
True, the rebuild with RAID 10 is faster.

A RAID 10 rebuild, however, needs to read one disk to mirror it onto the replacement, and the disk being read is exactly the one critical disk that isn't allowed to fail. If it does, the complete array is lost. There's no redundancy on that critical disk.
In fact, a rebuild in RAID 6 requires reading the parity data from the other drives in the RAID, which could cause an additional failure depending on how long it takes.
True again, but what you forget to mention is that RAID 6 still has disk redundancy during the rebuild, so another (any) disk is allowed to fail without causing a disaster.

Check out a RAID reliability calculator (here's one example): RAID 6 is more reliable than RAID 10 up to very large numbers of disks, despite having slower rebuilds.
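The simultaneous-failure part of this can be checked with plain combinatorics (a sketch assuming the usual adjacent-pair mirror layout; rebuild-window timing and the MTTDL math used by such calculators are not modelled here):

```python
from itertools import combinations

def raid10_two_failure_survival(n_disks):
    """Fraction of two-disk failure combinations a RAID 10 of n_disks
    survives. Mirror pairs are assumed to be (0,1), (2,3), ...;
    losing both disks of one pair kills the array."""
    pairs = {(i, i + 1) for i in range(0, n_disks, 2)}
    combos = list(combinations(range(n_disks), 2))
    fatal = sum(1 for c in combos if c in pairs)
    return 1 - fatal / len(combos)

# RAID 6 survives any 2 simultaneous failures (probability 1.0);
# RAID 10 dies if the second failure hits the first disk's partner:
print(raid10_two_failure_survival(4))  # 2/3 of two-disk failures survived
print(raid10_two_failure_survival(8))  # 6/7 survived
```

Equivalently: after one disk fails, the chance that the next failure is fatal to a RAID 10 is 1/(n-1), while for RAID 6 it is zero.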

To sum up:
  • RAID 6 offers more usable storage than RAID 10 (from 6 disks and up).
  • RAID 6 writes faster than RAID 10 (from 6 disks and up with a decent CPU).
  • RAID 6 is more reliable than RAID 10 (at least up to something like 13-14 disks).
  • RAID 6 can be expanded by adding disks; a RAID 10 is always stuck at the number of disks it started with (at least in a Qnap).

Re: expand raid 10?

Posted: Sun Nov 24, 2019 8:04 am
by zmaho
Hi,

first I want to apologize for resurrecting this topic, but I can not wrap my mind around this...

``RAID 6 writes faster than RAID 10 (from 6 disks and up with a decent CPU)``...

If you have an all-flash system, say SATA III SSDs, write/read speed is limited by the interface, so say 550 MB/s...
From all I know, RAID 10 is like having a RAID 0 that is mirrored in the background to another RAID 0, and in RAID 0 the data is striped across the whole array, so a write is handled by ALL DEVICES. Yes, I do understand that when a device gets a SATA write command it may need to read a block, put the new small block into the larger block, then write it back down, so the speed is less than SINGLE DEVICE WRITE SPEED * number of disks in the RAID 0. Say we have 10 devices: the speed won't be 10x, but it will be 8x...
Read speed is even better: read requests are divided across the whole RAID 10, so we should have SINGLE DEVICE SPEED * 10 x 2... OK, 16x...

And someone claims that RAID 6, with double parity and all the reading, recalculating and writing, on top of writing the actual data, is faster than RAID 10?!

Please, someone explain this... it goes against all I knew about RAID until this moment...
And yes, some systems can do RAID 10 expansion by adding pairs of drives, so like 2, 4, 8,... pairs :) and the data even gets rebalanced :) It's going to take time, sure...
This is why we use all flash now... :)

Re: expand raid 10?

Posted: Sun Nov 24, 2019 9:26 am
by P3R
zmaho wrote:
Sun Nov 24, 2019 8:04 am
...and someone claims that RAID 6, with double parity and all the reading, recalculating and writing, on top of writing the actual data, is faster than RAID 10?!
Hello, that someone is me. I did a test with a single-threaded sequential write (within the system, so networking or client bottlenecks are non-existent). You can call that a "claim" if you want, but to me it felt more like facts when I did the tests.

I can't theoretically explain the results but in my world actual testing is superior to internet "truths", even if I can't explain why I see the results.

Call me a liar, but then I'll challenge you to do your own testing (if you do, you will probably be the first RAID 10 fan here willing to question your "truths"). I would be happy to have a discussion about the results, and maybe we can even eventually explain our findings theoretically?

By the way, I'm not the only one and, more importantly, I wasn't the first one to test RAID write performance here. This was the testing that inspired me. That test is extremely interesting, as there you can see how improving CPU performance gradually makes the RAID 6 parity calculations less of a bottleneck.
...and yes, some systems can do RAID 10 expansion by adding pairs of drives, so like 2, 4, 8,... pairs :)
The OP in this thread has a Qnap, so the answers given are based on what Qnap supports.
this is why we use all flash now .... :)
Great for you. Do you work with enterprise storage?

Most users here (90+%) are in my experience home or SMB users (me included), and they usually have the larger part of their storage on mechanical disks, with at best some smaller SSDs for specific tasks.

Re: expand raid 10?

Posted: Sun Nov 24, 2019 9:38 am
by OneCD
P3R wrote:
Sun Nov 24, 2019 9:26 am
Most users here (90+%) are in my experience home or SMB users (me included), and they usually have the larger part of their storage on mechanical disks, with at best some smaller SSDs for specific tasks.
+1

As a home user with limited funds, my archive and high-availability arrays only use mechanical drives, with SSDs in my workstation, laptop & media player.

Re: expand raid 10?

Posted: Sun Nov 24, 2019 10:23 am
by zmaho
P3R wrote:
Sun Nov 24, 2019 9:26 am
when you make a test it is important to declare what hardware it was tested on: what NAS, what drives, and how you took the measurement :)
What we know: what NAS you used, and how old the disks are...
What we do not know is the disk model and whether they were all the same :) ...
And the most important question... HOW DID YOU DO your measurement?
Was it local, with some custom application you wrote for QTS? Or was it done via CIFS/SMB or NFS or iSCSI? Over what interface? Was it a network issue?
What was the client? What software did you use? What was the type of data? A file? One large file? A directory of small files?

Yes, we have some enterprise solutions, and yes, we are waiting for the first of two TES-3085s...
and yes... one will run QES... maybe both... I hope next year we buy a few of the DC models...
and yes, we have non-enterprise models like 3x 1231, 2x 1232, 2x 431XeU,...

It is very strange that, if all the drives are 100% the same and 100% operational, writing is faster with the extra writing of parity, and that all the theory and all the RAID diagrams and calculators are wrong...
don't you think?

Re: expand raid 10?

Posted: Sun Nov 24, 2019 11:16 am
by P3R
zmaho wrote:
Sun Nov 24, 2019 10:23 am
what we do not know is the disk model and whether they were all the same :) ...
No, they're not the same.

These were the disks used:
Slot 1: Hitachi HDS722020ALA330
Slot 2: Hitachi HDS722020ALA330
Slot 3: Hitachi HDS722020ALA330
Slot 4: Hitachi HDS722020ALA330
Slot 5: Hitachi HDS722020ALA330
Slot 6: Hitachi HUA723030ALA640
Slot 7: Hitachi HUA723030ALA640
Slot 8: Seagate ST3000VN0001-1SF176

I always started from slot 1 when doing the different configurations, so the smaller configurations used only the 2 TB Hitachis, and the Seagate was only used in the 8-disk configurations.
And the most important question... HOW DID YOU DO your measurement?
Was it local, with some custom application you wrote for QTS? Or was it done via CIFS/SMB or NFS or iSCSI? Over what interface? Was it a network issue?
What was the client? What software did you use? What was the type of data? A file? One large file? A directory of small files?
I used the same test as in the test I linked to in my previous post: writing with dd from the command line on the local system, directly to the storage, so no protocol, network interface or client could be a bottleneck.
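For anyone wanting to reproduce this kind of local sequential-write test without dd, here is a rough Python equivalent (the exact dd parameters used aren't stated here, so treat this as a sketch; the file should be much larger than RAM, or the page cache will inflate the numbers):

```python
import os
import time

def seq_write_mb_per_s(path, total_mb=1024, block_kb=1024):
    """Time a sequential write of total_mb MiB in block_kb KiB chunks,
    fsync'ing at the end so the result reflects the storage, not the cache."""
    buf = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return total_mb / (time.perf_counter() - start)

# e.g. run from a shell on the NAS itself, against a path on the volume:
# print(seq_write_mb_per_s("/share/Public/testfile"))
```

Running it over SSH on the NAS, against a path on the volume under test, keeps the network out of the measurement, just like the dd approach.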
It is very strange that, if all the drives are 100% the same and 100% operational, writing is faster with the extra writing of parity, and that all the theory and all the RAID diagrams and calculators are wrong...
don't you think?
A theory to start with is that enterprise storage usually has a very transaction-intensive random load. RAID 10 performs better in that environment.

Home and SMB users usually have a load of one or a few concurrent sequential requests (primarily media and/or backup usage). RAID 5/6 performs better there (as long as the CPU is decent).

The problem may be that enterprise storage performance experience is automatically assumed to apply to all types of storage, when the usage and load in smaller environments is usually very different.

Re: expand raid 10?

Posted: Sun Nov 24, 2019 11:47 pm
by zmaho
Your issue is that one different drive... can you try the same test but with only the first 7 drives in the RAID?
And if using ``dd``, please try ``bs=4k``, 8k, 64k, and 128k... :) if you have time :) ...
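The suggested block-size sweep could be scripted like this (a hypothetical Python stand-in for repeated dd runs with different bs values; on a real NAS you would point it at a path on the volume under test and use a much larger file):

```python
import os
import tempfile
import time

def write_rate(path, block_kb, total_mb=64):
    """Sequential write of total_mb MiB using block_kb KiB blocks,
    fsync'd at the end; returns MB/s."""
    buf = b"\0" * (block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return total_mb / (time.perf_counter() - start)

# Sweep the suggested block sizes against a scratch file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
for bs in (4, 8, 64, 128):
    print(f"bs={bs}k: {write_rate(target, bs):.1f} MB/s")
os.remove(target)
```

Comparing the rates across block sizes shows whether the request size, rather than the RAID level, is what's limiting a given setup.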
The algebra should be the same... :) The implementation can be wrong... but from what I got on a TS-1231 and a TS-1232, both with 12x 1 TB SSDs and 12x 8 TB HDDs, RAID 10 is way faster :)
Even on a TS-431XeU with 4 drives, RAID 10 vs RAID 6... :) RAID 10 is way faster :) ... :)))

It is not that I do not trust you... I just can not ``wrap my mind`` around it being slower... The only thing I can imagine is that one drive is slower than the rest, so that when RAID 10 sends a write to one pair it gets slower when involving that one, and the whole write is then slow, while when doing RAID 6 only part of the stripe write is waiting...

Anyone got an idea? QNAP crew?

Re: expand raid 10?

Posted: Mon Nov 25, 2019 12:34 am
by dolbyman
Why don't YOU test it? You say you have lots of models at your disposal... then go ahead.

Also, no need to ask for QNAP here; it's a user forum. Besides presales, they do not come here.

Re: expand raid 10?

Posted: Mon Nov 25, 2019 2:09 am
by Don
zmaho wrote:
Sun Nov 24, 2019 11:47 pm
from what I got on a TS-1231 and a TS-1232, both with 12x 1 TB SSDs and 12x 8 TB HDDs, RAID 10 is way faster :)
Even on a TS-431XeU with 4 drives... :) RAID 10 is way faster :) ... :)))
Then post your test results!