RAID 5 performance over 2,5gb network
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
RAID 5 performance over 2,5gb network
Hello everyone. Yesterday I upgraded my home network infrastructure from Gigabit to the 2.5GbE standard. All devices are wired with Cat6 cables and interconnected through 2.5/5GbE switches. My 653D went from a 2Gb trunk to a 5Gb trunk. Before the upgrade I saturated the Gigabit link to the main PC in both directions, with a steady 115 MB/s on large file transfers, i.e. sequential reads and sequential writes on the NAS.
Now, with the infrastructure upgraded and the main PC also on 2.5GbE, I tested everything and stumbled upon very anomalous behavior. Given that the main PC is a workstation with no performance limitations of any kind at the speeds in play, I find reads from the NAS locked at 150-155 MB/s while writes to the NAS oscillate between 260 and 280 MB/s. On the 653D, with a RAID 5 of 3x 12TB Red Pro drives, that makes no sense. In my experience RAID 5 has always had lower write speeds than read speeds, because of the four I/O steps it has to make when updating parity.
Am I missing something?
Thank you.
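As a sanity check on that expectation, the usual back-of-envelope figures can be sketched as follows (the ~180 MB/s per-disk number is an assumption for illustration, not a measurement):

```python
# Rough RAID 5 expectations (illustrative figures, not measurements).
# Sequential reads stripe across all member disks, so an N-disk RAID 5
# should read at roughly (N - 1) x single-disk throughput at the md layer.

def raid5_seq_read_estimate(n_disks: int, per_disk_mbps: float) -> float:
    """Ideal sequential read throughput for an N-disk RAID 5, in MB/s."""
    return (n_disks - 1) * per_disk_mbps

def raid5_write_penalty_ios(logical_writes: int) -> int:
    """Each small random write costs 4 back-end I/Os: read old data,
    read old parity, write new data, write new parity."""
    return logical_writes * 4

# 3x WD Red Pro, assuming ~180 MB/s sustained per disk:
print(raid5_seq_read_estimate(3, 180.0))  # ideal: 360 MB/s
print(raid5_write_penalty_ios(100))       # 400 back-end I/Os
```

So even a conservative per-disk estimate puts the array's ideal sequential read well above the 150-155 MB/s observed over the network.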
- dolbyman
- Guru
- Posts: 35253
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID 5 performance over 2,5gb network
just use one connection (no trunking) and test again
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
Same result.
Read: 150-155 MB/s
Write: 260-280 MB/s
Tried on each single 2.5GbE interface.
It still makes no sense to me.
Any other explanation?
- dolbyman
- Guru
- Posts: 35253
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID 5 performance over 2,5gb network
Any caching used, or just the plain disks?
Static volume, or thin/thick volumes on a pool?
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
1) No cache
2) Static volume
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
Tested with iperf3:
2.34 Gbit/s on both sender and receiver.
In addition to the RAID 5 group, I also have 3 single disks installed in the NAS, and I get the same pattern with them (150 MB/s read / 230 MB/s write). I also tried restarting the NAS, but it doesn't change anything.
Any other suggestions?
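For reference, converting the iperf3 result gives the network ceiling for file transfers; a quick sketch of the arithmetic:

```python
# Convert the iperf3 TCP payload rate to a file-transfer upper bound.
# SMB adds some protocol overhead on top, so real transfers sit a bit below.

def gbits_to_mbytes(gbits_per_s: float) -> float:
    """Decimal Gbit/s -> decimal MB/s."""
    return gbits_per_s * 1000.0 / 8.0

ceiling = gbits_to_mbytes(2.34)
print(f"{ceiling:.1f} MB/s")  # 292.5 MB/s network ceiling
```

So the 260-280 MB/s writes are close to the wire ceiling, while the 150-155 MB/s reads are nowhere near it, which points away from the network itself.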
- dolbyman
- Guru
- Posts: 35253
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID 5 performance over 2,5gb network
probably a good idea to check with QNAP via ticket
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
I opened a ticket; I will update when I receive more info.
Meanwhile I also ran a local performance test, and the problem is evident in the FS_Throughput of the RAID volume.
Code: Select all
[~] # qcli_storage -T
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput RAID RAID_Type RAID_Throughput Pool
NAS_HOST 1 /dev/sde 172.21 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 2 /dev/sdf 183.20 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 3 /dev/sdc 189.75 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 4 /dev/sdd 201.27 MB/s /dev/md2 Single 192.99 MB/s 289
NAS_HOST 5 /dev/sda 203.25 MB/s /dev/md3 Single 193.04 MB/s 290
NAS_HOST 6 /dev/sdb 205.59 MB/s /dev/md4 Single 198.38 MB/s 291
[~] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 288 /dev/mapper/cachedev1 280.10 MB/s /share/CACHEDEV1_DATA 157.44 MB/s
2 PLOT1 289 /dev/mapper/cachedev2 184.27 MB/s /share/CACHEDEV2_DATA 215.85 MB/s
3 PLOT2 290 /dev/mapper/cachedev3 200.03 MB/s /share/CACHEDEV3_DATA 207.12 MB/s
4 PLOT3 291 /dev/mapper/cachedev4 196.13 MB/s /share/CACHEDEV4_DATA 107.02 MB/s
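Reading the DataVol1 row layer by layer (numbers copied from the output above), the big collapse happens between the device-mapper layer and the filesystem, not at the disks or the RAID; a small sketch of the comparison:

```python
# DataVol1 read throughput at each storage layer, from qcli_storage output:
# md layer (RAID) -> LV/device-mapper layer -> filesystem.
layers = {
    "md1 (RAID 5)":   299.29,  # /dev/md1
    "LV (cachedev1)": 280.10,  # /dev/mapper/cachedev1
    "Filesystem":     157.44,  # /share/CACHEDEV1_DATA
}

prev_name, prev_val = None, None
for name, val in layers.items():
    if prev_val is not None:
        drop = (1 - val / prev_val) * 100
        print(f"{prev_name} -> {name}: {drop:.0f}% drop")
    prev_name, prev_val = name, val
# md1 -> LV: ~6% drop; LV -> Filesystem: ~44% drop
```

A ~44% loss at the filesystem layer alone lines up with the 150-155 MB/s seen over the network.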
- dolbyman
- Guru
- Posts: 35253
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID 5 performance over 2,5gb network
So yes... very poor performance on cachedev1. Here is my old 853BU with 6 drives in RAID 6:
Code: Select all
[/] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
This result was tested on Tue Jan 24 09:14:56 2023.
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 1 /dev/mapper/cachedev1 343.56 MB/s /share/CACHEDEV1_DATA 359.55 MB/s
[/] #
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
This is QNAP's answer:
“Good morning, the speed achieved is correct. A trunk port aggregates bandwidth, but does not increase speed.
A trunk port is a bit like adding a lane to a highway: there is more room to travel on that stretch, but the cars still travel at a maximum of 130 km/h.
For better performance, an SSD-based infrastructure is usually used, or you can consider 10GbE solutions, which will still require adequate compute and storage performance in the infrastructure.
Thank you”
I'm speechless… I didn't think QNAP's support had dropped to such a low level over time.
I naturally specified that I am getting 150 MB/s reading from the NAS and 270 MB/s writing to the NAS, and my question in the ticket was: what is limiting my read speed on the NAS?
- dolbyman
- Guru
- Posts: 35253
- Joined: Sat Feb 12, 2011 2:11 am
- Location: Vancouver BC , Canada
Re: RAID 5 performance over 2,5gb network
Just reply to them that an FS test revealed slow performance; the type and number of network adapters doesn't even come into play at that point
- Toxic17
- Ask me anything
- Posts: 6477
- Joined: Tue Jan 25, 2011 11:41 pm
- Location: Planet Earth
- Contact:
Re: RAID 5 performance over 2,5gb network
Just ruling out other things whilst thinking out loud...
Was there any RAID scrubbing or rebuilding running whilst the tests were running? Have you checked the filesystem, and have all S.M.A.R.T. tests run on ALL disks with everything OK? Just trying to rule anything out of the slow throughput.
Regards Simon
Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following
NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
-
- Starting out
- Posts: 35
- Joined: Sat Sep 24, 2011 4:57 pm
Re: RAID 5 performance over 2,5gb network
Toxic17 wrote: Just ruling out other things whilst thinking out loud... Was there any RAID scrubbing or rebuilding running whilst the tests were running? Have you checked the filesystem, and have all S.M.A.R.T. tests run on ALL disks with everything OK?
All SMART results are good; all tests passed.
No scrubbing or rebuilding during the tests.
Anyway, I have now lost hope.
After QNAP's answer I tried to explain the situation better; maybe it was my fault that I got that kind of answer about port trunking. So this was my reply:
Code: Select all
Hello, thank you for the response and the clarification on port trunking. Being a professional, I know how port trunking works. The ticket is about the fact that the RAID 5 volume, as well as the single-disk volumes, has a GREATER write speed than read speed. You will understand that this is quite strange, as physical devices and especially RAID groups always have higher read speeds than write speeds. You can also see that 150 MB/s reads are slow even for HDDs from about 20 years ago, especially when the same RAID group writes at 270 MB/s. To eliminate any doubt about client-side problems, I performed a local performance test on the NAS. As you can see, the cachedev1 filesystem above all, but also the others to a lesser extent, certainly has a problem at the software level, not the hardware level. I kindly ask for your support in understanding the reasons.
[~] # qcli_storage -T
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput RAID RAID_Type RAID_Throughput Pool
NAS_HOST 1 /dev/sde 172.21 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 2 /dev/sdf 183.20 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 3 /dev/sdc 189.75 MB/s /dev/md1 RAID 5 299.29 MB/s 288
NAS_HOST 4 /dev/sdd 201.27 MB/s /dev/md2 Single 192.99 MB/s 289
NAS_HOST 5 /dev/sda 203.25 MB/s /dev/md3 Single 193.04 MB/s 290
NAS_HOST 6 /dev/sdb 205.59 MB/s /dev/md4 Single 198.38 MB/s 291
[~] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 288 /dev/mapper/cachedev1 280.10 MB/s /share/CACHEDEV1_DATA 157.44 MB/s
2 PLOT1 289 /dev/mapper/cachedev2 184.27 MB/s /share/CACHEDEV2_DATA 215.85 MB/s
3 PLOT2 290 /dev/mapper/cachedev3 200.03 MB/s /share/CACHEDEV3_DATA 207.12 MB/s
4 PLOT3 291 /dev/mapper/cachedev4 196.13 MB/s /share/CACHEDEV4_DATA 107.02 MB/s
“Hello,
Is the disk of the PC from which you are conducting the tests an SSD or an HDD?
Can you also send us the specs of the computer?
Thank you.”
I really have no more words.