TVS-872X - Fast write but slow read speed 8x Exos X16

Printers, HDDs, USB/eSATA drives, 3rd-party programs
Houbi
Starting out
Posts: 14
Joined: Sat Jun 08, 2013 6:00 pm

TVS-872X - Fast write but slow read speed 8x Exos X16

Post by Houbi »

Hello everyone

I am facing a weird issue: I get write speeds of about 850-950 MB/s to my 8-disk RAID6, but only 650 MB/s on reads. From my SSD RAID10 I get 1.15 GB/s in both directions. I tried with and without NVMe caching. The testing client, its source drives, the switches etc. can all easily sustain 1.15 GB/s read/write.

QNAP: TVS-872X
OS: QuTS hero h.4.5.3.1670
RAM: 32GB (2x16GB, Crucial. Swapped to original no change)
System/Apps: Samsung 980 Pro in RAID1 (in QM2-4P-384 Controller)
SSD: 4 x Samsung Evo 860 2TB in RAID10 (in QM2-4S-240 Controller)
HDD: 8 x Seagate Exos 14TB X16 in RAID6 (latest Firmware)
Cache: 2 x Samsung Evo 970 Plus 1TB (1.78TB read, 10GB ZIL, in QM2-4P-384 Controller)
Network: 10GbE, Static IP, MTU 9000, all Switches Jumbo Frames enabled, network is fine

Dedup is off; toggling compression or caching on or off makes no difference. IOTop shows between 1400 MB/s and 1600 MB/s during the read tests, but only 650 MB/s arrives at my client. Any ideas on how to troubleshoot this are highly welcome.
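For reference, a quick back-of-envelope check of what the array itself should manage (the ~250 MB/s per-disk figure is an assumption based on typical Exos X16 sequential specs, not a measurement):

```shell
# Back-of-envelope sanity check (assumes ~250 MB/s sequential per Exos X16):
# RAID6 across 8 disks leaves 6 data disks, so the array should comfortably
# outrun a 10GbE link -- the spindles are an unlikely bottleneck here.
data_disks=6
per_disk_mbs=250
echo "theoretical array read: $((data_disks * per_disk_mbs)) MB/s"
echo "10GbE line rate: ~1250 MB/s"
```

This matches the 1400-1600 MB/s seen in IOTop, which points at something between the pool and the client rather than the disks.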


Thank you,
Houbi
qrusher
Know my way around
Posts: 143
Joined: Thu Mar 15, 2018 9:51 pm
Location: R/U/serious

Re: TVS-872X - Fast write but slow read speed 8x Exos X16

Post by qrusher »

Do you have an SSD cache enabled?

SSH into the NAS and run the following commands; they should work on your model:

Code: Select all

qcli_storage -d

Code: Select all

qcli_storage -T force=1

Code: Select all

qcli_storage -t force=1
Post the reports the NAS produces. They will show what your RAID is actually capable of.
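If fio is awkward to drive by hand, a rough sequential-read spot-check can also be done with dd. This sketch reads back a scratch file so it is safe to paste anywhere; on the NAS you would instead point `if=` at a member disk from the `-d` listing (read-only, as admin) and add `iflag=direct`:

```shell
# Rough sequential-read spot-check with dd (sketch, not the QNAP tool):
# writes then reads back a 64 MiB scratch file; dd's final status line
# reports the achieved throughput.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=64 2>/dev/null
dd if="$scratch" of=/dev/null bs=1M 2>&1 | tail -n 1   # last line shows MB/s
rm -f "$scratch"
```

Note the scratch-file variant mostly measures cache; it is only meant to show the command shape before running it against a real device.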
...
Houbi

Re: TVS-872X - Fast write but slow read speed 8x Exos X16

Post by Houbi »

qrusher wrote: Sat Jun 19, 2021 12:32 am
do you have any SSD cache enabled?
[...]
post the reports that come out of the NAS. this will show what your RAID's performance is capable of.
...
Thanks! One of my Exos drives died last night, so I have to wait for a replacement first. I will update the thread later on.
Houbi

Re: TVS-872X - Fast write but slow read speed 8x Exos X16

Post by Houbi »

qrusher wrote: Sat Jun 19, 2021 12:32 am

Code: Select all

qcli_storage -d
Enclosure Port Sys_Name Type Size Alias Signature Partitions Model
NAS_HOST 1 /dev/sdk HDD:data 12.73 TB Disk 1 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 2 /dev/sdl HDD:data 12.73 TB Disk 2 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 3 /dev/sdf HDD:data 12.73 TB Disk 3 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 4 /dev/sde HDD:data 12.73 TB Disk 4 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 5 /dev/sdd HDD:data 12.73 TB Disk 5 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 6 /dev/sdc HDD:data 12.73 TB Disk 6 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 7 /dev/sdb HDD:data 16.37 TB Disk 7 -- 5 Seagate ST18000NM000J-2TV103
NAS_HOST 8 /dev/sda HDD:data 12.73 TB Disk 8 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST P1-1 /dev/nvme0n1 SSD:cache 931.51 GB PCIe 1 M.2 SSD 1 -- 6 Samsung SSD 970 EVO Plus 1TB
NAS_HOST P1-2 /dev/nvme1n1 SSD:cache 931.51 GB PCIe 1 M.2 SSD 2 -- 6 Samsung SSD 970 EVO Plus 1TB
NAS_HOST P1-3 /dev/nvme2n1 SSD:data 465.76 GB PCIe 1 M.2 SSD 3 -- 5 Samsung SSD 980 PRO 500GB
NAS_HOST P1-4 /dev/nvme3n1 SSD:data 465.76 GB PCIe 1 M.2 SSD 4 -- 5 Samsung SSD 980 PRO 500GB
NAS_HOST P2-1 /dev/sdh SSD:data 1.82 TB PCIe 2 M.2 SSD 1 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-2 /dev/sdg SSD:data 1.82 TB PCIe 2 M.2 SSD 2 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-3 /dev/sdi SSD:data 1.82 TB PCIe 2 M.2 SSD 3 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-4 /dev/sdj SSD:data 1.82 TB PCIe 2 M.2 SSD 4 -- 5 Samsung SSD 860 EVO M.2 2TB

Code: Select all

qcli_storage -T force=1
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput Type Size Alias Signature Partitions Model
NAS_HOST 1 /dev/sdk -- HDD:data 12.73 TB Disk 1 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 2 /dev/sdl -- HDD:data 12.73 TB Disk 2 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 3 /dev/sdf -- HDD:data 12.73 TB Disk 3 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 4 /dev/sde -- HDD:data 12.73 TB Disk 4 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 5 /dev/sdd -- HDD:data 12.73 TB Disk 5 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 6 /dev/sdc -- HDD:data 12.73 TB Disk 6 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST 7 /dev/sdb -- HDD:data 16.37 TB Disk 7 -- 5 Seagate ST18000NM000J-2TV103
NAS_HOST 8 /dev/sda -- HDD:data 12.73 TB Disk 8 -- 5 Seagate ST14000NM001G-2KJ103
NAS_HOST P1-1 /dev/nvme0n1 -- SSD:cache 931.51 GB PCIe 1 M.2 SSD 1 -- 6 Samsung SSD 970 EVO Plus 1TB
NAS_HOST P1-2 /dev/nvme1n1 -- SSD:cache 931.51 GB PCIe 1 M.2 SSD 2 -- 6 Samsung SSD 970 EVO Plus 1TB
NAS_HOST P1-3 /dev/nvme2n1 -- SSD:data 465.76 GB PCIe 1 M.2 SSD 3 -- 5 Samsung SSD 980 PRO 500GB
NAS_HOST P1-4 /dev/nvme3n1 -- SSD:data 465.76 GB PCIe 1 M.2 SSD 4 -- 5 Samsung SSD 980 PRO 500GB
NAS_HOST P2-1 /dev/sdh -- SSD:data 1.82 TB PCIe 2 M.2 SSD 1 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-2 /dev/sdg -- SSD:data 1.82 TB PCIe 2 M.2 SSD 2 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-3 /dev/sdi -- SSD:data 1.82 TB PCIe 2 M.2 SSD 3 -- 5 Samsung SSD 860 EVO M.2 2TB
NAS_HOST P2-4 /dev/sdj -- SSD:data 1.82 TB PCIe 2 M.2 SSD 4 -- 5 Samsung SSD 860 EVO M.2 2TB

Code: Select all

qcli_storage -t force=1
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID SharedFolderName Pool Mapping_Name Mount_Path FS_Throughput
2 Public 1 zpool1/zfs2 /share/ZFS2_DATA ▒4@
3 Folder 3 zpool3/zfs18 /share/ZFS18_DATA ▒4@
4 Folder 2 zpool2/zfs19 /share/ZFS19_DATA ▒4@
5 Folder 3 zpool3/zfs20 /share/ZFS20_DATA ▒4@
6 Folder 3 zpool3/zfs21 /share/ZFS21_DATA ▒4@
7 Folder 3 zpool3/zfs22 /share/ZFS22_DATA ▒4@
8 Folder 3 zpool3/zfs23 /share/ZFS23_DATA ▒4@
9 Folder 3 zpool3/zfs24 /share/ZFS24_DATA ▒4@
10 Folder 3 zpool3/zfs25 /share/ZFS25_DATA ▒4@
11 Folder 3 zpool3/zfs26 /share/ZFS26_DATA ▒4@
12 Folder 3 zpool3/zfs27 /share/ZFS27_DATA ▒4@
13 Folder 3 zpool3/zfs29 /share/ZFS29_DATA ▒4@
14 Folder 3 zpool3/zfs30 /share/ZFS30_DATA ▒4@
15 Folder 3 zpool3/zfs31 /share/ZFS31_DATA ▒4@
16 Folder 3 zpool3/zfs32 /share/ZFS32_DATA ▒4@
17 Folder 3 zpool3/zfs33 /share/ZFS33_DATA ▒4@
18 Folder 3 zpool3/zfs34 /share/ZFS34_DATA ▒4@
19 Folder 3 zpool3/zfs35 /share/ZFS35_DATA ▒4@
20 Folder 1 zpool1/zfs36 /share/ZFS36_DATA ▒4@
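The FS_Throughput column above came out garbled, so the same kind of per-share read test can be run by hand. This is a sketch in the spirit of the `qcli_storage -t` fio invocation shown earlier; it assumes fio is available (qcli_storage itself shells out to `/sbin/fio`), and the `TARGET` path is a placeholder you would point at the share under test, e.g. a file under `/share/ZFS2_DATA`:

```shell
# Manual filesystem read test mirroring qcli_storage -t (sketch; assumes
# fio is installed). TARGET is a placeholder -- on the NAS, set it to a
# file on the share you want to measure.
TARGET=$(mktemp)
if command -v fio >/dev/null 2>&1; then
    fio --name=share-read --filename="$TARGET" --rw=read --bs=1M \
        --size=128m --runtime=15 --ioengine=libaio --direct=0
else
    echo "fio not found; run this on the NAS instead"
fi
rm -f "$TARGET"
```

The reported bandwidth per share should make it obvious whether the 650 MB/s ceiling already exists at the filesystem layer or only appears over the network.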
I also opened a case with QNAP; after some back and forth, they opened a developer ticket for it today.

Best,
Houbi
