Horrible Performance

Discussion on setting up QNAP NAS products.
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Horrible Performance

Post by rbeuke »

I have a new system set up in a direct-connect configuration.

NAS Port 1- 1 GbE >>> Connected to 1 Gig Router port 2

IP: 192.168.1.250
SNM: 255.255.255.0
GW: 192.168.1.1

PC: 1 GbE Port >>> Connected to 1 Gig Router port 1

IP: 192.168.1.6
SNM: 255.255.255.0
GW: 192.168.1.1

10 GbE NAS Port (QM2-2P10G1T) >>> Direct Connected to PC 10 GbE Nic (ASUS XG-C100C)

NAS IP: 192.168.2.250
NAS SNM: 255.255.255.0

PC IP: 192.168.2.251
PC SNM: 255.255.255.0

Jumbo frames are set on the PC 10 GbE NIC and on the QNAP 10 GbE NIC. The 10 gig NICs are on their own subnet, direct connected without a gateway.
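(As a sanity check on the jumbo frame setup, I can ping across the direct link from the PC with a large payload and the don't-fragment flag set; assuming both ends are configured for a 9000-byte MTU and 192.168.2.250 is the NAS 10 GbE address as above, something like this should come back clean:

ping 192.168.2.250 -f -l 8972

8972 bytes of payload plus 28 bytes of IP/ICMP headers fills a 9000-byte frame, so if either side is still at 1500 the ping will report that the packet needs to be fragmented.)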

I took the default settings and created the RAID 6 array during setup, which created a 21.8 TB pool and a 10.62 TB volume that I renamed to Storage.

I created a user with the same password as my PC login, then a share named Storage on the QNAP, and gave that user r/w access. Then I created a Windows mapped drive S: pointing to \\192.168.2.250\Storage, which is the share on the thick volume.
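(For reference, the equivalent from a command prompt would be something along the lines of:

net use S: \\192.168.2.250\Storage /user:myuser /persistent:yes

where "myuser" is just a placeholder for the account I created on the NAS.)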

I'm getting extremely slow speeds in CrystalDiskMark tests, well below what I would expect from a NAS like this based on reviews of this and similar units. First I had SMB set to 2.1 and it was slow; see SMB2.1-Test1.jpg and SMB2.1-Test2.jpg.

I then set SMB to version 3 and retested, and the results are worse; see SMB3-Test1.jpg and SMB3-Test2.jpg.
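(To confirm which dialect a mapped drive is actually negotiating, PowerShell on the PC can show it while the S: drive is connected, something like:

Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

which lists 192.168.2.250 along with the SMB dialect in use.)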

Please offer suggestions and/or support on this, as it does not seem right at all.

Ryan
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

Also note the model is a TVS-673e with 6x Seagate 6 TB 7200 RPM drives, and the PC is running Windows 10.
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: Horrible Performance

Post by storageman »

Get it working simply before jumping into using Jumbo frames. Turn them off!
The poor sequential speeds point to a network issue.
What internal speeds are you getting?
Use PuTTY/SSH and run "qcli_storage -t"
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

Ok, I ran the command above and the output below looks much better.

[~] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --directory=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 Storage 1 /dev/mapper/cachedev1 812.84 MB/s /share/CACHEDEV1_DATA 600.94 MB/s
[~] #


I turned off jumbo frames and ran some tests again. I guess since it's direct attached via a crossover cable, I figured I didn't have much to worry about other than setting jumbo frames on the QNAP 10 GbE interface and on the local 10 GbE interface, but the performance is still nowhere near the output from the SSH command above.
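(To double-check that the MTU really dropped back to 1500 on the Windows side after turning jumbo frames off, something like this lists the MTU per interface:

netsh interface ipv4 show subinterfaces

and the 10 GbE adapter should show 1500 again.)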
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

I went ahead and mapped a share on the QNAP over the 1 GbE interface that connects back to my Netgear Orbi router, using the \\192.168.1.250\Storage mount point.

Those look more in line with what I would expect from a 1 GbE interface. It's still strange that it shows worse reads than writes in the 4K seq, 32-queue, 1-thread test though.

Bad cable on the 10 GbE, or a bad NIC on either side? The config looks relatively straightforward since, like I mentioned above, I have them both on a private subnet with no gateway.
Don
Guru
Posts: 12289
Joined: Thu Jan 03, 2008 4:56 am
Location: Long Island, New York

Re: Horrible Performance

Post by Don »

Did you check to see if they are actually connecting at 10 GbE?
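For example, PowerShell on the PC shows the negotiated link speed per adapter:

Get-NetAdapter | Format-Table Name, Status, LinkSpeed

and over SSH on the NAS, ethtool reports the same for the 10 GbE port, if ethtool is present there (the interface name varies by model; eth4 is just an example):

ethtool eth4 | grep Speed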
Use the forum search feature before posting.

Use RAID and external backups. RAID will protect you from disk failure, keep your system running, and data accessible while the disk is replaced, and the RAID rebuilt. Backups will allow you to recover data that is lost or corrupted, or from system failure. One does not replace the other.

NAS: TVS-882BR | F/W: 5.0.1.2346 | 40GB | 2 x 1TB M.2 SATA RAID 1 (System/VMs) | 3 x 1TB M.2 NVMe QM2-4P-384A RAID 5 (cache) | 5 x 14TB Exos HDD RAID 6 (Data) | 1 x Blu-ray
NAS: TVS-h674 | F/W: 5.0.1.2376 | 16GB | 3 x 18TB RAID 5
Apps: DNSMasq, PLEX, iDrive, QVPN, QLMS, MP3fs, HBS3, Entware, DLstation, VS, +
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

Yes, when you look in Windows the NIC shows 10 GbE, and the NIC shows as connected in QNAP.

And when I ran the CrystalDiskMark test, I looked in the monitor and could see IO going over adapter 5 (10 GbE) and none over adapter 1 (1 GbE).

Additionally, I know it's connecting above 1 GbE because it shows some transfers over 125 MB/s, which is the max for a 1 Gig NIC.
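(1 Gbit/s divided by 8 bits per byte is 125 MB/s, so that's the hard ceiling of the 1 GbE link before protocol overhead; anything sustained above that has to be going over the 10 GbE path.)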
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: Horrible Performance

Post by storageman »

Can you use service binding to make sure IO only goes down the 10 GbE?

Separately, reset service binding and try connecting a 1 GbE port to the Asus card; the card should downgrade to 1 GbE. Does it also show poor reads?

I would suspect that Asus card.
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: Horrible Performance

Post by Trexx »

What specific model of 6 TB drives are you using? What QTS version & build are you running? What are the specific make/model of the 10 GbE NICs in both your QNAP & PC?

I would recommend installing QNAP's diagnostics utility and then running the HDD Performance Test to check whether any drives are showing abnormal results. Although based on the test you did above, it sounds unlikely.
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

@Trexx: 6 TB IronWolf.

10 GbE NAS Port (QM2-2P10G1T) >>> Direct Connected to PC 10 GbE Nic (ASUS XG-C100C)

Latest QTS version.

I will install the diag tool later today. Does it have a name?

@storageman, can you provide details on service binding? If not, I'll Google it later today.
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: Horrible Performance

Post by storageman »

As I suspected, a crappy Asus card???
https://community.netgear.com/t5/Smart- ... -p/1369009

And usually I quite like Asus kit!

Maybe there's an Asus firmware fix?!?!
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

Hmm, I did update the driver. I'll look for firmware on their site later.
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: Horrible Performance

Post by storageman »

Did you test it over 1 GbE as I asked?

I would send your card back and get an Intel X520.
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

I didn't get a chance to do any more testing last night. It will probably be later today when I get back from work. I will report back.
rbeuke
Starting out
Posts: 22
Joined: Mon May 21, 2018 2:50 am

Re: Horrible Performance

Post by rbeuke »

@storageman

OK, so I got home and got to thinking: could it be a bad Cat 6 crossover cable? I did a netstat -s and saw lots of segments being retransmitted, with the counter steadily incrementing.
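(On Windows, netstat -s reports retransmissions under "TCP Statistics for IPv4" as "Segments Retransmitted"; watching that counter climb during a copy seems like a reasonable hint of a flaky cable or NIC.)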

I swapped it out with another, older Cat5e patch/crossover cable I had lying around, and it's actually better: higher than 125 MB/s over the same network card.
DisableJF10GbEcat5.JPG
I then turned on Jumbo Frames on both sides and ran the test again.
EnableJF10GbEcat5e.JPG
I then enabled SMB 3 as the highest SMB version and the results were about the same, actually a little lower in some tests and higher in another.
SMB3EnableJF10GbEcat5e.JPG
So it appears that it is a cable issue!!! Now I assume that the reason I'm not getting the 800 MB/s read / 600 MB/s write throughput that I'm getting internal to the array is the Cat5e cable vs. a Cat6 or Cat7 cable?

Also, is there any recommendation on whether you should set the SMB version to 2.1 vs. 3? It looks like 3 did slightly better on the random 4K with 32 threads test.