
Horrible Performance

Posted: Mon May 21, 2018 2:57 am
by rbeuke
I have a new system set up in a direct-connect configuration.

NAS Port 1- 1 GbE >>> Connected to 1 Gig Router port 2

IP: 192.168.1.250
SNM: 255.255.255.0
GW: 192.168.1.1

PC: 1 GbE Port >>> Connected to 1 Gig Router port 1

IP: 192.168.1.6
SNM: 255.255.255.0
GW: 192.168.1.1

10 GbE NAS Port (QM2-2P10G1T) >>> Direct Connected to PC 10 GbE Nic (ASUS XG-C100C)

NAS IP: 192.168.2.250
NAS SNM: 255.255.255.0

PC IP: 192.168.2.251
PC SNM: 255.255.255.0

Jumbo frames are set on the PC 10 GbE NIC and on the QNAP 10 GbE NIC. The 10 GbE NICs are on their own subnet, direct connected without a gateway.
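
A quick way to sanity-check that the jumbo-frame path really works end to end (a sketch, assuming an MTU of 9000 on both 10 GbE NICs) is a don't-fragment ping from the PC to the NAS's 10 GbE address:

# From the PC, in PowerShell: 8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers
ping -f -l 8972 192.168.2.250
# Baseline check at the standard 1500-byte MTU (1472 + 28)
ping -f -l 1472 192.168.2.250

If the 8972-byte ping reports "Packet needs to be fragmented" while the 1472-byte one succeeds, jumbo frames are not actually in effect on at least one side of the link.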

I took the default settings and created the RAID 6 array during setup, which created a 21.8 TB pool and a 10.62 TB volume that I renamed to Storage.

I created a user with the same password as my PC, then created a share named Storage on the QNAP and gave that user r/w access. Then I created a Windows mapped drive S: pointing to \\192.168.2.250\Storage, which is the share on the thick volume.
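
For reference, the equivalent mapping from the command line would be something along these lines, where NAS_USER is just a placeholder for the NAS account created above (it isn't named in the post):

# From the PC, in PowerShell: map S: to the Storage share over the 10 GbE subnet
net use S: \\192.168.2.250\Storage /persistent:yes /user:NAS_USER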

I'm getting extremely slow speeds in CrystalDiskMark tests, well below what I would expect from a NAS like this based on reviews of this and similar units. First I had SMB set to 2.1 and it was slow; see SMB2.1-Test1.jpg and SMB2.1-Test2.jpg.

I then set SMB to version 3 and retested, and the results are worse; see SMB3-Test1.jpg and SMB3-Test2.jpg.
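
One way to confirm which SMB dialect the mapped drive is actually negotiating, rather than trusting the NAS-side setting, is to run the following in an elevated PowerShell window on the PC while the S: drive is in use:

Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

A Dialect of 3.x there confirms the SMB 3 setting on the NAS took effect for that session; 2.x means the connection negotiated the older dialect.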

Please offer suggestions and/or support on this, as this does not seem right at all.

Ryan

Re: Horrible Performance

Posted: Mon May 21, 2018 2:59 am
by rbeuke
Also note the model is a TVS-673e with 6x Seagate 6 TB 7200 RPM drives, and the PC is Windows 10.

Re: Horrible Performance

Posted: Mon May 21, 2018 6:19 pm
by storageman
Get it working simply before jumping into jumbo frames. Turn them off!
The poor sequential speeds point to a network issue.
What internal speeds are you getting?
Use PuTTY/SSH and run "qcli_storage -t".
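
For anyone following along without PuTTY, the same test can be run from any SSH client, assuming the default admin account and the NAS address given earlier in the thread:

ssh admin@192.168.2.250
# then, at the NAS shell prompt:
qcli_storage -t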

Re: Horrible Performance

Posted: Mon May 21, 2018 8:34 pm
by rbeuke
Ok, I ran the command above and the output below looks much better.

[~] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --directory=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 Storage 1 /dev/mapper/cachedev1 812.84 MB/s /share/CACHEDEV1_DATA 600.94 MB/s
[~] #


I turned off jumbo frames and ran some tests again. I guess since it's direct attached via a crossover cable, I figured I didn't have much to worry about other than setting jumbo frames on the QNAP 10 GbE interface and on the local 10 GbE interface. The performance is still nowhere near the output from the SSH command above.
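
One thing worth double-checking when toggling jumbo frames is that the MTU really matches on both ends of the direct link, since a mismatch can produce exactly this kind of result. A rough sketch of how to check both sides:

# On the PC, in PowerShell: list the MTU of each interface
netsh interface ipv4 show subinterfaces

# On the NAS over SSH: print interface details including MTU (the 10 GbE interface name varies by model)
ifconfig | grep -i mtu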

Re: Horrible Performance

Posted: Mon May 21, 2018 8:50 pm
by rbeuke
I went ahead and mounted a share on the QNAP over the 1 GbE interface that connects back to my Netgear Orbi router, using the \\192.168.1.250\Storage mount point.

Those look more in line with what I would expect from a 1 GbE interface. It's still strange that it has worse reads than writes in the 4K seq, 32-queue, 1-thread test, though.

Bad cable on the 10 GbE, or a bad NIC on either side? The config looks relatively straightforward since, like I mentioned above, I have them both on a private subnet with no gateway.
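
One way to separate a cable/NIC problem from an SMB or disk problem is a raw TCP throughput test between the two 10 GbE addresses, for example with iperf3. This is only a sketch: iperf3 is not part of stock QTS, so it assumes you can install it on the NAS (e.g. via Entware) as well as on the PC:

# On the NAS (server side)
iperf3 -s

# On the PC (client side), aimed at the NAS's 10 GbE address, 4 parallel streams for 30 seconds
iperf3 -c 192.168.2.250 -P 4 -t 30

If that lands well under ~9 Gbit/s, the problem is in the cable, NICs, or drivers rather than in SMB or the storage pool.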

Re: Horrible Performance

Posted: Mon May 21, 2018 9:13 pm
by Don
Did you check to see if they are actually connecting at 10 Gb?

Re: Horrible Performance

Posted: Mon May 21, 2018 9:37 pm
by rbeuke
Yes, when you look in Windows the NIC shows 10 GbE, and the NIC shows connected in QNAP.

And when I ran the DiskMark test I looked in the monitor and could see IO going over adapter 5 (10 GbE) and none over adapter 1 (1 GbE).

Additionally, I know it's connecting above 1 GbE because it shows some transfers over 125 MB/s, which is the max for a 1 Gig NIC.
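
For the record, the negotiated link speed can also be read directly on the Windows side instead of being inferred from throughput, e.g. in PowerShell:

Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed

The XG-C100C should show 10 Gbps there when the link has actually come up at full speed.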

Re: Horrible Performance

Posted: Mon May 21, 2018 9:57 pm
by storageman
Can you use service binding to make sure IO only goes down the 10 GbE?

Separately, reset service binding and try connecting a 1 GbE port to the Asus card; the card should downgrade to 1 GbE. Does it also show poor reads?

I would suspect that Asus card.

Re: Horrible Performance

Posted: Mon May 21, 2018 10:00 pm
by Trexx
What specific model of 6 TB drives are you using? What QTS version & build are you running? What are the specific make/model of the 10 GbE NICs in both your QNAP & PC?

I would recommend installing QNAP's diagnostics utility and then running the HDD performance test to check whether any drives are showing abnormal results, although based on the test you did above that sounds unlikely.

Re: Horrible Performance

Posted: Tue May 22, 2018 12:11 am
by rbeuke
@Trexx: 6 TB IronWolf.

10 GbE NAS Port (QM2-2P10G1T) >>> Direct Connected to PC 10 GbE Nic (ASUS XG-C100C)

Latest QTS version.

I will install the diag tool later today. Does it have a name?

@storageman: can you provide details on service binding? If not, I'll google it later today.

Re: Horrible Performance

Posted: Tue May 22, 2018 12:17 am
by storageman
As I suspected, crappy Asus card???
https://community.netgear.com/t5/Smart- ... -p/1369009

And usually I quite like Asus kit!

Maybe there's an Asus firmware fix?!?!

Re: Horrible Performance

Posted: Tue May 22, 2018 3:01 am
by rbeuke
Hmm, I did update the driver. I'll look for firmware on their site later.

Re: Horrible Performance

Posted: Tue May 22, 2018 3:47 pm
by storageman
Did you test it over 1 GbE as I asked?

I would send your card back and get an Intel X520.

Re: Horrible Performance

Posted: Tue May 22, 2018 6:05 pm
by rbeuke
I didn't get a chance to do any more testing last night. It will probably be later today when I get back from work. I will report back.

Re: Horrible Performance

Posted: Wed May 23, 2018 7:04 am
by rbeuke
@storageman

OK, so I got home and got to thinking: could it be a bad Cat 6 crossover cable? I did a netstat -s and saw lots of segments retransmitting, with the count incrementing.
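
For reference, the counter in question can be pulled straight out of the Windows TCP statistics; a "Segments Retransmitted" value that keeps climbing between runs is a classic symptom of a bad cable or flaky link:

netstat -s -p tcp | findstr /i "retrans"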

I swapped it out with another, older Cat 5e patch/crossover cable I had lying around and it's actually better, higher than 125 MB/s over the same network card.
DisableJF10GbEcat5.JPG
I then turned on Jumbo Frames on both sides and ran the test again.
EnableJF10GbEcat5e.JPG
I then enabled SMB 3 as the highest SMB version, and the results were about the same: actually a little less in some tests and higher in another.
SMB3EnableJF10GbEcat5e.JPG
So it appears that it is a cable issue!!! Now I assume that the reason I'm not getting the 800 MB/s read / 600 MB/s write throughput that I'm getting internal to the array is the Cat 5e cable vs. a Cat 6 or Cat 7 cable?

Also, is there any recommendation on whether you should set the SMB version to 2.1 vs 3? It looks like 3 did slightly better on the random 4K with 32 threads test.