TS-1635, Slow disk r/w speeds via iSCSI
Posted: Mon Jul 31, 2017 3:38 pm
Hello, I am running the following:
Quanta LB6M Switch
Dell R610, dual hex-core Xeons, 64GB RAM, connected to the Quanta via TwinAx 10GbE, running ESXi 6.5 U1
TS-1635 connected to the Quanta via TwinAx 10GbE
The TS-1635 is configured with:
8 x 2TB disks in RAID 10, plus 2 additional spare disks
2 x 3TB disks in RAID 1, for the initial volume that was created
4 x 250GB Samsung 850 EVO SSDs
I have several VMs created on the large RAID 10 volume on the spinning disks.
The SSD LUN contains my single Ubuntu Plex VM.
ESXi and the Quanta switch are set up for jumbo frames.
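To confirm jumbo frames are actually working end-to-end (not just configured), I believe I can send a do-not-fragment ping sized for a 9000-byte MTU from the ESXi host to the NAS; the vmkernel port and NAS address below are just placeholders for my setup:
vmkping -d -s 8972 -I vmk1 <NAS iSCSI IP>
If that fails while a normal vmkping succeeds, an MTU mismatch somewhere in the path would be a likely culprit.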
Using CrystalDiskMark with a 5 x 100MiB test I got the following:
Sequential Read (Q= 32,T= 1) : 573.987 MB/s
Sequential Write (Q= 32,T= 1) : 16.253 MB/s
Random Read 4KiB (Q= 32,T= 1) : 65.813 MB/s [ 16067.6 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 12.941 MB/s [ 3159.4 IOPS]
Sequential Read (T= 1) : 394.475 MB/s
Sequential Write (T= 1) : 11.744 MB/s
Random Read 4KiB (Q= 1,T= 1) : 10.804 MB/s [ 2637.7 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.232 MB/s [ 300.8 IOPS]
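For some perspective: 16 MB/s of sequential writes is only about 130 Mbit/s, a little over 1% of the 10GbE link, and well below what even a single 7200 rpm disk can write sequentially, so I don't think raw disk or network bandwidth is the limit.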
Is it normal for the write throughput to be that low?
As mentioned, my Plex VM runs on the SSD RAID 10 LUN, and its hdparm results can really vary:
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 13518 MB in 2.00 seconds = 6763.25 MB/sec
Timing buffered disk reads: 2 MB in 7.83 seconds = 261.41 kB/sec
The highest result I've seen is 300 MB/sec, but that's rare. The VM is running Ubuntu 16.04.
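As a cross-check against hdparm and CrystalDiskMark, I can also run a direct-I/O sequential write test with fio inside the VM; the test file path, size, and job name below are just my own choices:
sudo fio --name=seqwrite --filename=/tmp/fio-test.bin --rw=write --bs=1M --size=1g --direct=1 --ioengine=libaio --iodepth=32
If that also comes back in the 10-20 MB/s range, it would confirm the slow writes aren't an artifact of any one benchmark.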
Disk Information:
Disk /dev/sda: 320 GiB, 343597383680 bytes, 671088640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6622dbc3
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 999423 997376 487M 83 Linux
/dev/sda2 1001470 335542271 334540802 159.5G 5 Extended
/dev/sda3 999424 1001469 2046 1023K 8e Linux LVM
/dev/sda4 335542272 671088639 335546368 160G 8e Linux LVM
/dev/sda5 1001472 335542271 334540800 159.5G 8e Linux LVM
I've made sure to upgrade to ESXi 6.5 U1. I've even shut the other VMs down to reduce load, but I still cannot get better write speeds, and the overall speeds on the Linux VM are abysmal.
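One more thing I plan to look at is where the latency actually accumulates while a benchmark is running, using esxtop on the ESXi host: press d for the disk adapter view or u for the device view and watch DAVG/cmd versus KAVG/cmd. My understanding is that high DAVG points at the array or network path, while high KAVG points at the host/initiator side.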
I did find a post that mentioned running some internal QNAP tests, and here are those results. On the qcli_storage -T test I find it interesting that one of the SSDs (/dev/sdn) had much lower throughput than the others.
[~] # qcli_storage -p
Enclosure Port Sys_Name Size Type RAID RAID_Type Pool TMeta VolType VolName
NAS_HOST 1 /dev/sdl 232.89 GB data /dev/md2 RAID 10,64 2 16 GB flexible LUN_0
NAS_HOST 2 /dev/sdm 232.89 GB data /dev/md2 RAID 10,64 2 16 GB flexible LUN_0
NAS_HOST 3 /dev/sdn 232.89 GB data /dev/md2 RAID 10,64 2 16 GB flexible LUN_0
NAS_HOST 4 /dev/sdo 232.89 GB data /dev/md2 RAID 10,64 2 16 GB flexible LUN_0
NAS_HOST 5 /dev/sdg 2.73 TB data /dev/md1 RAID 1 288 -- Static DataVol1
NAS_HOST 6 /dev/sdf 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 7 /dev/sde 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 8 /dev/sdd 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 9 /dev/sdc 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 10 /dev/sdb 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 11 /dev/sda 1.82 TB spare /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 12 /dev/sdp 1.82 TB spare /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 13 /dev/sdh 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 14 /dev/sdi 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
NAS_HOST 15 /dev/sdj 2.73 TB data /dev/md1 RAID 1 288 -- Static DataVol1
NAS_HOST 16 /dev/sdk 1.82 TB data /dev/md3 RAID 10,64 1 16 GB flexible BigLun
[~] # qcli_storage -T
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput RAID RAID_Type RAID_Throughput Pool
NAS_HOST 1 /dev/sdl 517.99 MB/s /dev/md2 RAID 10 486.60 MB/s 2
NAS_HOST 2 /dev/sdm 530.27 MB/s /dev/md2 RAID 10 486.60 MB/s 2
NAS_HOST 3 /dev/sdn 270.41 MB/s /dev/md2 RAID 10 486.60 MB/s 2
NAS_HOST 4 /dev/sdo 444.16 MB/s /dev/md2 RAID 10 486.60 MB/s 2
NAS_HOST 5 /dev/sdg 154.29 MB/s /dev/md1 RAID 1 154.29 MB/s 288
NAS_HOST 6 /dev/sdf 138.32 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 7 /dev/sde 133.89 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 8 /dev/sdd 125.64 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 9 /dev/sdc 138.18 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 10 /dev/sdb 112.31 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 11 /dev/sda 144.77 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 12 /dev/sdp 178.47 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 13 /dev/sdh 133.87 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 14 /dev/sdi 114.02 MB/s /dev/md3 RAID 10 297.16 MB/s 1
NAS_HOST 15 /dev/sdj 172.12 MB/s /dev/md1 RAID 1 154.29 MB/s 288
NAS_HOST 16 /dev/sdk 183.42 MB/s /dev/md3 RAID 10 297.16 MB/s 1
[~] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --directory=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 288 /dev/mapper/cachedev1 147.16 MB/s /share/CACHEDEV1_DATA 140.04 MB/s
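One thing I notice about the fio commands qcli_storage uses above is that they are all read tests (--rw=read), so none of them exercise the write path that is actually slow. To test writes on the NAS itself I could run a file-based fio job against the data volume (only DataVol1 has a mount path here, so this exercises the RAID 1 pair rather than the LUN pools); the job name and size are just my own choices, and writing to a file under the share avoids touching raw devices:
/sbin/fio --directory=/share/CACHEDEV1_DATA --name=test-write --rw=write --bs=1M --size=1g --runtime=15 --direct=1 --ioengine=libaio --iodepth=32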
What else can I look at to try to determine why my throughput is so low?
Thanks,
Marcus