After discussion, we confirm that NVMe SSDs are not yet fully supported as storage in the way HDDs and SATA SSDs are.
We recommend using NVMe SSDs only as SSD cache.
This will be fixed in firmware 4.4.0.0606 build 20180619.
NVMe Storage for VM not fully supported?
-
- Know my way around
- Posts: 214
- Joined: Sat Oct 22, 2011 6:54 pm
NVMe Storage for VM not fully supported?
I have a QNAP TS-453Be with a QM2-2P10G1T and 2x PLEXTOR PX-256M8. I created a Windows 7 VM and placed it on my NVMe storage, and the VM would stop responding and Virtualization Station would lock up. I opened a ticket, and after a month of investigation, the quote at the top of this post was the response. Anyone else having similar issues?
-
- Getting the hang of things
- Posts: 69
- Joined: Wed May 11, 2016 6:29 pm
Re: NVMe Storage for VM not fully supported?
Why did you choose this NVMe card for a TS-453Be?
This NAS will not see the full speed of the NVMe drives, as its PCIe port is too slow.
The normal SATA M.2 card is enough, and it works with Virtualization Station.
-
- Know my way around
- Posts: 214
- Joined: Sat Oct 22, 2011 6:54 pm
Re: NVMe Storage for VM not fully supported?
I just realized that I never responded to your post, iMactouch.
I considered the SATA version of the card (QM2-2S10G1T), which for me was about the same price on Amazon, B&H Photo, etc. I opted against it because I like the Plextor M8Pe 256GB drives, which are only available in NVMe. The Plextor M8Pe has the best endurance for the price at 384 terabytes written (TBW), compared to the Samsung 970 EVO's 150 TBW. The Samsung 970 Pro, which has similar endurance numbers, doesn't come in a 256GB size, so it's more expensive at 512GB.
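For scale, a quick back-of-envelope endurance calculation (the 50 GB/day write rate below is an assumed figure for illustration, not something measured):

Code:
# Years of drive life at an assumed 50 GB written per day (integer shell math)
echo $(( 384000 / 50 / 365 ))   # Plextor M8Pe, 384 TBW -> ~21 years
echo $(( 150000 / 50 / 365 ))   # Samsung 970 EVO, 150 TBW -> ~8 years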
Also, the SATA interface is limited to 6 Gbps (750 MB/s), so NVMe, even bottlenecked by the PCIe 2.0 x2 slot at 1000 MB/s, gives a 33% bandwidth increase. The virtual machines I'll put on the M.2 SSDs will be able to use the extra bandwidth.
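The arithmetic behind that, as a rough sketch that ignores protocol overhead (real-world SATA tops out closer to 550-600 MB/s after 8b/10b encoding):

Code:
echo $(( 6000 / 8 ))                  # SATA III: 6 Gbps / 8 = 750 MB/s ceiling
echo $(( 2 * 500 ))                   # PCIe 2.0 x2: 2 lanes x 500 MB/s = 1000 MB/s
echo $(( (1000 - 750) * 100 / 750 ))  # -> 33 (% increase)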
-
- Starting out
- Posts: 38
- Joined: Sat Nov 13, 2010 10:18 am
Re: NVMe Storage for VM not fully supported?
Dang... I came across this thread while searching for the same problem. Although I have a TS-1277, my VMs are hosted on NVMe M.2s, and the VMs/VS stop responding randomly.
As the OP mentioned, the VMs greatly benefit from the NVMe.
I think I will be able to remove my SSDs as cache and re-initialize them as storage for my VMs. Being new to QNAP and VS, I guess I simply move the VM folders to the new volume and then import them back into VS?
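For reference, a minimal sketch of that move from the SSH shell; the share and folder names here are assumptions rather than actual Virtualization Station paths, and the VM should be shut down in VS first:

Code:
# Assumed paths -- substitute your actual source share and the new NVMe volume
mkdir -p /share/NVMeVol/VMs
cp -a /share/CACHEDEV1_DATA/VMs/MyVM /share/NVMeVol/VMs/
# then use VS's Import function and point it at the copied disk image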
Production NAS: TS-1277-1600-64G FW: 4.3.5.x
RAM: 64GB QNAP
OS/Apps: 2 x Samsung 860 Evo 1TB M.2s (RAID-1)
Cache: 2 x Samsung 860 Evo 250GB 2.5" SSD (RAID-1)
VMs: 2 x Samsung 970 Evo 1TB M.2s NVMe (RAID-1)
Data: 6 x 12TB Seagate Enterprise Helium (RAID-6)
Network: QNAP Dual-Port 10GbE SFP+
UPS: APC Smart UPS 1500C
Media Boxes: AppleTV 4, Nvidia ShieldTV Pro, Roku Sticks
Backup NAS: TS-873-4G FW: 4.3.6.x
RAM: 16GB Kingston HyperX Fury Kit DDR4-2666
Network: QNAP Dual-Port 10GbE SFP+
- Trexx
- Ask me anything
- Posts: 5393
- Joined: Sat Oct 01, 2011 7:50 am
- Location: Minnesota
Re: NVMe Storage for VM not fully supported?
Open a Helpdesk ticket with QNAP. I have VMs running fine on my SATA SSD volume, so I'm not sure why NVMe is causing it. Sounds like a hardware-related issue.
Paul
Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350
Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
-
- Know my way around
- Posts: 214
- Joined: Sat Oct 22, 2011 6:54 pm
Re: NVMe Storage for VM not fully supported?
Help desk says that SATA should work fine, but that there is an issue with NVMe and virtual machines that should be corrected in version 4.4.0. I don't know whether it's a Virtualization Station issue or a QTS issue that's affecting VMs. I've been waiting almost 4 months for a fix to this!
- Trexx
- Ask me anything
- Posts: 5393
- Joined: Sat Oct 01, 2011 7:50 am
- Location: Minnesota
Re: NVMe Storage for VM not fully supported?
321liftoff wrote: ↑Wed Dec 19, 2018 3:33 am
Help desk says that SATA should work fine, but that there is an issue with NVMe and virtual machines that should be corrected in version 4.4.0. I don't know whether it's a Virtualization Station issue or a QTS issue that's affecting VMs. I've been waiting almost 4 months for a fix to this!
If it is QTS 4.4.0, it could be either or both. QTS 4.4.0 will have a new kernel baseline, which either solves the problem itself or is needed to facilitate a VS core update.
Paul
Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350
Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
-
- New here
- Posts: 2
- Joined: Tue Dec 25, 2018 3:13 am
Re: NVMe Storage for VM not fully supported?
I hope this feature comes in QTS 4.4.0...
-
- Know my way around
- Posts: 214
- Joined: Sat Oct 22, 2011 6:54 pm
Re: NVMe Storage for VM not fully supported?
I downloaded the QTS 4.4.1 beta. I'm running Virtualization Station 3.2.355 with Windows 10 as a guest OS, and performance seems much better.
I have a QNAP TS-453Be with a QM2-2P10G1T and 2x PLEXTOR PX-256M8 in a RAID 1 static volume with 20% over-provisioning. Below are some benchmark results, which are lower than I would have expected:
- 786.44 MB/s sequential read (1M), RAID
- 757.40 MB/s sequential read (1M), filesystem
- 414.6 MB/s sequential read (8k)
- 335.9 MB/s sequential write (32k)
- 600.4 MB/s random read (8k)
- 343.2 MB/s random write (64k)
- 485.8 MB/s random read/write (16k)
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # qcli_storage -T
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput RAID RAID_Type RAID_Throughput Pool
NAS_HOST 1 /dev/sdc 130.27 MB/s /dev/md1 RAID 6 68.40 MB/s 1
NAS_HOST 2 /dev/sdd 122.66 MB/s /dev/md1 RAID 6 68.40 MB/s 1
NAS_HOST 3 /dev/sda 147.62 MB/s /dev/md1 RAID 6 68.40 MB/s 1
NAS_HOST 4 /dev/sdb 120.33 MB/s /dev/md1 RAID 6 68.40 MB/s 1
NAS_HOST P1-1 /dev/nvme1n1 772.84 MB/s /dev/md2 RAID 1 786.44 MB/s 288
NAS_HOST P1-2 /dev/nvme0n1 769.51 MB/s /dev/md2 RAID 1 786.44 MB/s 288
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # qcli_storage -t
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 1 /dev/mapper/cachedev1 68.27 MB/s /share/CACHEDEV1_DATA 69.60 MB/s
2 DataVol2 288 /dev/mapper/cachedev2 786.95 MB/s /share/CACHEDEV2_DATA 757.40 MB/s
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
seqread: (g=0): rw=read, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
Jobs: 2 (f=1): [_(5),R(2),_(1)] [100.0% done] [414.6MB/0KB/0KB /s] [53.7K/0/0 iops] [eta 00m:00s]
seqread: (groupid=0, jobs=8): err= 0: pid=26812: Sun Jun 2 15:45:12 2019
read : io=8192.0MB, bw=469555KB/s, iops=58694, runt= 17865msec
slat (usec): min=12, max=17841, avg=19.94, stdev=46.99
clat (usec): min=1, max=17789, avg=111.92, stdev=119.45
lat (usec): min=65, max=17955, avg=132.56, stdev=129.08
clat percentiles (usec):
| 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 73],
| 30.00th=[ 89], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 115],
| 70.00th=[ 121], 80.00th=[ 131], 90.00th=[ 149], 95.00th=[ 169],
| 99.00th=[ 241], 99.50th=[ 330], 99.90th=[ 1144], 99.95th=[ 2064],
| 99.99th=[ 5408]
bw (KB /s): min=47088, max=65728, per=12.59%, avg=59128.35, stdev=2720.50
lat (usec) : 2=0.01%, 4=0.15%, 10=0.01%, 20=0.01%, 50=0.17%
lat (usec) : 100=37.64%, 250=61.10%, 500=0.61%, 750=0.13%, 1000=0.05%
lat (msec) : 2=0.06%, 4=0.04%, 10=0.01%, 20=0.01%
cpu : usr=4.57%, sys=17.32%, ctx=1075335, majf=0, minf=232
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=8192.0MB, aggrb=469555KB/s, minb=469555KB/s, maxb=469555KB/s, mint=17865msec, maxt=17865msec
Disk stats (read/write):
dm-103: ios=1048471/48, merge=0/0, ticks=104363/117, in_queue=105440, util=99.58%, aggrios=1048576/51, aggrmerge=0/0, aggrticks=101825/116, aggrin_queue=103049, aggrutil=99.53%
dm-10: ios=1048576/51, merge=0/0, ticks=101825/116, in_queue=103049, util=99.53%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1048576/51, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=1048576/51, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=524288/73, aggrmerge=0/1, aggrticks=50365/37, aggrin_queue=35750, aggrutil=95.43%
nvme0n1: ios=550375/73, merge=0/1, ticks=53166/40, in_queue=38503, util=95.43%
nvme1n1: ios=498201/73, merge=0/1, ticks=47565/35, in_queue=32998, util=92.69%
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting
seqwrite: (g=0): rw=write, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 4 processes
Jobs: 4 (f=4): [W(4)] [100.0% done] [0KB/335.9MB/0KB /s] [0/10.8K/0 iops] [eta 00m:00s]
seqwrite: (groupid=0, jobs=4): err= 0: pid=30050: Sun Jun 2 15:46:36 2019
write: io=8192.0MB, bw=346479KB/s, iops=10827, runt= 24211msec
slat (usec): min=20, max=24006, avg=44.49, stdev=104.60
clat (usec): min=2, max=15739, avg=318.53, stdev=211.22
lat (usec): min=135, max=24232, avg=363.88, stdev=234.32
clat percentiles (usec):
| 1.00th=[ 137], 5.00th=[ 199], 10.00th=[ 233], 20.00th=[ 270],
| 30.00th=[ 290], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 322],
| 70.00th=[ 330], 80.00th=[ 342], 90.00th=[ 370], 95.00th=[ 394],
| 99.00th=[ 660], 99.50th=[ 1064], 99.90th=[ 3216], 99.95th=[ 4384],
| 99.99th=[ 8096]
bw (KB /s): min=78976, max=92544, per=25.05%, avg=86799.40, stdev=2660.01
lat (usec) : 4=0.01%, 10=0.02%, 20=0.01%, 50=0.01%, 100=0.01%
lat (usec) : 250=13.93%, 500=84.39%, 750=0.78%, 1000=0.29%
lat (msec) : 2=0.31%, 4=0.18%, 10=0.05%, 20=0.01%
cpu : usr=4.74%, sys=11.70%, ctx=267688, majf=0, minf=105
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=8192.0MB, aggrb=346479KB/s, minb=346479KB/s, maxb=346479KB/s, mint=24211msec, maxt=24211msec
Disk stats (read/write):
dm-103: ios=2/261677, merge=0/0, ticks=2/82634, in_queue=82773, util=99.37%, aggrios=2/262289, aggrmerge=0/0, aggrticks=2/81936, aggrin_queue=82037, aggrutil=99.19%
dm-10: ios=2/262289, merge=0/0, ticks=2/81936, in_queue=82037, util=99.19%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=2/262289, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=2/262289, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1/262306, aggrmerge=0/6, aggrticks=1/71645, aggrin_queue=55427, aggrutil=96.11%
nvme0n1: ios=1/262306, merge=0/6, ticks=1/74072, in_queue=57733, util=96.11%
nvme1n1: ios=1/262306, merge=0/6, ticks=1/69218, in_queue=53121, util=95.48%
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # fio --name=randread --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=16 --size=1G --runtime=600 --group_reporting
randread: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 16 processes
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 9 (f=7): [r(2),_(1),r(1),_(1),r(1),_(1),r(1),_(2),r(1),_(2),r(3)] [100.0% done] [600.4MB/0KB/0KB /s] [76.9K/0/0 iops] [eta 00m:00s]
randread: (groupid=0, jobs=16): err= 0: pid=5820: Sun Jun 2 15:49:13 2019
read : io=16384MB, bw=638864KB/s, iops=79858, runt= 26261msec
slat (usec): min=12, max=33887, avg=23.61, stdev=85.45
clat (usec): min=1, max=22782, avg=168.88, stdev=176.48
lat (usec): min=49, max=33985, avg=193.43, stdev=198.95
clat percentiles (usec):
| 1.00th=[ 92], 5.00th=[ 107], 10.00th=[ 114], 20.00th=[ 123],
| 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 155],
| 70.00th=[ 167], 80.00th=[ 187], 90.00th=[ 225], 95.00th=[ 274],
| 99.00th=[ 506], 99.50th=[ 772], 99.90th=[ 2416], 99.95th=[ 3536],
| 99.99th=[ 7328]
bw (KB /s): min=27936, max=52992, per=6.32%, avg=40403.89, stdev=3954.79
lat (usec) : 2=0.03%, 4=0.42%, 10=0.03%, 20=0.02%, 50=0.07%
lat (usec) : 100=1.51%, 250=91.16%, 500=5.74%, 750=0.50%, 1000=0.20%
lat (msec) : 2=0.19%, 4=0.10%, 10=0.03%, 20=0.01%, 50=0.01%
cpu : usr=3.39%, sys=11.31%, ctx=2231529, majf=0, minf=520
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=2097152/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=16384MB, aggrb=638864KB/s, minb=638864KB/s, maxb=638864KB/s, mint=26261msec, maxt=26261msec
Disk stats (read/write):
dm-103: ios=2096456/28, merge=0/0, ticks=289497/82, in_queue=292714, util=99.93%, aggrios=2097152/30, aggrmerge=0/0, aggrticks=283241/82, aggrin_queue=286773, aggrutil=99.98%
dm-10: ios=2097152/30, merge=0/0, ticks=283241/82, in_queue=286773, util=99.98%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=2097152/30, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=2097152/30, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1048576/56, aggrmerge=0/12, aggrticks=139041/39, aggrin_queue=113978, aggrutil=99.38%
nvme0n1: ios=1083541/56, merge=0/12, ticks=144908/39, in_queue=119789, util=99.38%
nvme1n1: ios=1013611/56, merge=0/12, ticks=133174/40, in_queue=108168, util=98.89%
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=64k --numjobs=8 --size=512m --runtime=600 --group_reporting
randwrite: (g=0): rw=randwrite, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/343.2MB/0KB /s] [0/5490/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=10505: Sun Jun 2 15:50:11 2019
write: io=4096.0MB, bw=351341KB/s, iops=5489, runt= 11938msec
slat (usec): min=34, max=17637, avg=76.29, stdev=185.23
clat (usec): min=214, max=14232, avg=1368.17, stdev=540.81
lat (usec): min=266, max=18840, avg=1445.31, stdev=567.93
clat percentiles (usec):
| 1.00th=[ 414], 5.00th=[ 860], 10.00th=[ 1096], 20.00th=[ 1256],
| 30.00th=[ 1304], 40.00th=[ 1320], 50.00th=[ 1336], 60.00th=[ 1352],
| 70.00th=[ 1368], 80.00th=[ 1384], 90.00th=[ 1448], 95.00th=[ 1736],
| 99.00th=[ 4048], 99.50th=[ 4960], 99.90th=[ 7136], 99.95th=[ 8256],
| 99.99th=[12992]
bw (KB /s): min=37888, max=47648, per=12.51%, avg=43942.38, stdev=1555.00
lat (usec) : 250=0.19%, 500=1.42%, 750=1.81%, 1000=4.15%
lat (msec) : 2=88.88%, 4=2.52%, 10=1.01%, 20=0.02%
cpu : usr=2.30%, sys=4.53%, ctx=66597, majf=0, minf=237
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=65536/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=351340KB/s, minb=351340KB/s, maxb=351340KB/s, mint=11938msec, maxt=11938msec
Disk stats (read/write):
dm-103: ios=0/65044, merge=0/0, ticks=0/82803, in_queue=82985, util=98.64%, aggrios=0/65939, aggrmerge=0/0, aggrticks=0/83510, aggrin_queue=83660, aggrutil=98.45%
dm-10: ios=0/65939, merge=0/0, ticks=0/83510, in_queue=83660, util=98.45%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/65939, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=0/65939, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/65598, aggrmerge=0/378, aggrticks=0/77059, aggrin_queue=69833, aggrutil=96.94%
nvme0n1: ios=0/65598, merge=0/378, ticks=0/76721, in_queue=69475, util=96.94%
nvme1n1: ios=0/65598, merge=0/378, ticks=0/77397, in_queue=70192, util=96.88%
Code:
[/share/CACHEDEV2_DATA/SSDVirtualMachine] # fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting
randrw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 8 (f=8): [m(8)] [100.0% done] [485.8MB/55992KB/0KB /s] [31.9K/3499/0 iops] [eta 00m:00s]
randrw: (groupid=0, jobs=8): err= 0: pid=18253: Sun Jun 2 15:51:24 2019
read : io=7354.2MB, bw=482299KB/s, iops=30143, runt= 15614msec
slat (usec): min=13, max=14696, avg=22.81, stdev=58.21
clat (usec): min=1, max=14210, avg=218.38, stdev=275.95
lat (usec): min=101, max=15877, avg=241.94, stdev=282.57
clat percentiles (usec):
| 1.00th=[ 119], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 147],
| 30.00th=[ 155], 40.00th=[ 163], 50.00th=[ 171], 60.00th=[ 181],
| 70.00th=[ 193], 80.00th=[ 209], 90.00th=[ 243], 95.00th=[ 298],
| 99.00th=[ 1704], 99.50th=[ 1816], 99.90th=[ 2992], 99.95th=[ 3728],
| 99.99th=[ 9280]
bw (KB /s): min=53748, max=68288, per=12.57%, avg=60608.70, stdev=2522.67
write: io=857984KB, bw=54950KB/s, iops=3434, runt= 15614msec
slat (usec): min=18, max=9544, avg=37.58, stdev=107.93
clat (usec): min=1, max=8729, avg=113.82, stdev=106.53
lat (usec): min=86, max=9551, avg=152.19, stdev=152.31
clat percentiles (usec):
| 1.00th=[ 67], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 80],
| 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 109],
| 70.00th=[ 119], 80.00th=[ 135], 90.00th=[ 157], 95.00th=[ 181],
| 99.00th=[ 278], 99.50th=[ 402], 99.90th=[ 1256], 99.95th=[ 2416],
| 99.99th=[ 4576]
bw (KB /s): min= 5760, max= 8032, per=12.60%, avg=6924.03, stdev=443.43
lat (usec) : 2=0.01%, 4=0.07%, 10=0.02%, 20=0.01%, 50=0.03%
lat (usec) : 100=5.01%, 250=86.92%, 500=4.88%, 750=0.36%, 1000=1.06%
lat (msec) : 2=1.45%, 4=0.17%, 10=0.03%, 20=0.01%
cpu : usr=3.52%, sys=12.03%, ctx=538021, majf=0, minf=234
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=470664/w=53624/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=7354.2MB, aggrb=482299KB/s, minb=482299KB/s, maxb=482299KB/s, mint=15614msec, maxt=15614msec
WRITE: io=857984KB, aggrb=54949KB/s, minb=54949KB/s, maxb=54949KB/s, mint=15614msec, maxt=15614msec
Disk stats (read/write):
dm-103: ios=468896/53500, merge=0/0, ticks=94542/5702, in_queue=100761, util=99.51%, aggrios=470664/53634, aggrmerge=0/0, aggrticks=93600/5570, aggrin_queue=99814, aggrutil=99.42%
dm-10: ios=470664/53634, merge=0/0, ticks=93600/5570, in_queue=99814, util=99.42%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=470664/53634, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=470664/53634, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=235332/53636, aggrmerge=0/0, aggrticks=46509/4532, aggrin_queue=37832, aggrutil=97.77%
nvme0n1: ios=247017/53636, merge=0/0, ticks=49246/4496, in_queue=40350, util=97.77%
nvme1n1: ios=223647/53636, merge=0/0, ticks=43772/4569, in_queue=35314, util=96.07%
-
- Know my way around
- Posts: 214
- Joined: Sat Oct 22, 2011 6:54 pm
Re: NVMe Storage for VM not fully supported?
And here are the results after changing the two NVMe SSDs to RAID 0.
- 670.49 MB/s sequential read (1M), RAID
- 748.54 MB/s sequential read (1M), filesystem
- 401.1 MB/s sequential read (8k)
- 548.6 MB/s sequential write (32k)
- 332.2 MB/s random read (8k)
- 682.1 MB/s random write (64k)
- 458.8 MB/s random read/write (16k)
Code:
[~] # qcli_storage -T force=1
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
Enclosure Port Sys_Name Throughput RAID RAID_Type RAID_Throughput Pool
NAS_HOST 1 /dev/sdc 119.25 MB/s /dev/md1 RAID 6 64.81 MB/s 1
NAS_HOST 2 /dev/sdd 105.34 MB/s /dev/md1 RAID 6 64.81 MB/s 1
NAS_HOST 3 /dev/sda 122.55 MB/s /dev/md1 RAID 6 64.81 MB/s 1
NAS_HOST 4 /dev/sdb 105.50 MB/s /dev/md1 RAID 6 64.81 MB/s 1
NAS_HOST P1-1 /dev/nvme1n1 768.83 MB/s /dev/md2 RAID 0 670.49 MB/s 288
NAS_HOST P1-2 /dev/nvme0n1 768.85 MB/s /dev/md2 RAID 0 670.49 MB/s 288
Code:
[~] # qcli_storage -t force=1
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID VolName Pool Mapping_Name Throughput Mount_Path FS_Throughput
1 DataVol1 1 /dev/mapper/cachedev1 30.51 MB/s /share/CACHEDEV1_DATA 77.53 MB/s
2 DataVol2 288 /dev/mapper/cachedev2 774.48 MB/s /share/CACHEDEV2_DATA 748.54 MB/s
Code:
[/share/CACHEDEV2_DATA] # fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting
seqread: (g=0): rw=read, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
seqread: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 8 (f=8): [R(8)] [100.0% done] [401.1MB/0KB/0KB /s] [51.5K/0/0 iops] [eta 00m:00s]
seqread: (groupid=0, jobs=8): err= 0: pid=10496: Sun Jun 2 23:12:00 2019
read : io=8192.0MB, bw=430141KB/s, iops=53767, runt= 19502msec
slat (usec): min=10, max=22046, avg=18.29, stdev=86.38
clat (usec): min=1, max=23132, avg=124.58, stdev=206.15
lat (usec): min=41, max=23144, avg=143.68, stdev=225.72
clat percentiles (usec):
| 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 70],
| 30.00th=[ 84], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 115],
| 70.00th=[ 123], 80.00th=[ 137], 90.00th=[ 167], 95.00th=[ 209],
| 99.00th=[ 474], 99.50th=[ 828], 99.90th=[ 2960], 99.95th=[ 4128],
| 99.99th=[ 8512]
bw (KB /s): min=39328, max=62832, per=12.67%, avg=54500.98, stdev=4439.23
lat (usec) : 2=0.04%, 4=0.31%, 10=0.04%, 20=0.02%, 50=0.28%
lat (usec) : 100=40.42%, 250=55.76%, 500=2.21%, 750=0.37%, 1000=0.15%
lat (msec) : 2=0.22%, 4=0.13%, 10=0.05%, 20=0.01%, 50=0.01%
cpu : usr=3.95%, sys=12.98%, ctx=1099193, majf=0, minf=232
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=8192.0MB, aggrb=430140KB/s, minb=430140KB/s, maxb=430140KB/s, mint=19502msec, maxt=19502msec
Disk stats (read/write):
dm-96: ios=1044715/4, merge=0/0, ticks=97990/0, in_queue=98593, util=98.73%, aggrios=1048576/4, aggrmerge=0/0, aggrticks=95937/0, aggrin_queue=96630, aggrutil=98.26%
dm-95: ios=1048576/4, merge=0/0, ticks=95937/0, in_queue=96630, util=98.26%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1048576/4, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=1048576/4, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=524288/8, aggrmerge=0/3, aggrticks=47452/7, aggrin_queue=32405, aggrutil=77.83%
nvme0n1: ios=524288/7, merge=0/2, ticks=48097/8, in_queue=33008, util=77.83%
nvme1n1: ios=524288/9, merge=0/4, ticks=46807/7, in_queue=31803, util=76.66%
Code:
[/share/CACHEDEV2_DATA] # fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting
seqwrite: (g=0): rw=write, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 4 processes
seqwrite: Laying out IO file(s) (1 file(s) / 2048MB)
seqwrite: Laying out IO file(s) (1 file(s) / 2048MB)
seqwrite: Laying out IO file(s) (1 file(s) / 2048MB)
seqwrite: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 4 (f=4): [W(4)] [100.0% done] [0KB/548.6MB/0KB /s] [0/17.6K/0 iops] [eta 00m:00s]
seqwrite: (groupid=0, jobs=4): err= 0: pid=20382: Sun Jun 2 23:16:05 2019
write: io=8192.0MB, bw=582340KB/s, iops=18198, runt= 14405msec
slat (usec): min=19, max=16814, avg=46.25, stdev=131.23
clat (usec): min=1, max=25216, avg=166.66, stdev=289.13
lat (usec): min=94, max=25264, avg=213.96, stdev=320.21
clat percentiles (usec):
| 1.00th=[ 3], 5.00th=[ 79], 10.00th=[ 86], 20.00th=[ 102],
| 30.00th=[ 114], 40.00th=[ 129], 50.00th=[ 141], 60.00th=[ 149],
| 70.00th=[ 157], 80.00th=[ 177], 90.00th=[ 219], 95.00th=[ 274],
| 99.00th=[ 740], 99.50th=[ 1320], 99.90th=[ 4192], 99.95th=[ 5792],
| 99.99th=[10176]
bw (KB /s): min=101248, max=171264, per=25.16%, avg=146508.38, stdev=13615.36
lat (usec) : 2=0.05%, 4=1.09%, 10=0.11%, 20=0.03%, 50=0.50%
lat (usec) : 100=16.41%, 250=75.38%, 500=4.70%, 750=0.77%, 1000=0.31%
lat (msec) : 2=0.36%, 4=0.19%, 10=0.10%, 20=0.01%, 50=0.01%
cpu : usr=6.37%, sys=13.15%, ctx=318518, majf=0, minf=104
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=262144/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=8192.0MB, aggrb=582340KB/s, minb=582340KB/s, maxb=582340KB/s, mint=14405msec, maxt=14405msec
Disk stats (read/write):
dm-96: ios=0/261408, merge=0/0, ticks=0/33051, in_queue=33220, util=93.85%, aggrios=0/262308, aggrmerge=0/0, aggrticks=0/32202, aggrin_queue=32358, aggrutil=92.82%
dm-95: ios=0/262308, merge=0/0, ticks=0/32202, in_queue=32358, util=92.82%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/262308, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=0/262308, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/131082, aggrmerge=0/72, aggrticks=0/15699, aggrin_queue=7571, aggrutil=41.28%
nvme0n1: ios=0/131074, merge=0/0, ticks=0/15908, in_queue=7688, util=41.28%
nvme1n1: ios=0/131091, merge=0/145, ticks=0/15491, in_queue=7454, util=40.44%
Code:
[/share/CACHEDEV2_DATA] # fio --name=randread --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=16 --size=1G --runtime=600 --group_reporting
randread: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 16 processes
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
randread: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 6 (f=6): [_(3),r(1),_(2),r(1),_(2),r(1),_(1),r(2),_(2),r(1)] [100.0% done] [332.2MB/0KB/0KB /s] [42.6K/0/0 iops] [eta 00m:00s]
randread: (groupid=0, jobs=16): err= 0: pid=27236: Sun Jun 2 23:20:04 2019
read : io=16384MB, bw=560755KB/s, iops=70094, runt= 29919msec
slat (usec): min=10, max=34227, avg=20.51, stdev=85.64
clat (usec): min=1, max=43204, avg=192.48, stdev=254.52
lat (usec): min=43, max=43217, avg=213.89, stdev=270.04
clat percentiles (usec):
| 1.00th=[ 91], 5.00th=[ 104], 10.00th=[ 111], 20.00th=[ 121],
| 30.00th=[ 131], 40.00th=[ 139], 50.00th=[ 151], 60.00th=[ 167],
| 70.00th=[ 189], 80.00th=[ 223], 90.00th=[ 278], 95.00th=[ 346],
| 99.00th=[ 716], 99.50th=[ 1288], 99.90th=[ 3504], 99.95th=[ 4640],
| 99.99th=[ 8640]
bw (KB /s): min=20496, max=52224, per=6.53%, avg=36594.84, stdev=5997.35
lat (usec) : 2=0.03%, 4=0.41%, 10=0.03%, 20=0.02%, 50=0.04%
lat (usec) : 100=2.38%, 250=82.84%, 500=12.37%, 750=0.94%, 1000=0.28%
lat (msec) : 2=0.35%, 4=0.24%, 10=0.06%, 20=0.01%, 50=0.01%
cpu : usr=3.32%, sys=9.10%, ctx=2180044, majf=0, minf=520
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=2097152/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=16384MB, aggrb=560754KB/s, minb=560754KB/s, maxb=560754KB/s, mint=29919msec, maxt=29919msec
Disk stats (read/write):
dm-96: ios=2096095/19, merge=0/0, ticks=270611/0, in_queue=273411, util=99.58%, aggrios=2097152/19, aggrmerge=0/0, aggrticks=265253/0, aggrin_queue=268605, aggrutil=99.30%
dm-95: ios=2097152/19, merge=0/0, ticks=265253/0, in_queue=268605, util=99.30%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=2097152/19, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=2097152/19, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1048576/7, aggrmerge=0/2, aggrticks=131210/0, aggrin_queue=105271, aggrutil=93.75%
nvme0n1: ios=1048576/0, merge=0/0, ticks=131512/0, in_queue=105378, util=93.75%
nvme1n1: ios=1048576/14, merge=0/5, ticks=130909/0, in_queue=105165, util=93.39%
Code:
[/share/CACHEDEV2_DATA] # fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=64k --numjobs=8 --size=512m --runtime=600 --group_reporting
randwrite: (g=0): rw=randwrite, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 8 (f=7): [w(8)] [100.0% done] [0KB/682.1MB/0KB /s] [0/10.1K/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=5497: Sun Jun 2 23:22:44 2019
write: io=4096.0MB, bw=699750KB/s, iops=10933, runt= 5994msec
slat (usec): min=26, max=15193, avg=58.45, stdev=97.37
clat (usec): min=3, max=17323, avg=654.88, stdev=515.79
lat (usec): min=162, max=17462, avg=714.05, stdev=524.75
clat percentiles (usec):
| 1.00th=[ 147], 5.00th=[ 247], 10.00th=[ 278], 20.00th=[ 358],
| 30.00th=[ 446], 40.00th=[ 524], 50.00th=[ 612], 60.00th=[ 676],
| 70.00th=[ 748], 80.00th=[ 836], 90.00th=[ 940], 95.00th=[ 1064],
| 99.00th=[ 2800], 99.50th=[ 3888], 99.90th=[ 6752], 99.95th=[ 7456],
| 99.99th=[12992]
bw (KB /s): min=77056, max=100736, per=12.58%, avg=88001.40, stdev=4983.58
lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 100=0.01%, 250=5.39%
lat (usec) : 500=32.13%, 750=32.20%, 1000=23.34%
lat (msec) : 2=5.31%, 4=1.14%, 10=0.45%, 20=0.01%
cpu : usr=3.98%, sys=6.21%, ctx=67422, majf=0, minf=239
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=65536/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=699750KB/s, minb=699750KB/s, maxb=699750KB/s, mint=5994msec, maxt=5994msec
Disk stats (read/write):
dm-96: ios=0/63345, merge=0/0, ticks=0/34440, in_queue=34597, util=97.57%, aggrios=0/65752, aggrmerge=0/0, aggrticks=0/35084, aggrin_queue=35202, aggrutil=96.97%
dm-95: ios=0/65752, merge=0/0, ticks=0/35084, in_queue=35202, util=96.97%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/65752, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=0/65752, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/32773, aggrmerge=0/103, aggrticks=0/17418, aggrin_queue=13642, aggrutil=80.26%
nvme0n1: ios=0/32773, merge=0/83, ticks=0/17588, in_queue=13808, util=80.26%
nvme1n1: ios=0/32773, merge=0/124, ticks=0/17249, in_queue=13476, util=79.69%
Code:
[/share/CACHEDEV2_DATA] # fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting
randrw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 8 processes
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
randrw: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 8 (f=8): [m(8)] [100.0% done] [458.8MB/51059KB/0KB /s] [29.4K/3191/0 iops] [eta 00m:00s]
randrw: (groupid=0, jobs=8): err= 0: pid=16898: Sun Jun 2 23:28:34 2019
read : io=7354.2MB, bw=480423KB/s, iops=30026, runt= 15675msec
slat (usec): min=11, max=14991, avg=18.97, stdev=56.80
clat (usec): min=1, max=16844, avg=224.90, stdev=318.29
lat (usec): min=73, max=16864, avg=244.54, stdev=323.73
clat percentiles (usec):
| 1.00th=[ 117], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 145],
| 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 177],
| 70.00th=[ 191], 80.00th=[ 211], 90.00th=[ 258], 95.00th=[ 370],
| 99.00th=[ 1704], 99.50th=[ 1928], 99.90th=[ 3792], 99.95th=[ 4960],
| 99.99th=[10176]
bw (KB /s): min=48512, max=70144, per=12.64%, avg=60713.13, stdev=4570.29
write: io=857984KB, bw=54736KB/s, iops=3420, runt= 15675msec
slat (usec): min=14, max=5240, avg=27.06, stdev=46.93
clat (usec): min=1, max=10871, avg=101.16, stdev=185.35
lat (usec): min=59, max=10913, avg=128.91, stdev=192.29
clat percentiles (usec):
| 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 54],
| 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 81],
| 70.00th=[ 94], 80.00th=[ 116], 90.00th=[ 155], 95.00th=[ 203],
| 99.00th=[ 446], 99.50th=[ 772], 99.90th=[ 2832], 99.95th=[ 3440],
| 99.99th=[ 7072]
bw (KB /s): min= 5280, max= 8352, per=12.67%, avg=6935.60, stdev=626.39
lat (usec) : 2=0.01%, 4=0.09%, 10=0.01%, 20=0.01%, 50=0.96%
lat (usec) : 100=6.56%, 250=82.12%, 500=6.68%, 750=0.76%, 1000=0.84%
lat (msec) : 2=1.53%, 4=0.35%, 10=0.07%, 20=0.01%
cpu : usr=3.30%, sys=9.40%, ctx=538043, majf=0, minf=240
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=470664/w=53624/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: io=7354.2MB, aggrb=480422KB/s, minb=480422KB/s, maxb=480422KB/s, mint=15675msec, maxt=15675msec
WRITE: io=857984KB, aggrb=54735KB/s, minb=54735KB/s, maxb=54735KB/s, mint=15675msec, maxt=15675msec
Disk stats (read/write):
dm-96: ios=470587/53623, merge=0/0, ticks=87749/3343, in_queue=91518, util=98.62%, aggrios=470664/53633, aggrmerge=0/0, aggrticks=86736/3210, aggrin_queue=90421, aggrutil=98.38%
dm-95: ios=470664/53633, merge=0/0, ticks=86736/3210, in_queue=90421, util=98.38%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=470664/53633, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
md2: ios=470664/53633, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=235332/26817, aggrmerge=0/0, aggrticks=43304/1595, aggrin_queue=34579, aggrutil=76.16%
nvme0n1: ios=235529/26622, merge=0/1, ticks=43562/1514, in_queue=34518, util=76.16%
nvme1n1: ios=235135/27013, merge=0/0, ticks=43046/1676, in_queue=34641, util=71.92%