Horrible Performance

Discussion on setting up QNAP NAS products.
Toxic17
Ask me anything
Posts: 6469
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth

Re: Horrible Performance

Post by Toxic17 »

...and the S version's 10GbE port runs at 2x4, not 2x2 as the P version's does.
Regards Simon

Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following


NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: Horrible Performance

Post by Trexx »

Correct, as the SATA controller needs less bandwidth.


Sent from my iPhone using Tapatalk
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
spikemixture
Been there, done that
Posts: 890
Joined: Wed Mar 07, 2018 11:04 pm
Location: 3rd World

Re: Horrible Performance

Post by spikemixture »

Trexx wrote: Sun Aug 04, 2019 3:48 am Correct, as the SATA controller needs less bandwidth.


Sent from my iPhone using Tapatalk
I wonder how they perform in the lab.

The NVMe is faster, but it is restricted by the 10GbE port using part of the bandwidth.
The SATA version is slower than the NVMe, but it doesn't have anything else competing for resources.

So which is faster out here in the real world?
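One way to check on the NAS itself is to run the same kind of fio read test against a file on each volume. A minimal sketch, assuming fio is present at /sbin/fio as it is on QTS; the share paths below are placeholders for an NVMe-backed and a SATA-SSD-backed volume:

Code: Select all

# Sequential read test on an NVMe-backed share vs. a SATA-SSD-backed share
# (/share/NVMeVol and /share/SataVol are hypothetical mount points - adjust to your own volumes).
/sbin/fio --name=nvme-read --filename=/share/NVMeVol/fio.test --size=1g \
          --rw=read --bs=1M --direct=1 --runtime=15 --ioengine=libaio --iodepth=32
/sbin/fio --name=sata-read --filename=/share/SataVol/fio.test --size=1g \
          --rw=read --bs=1M --direct=1 --runtime=15 --ioengine=libaio --iodepth=32

Comparing the two read figures on the same box answers the "in the real world" question directly, although 10GbE (roughly 1.1 GB/s usable) still caps what a network client would ever see.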
Qnap TS-1277 1700 (48gb RAM) 8x10TB WD White,- Raid5, 2x M.2 Crucial 1TB (Raid 1 VM),
2x SSD 860 EVO 500gb (Raid1 QTS), 2x SSD 860 EVO 250GB (Cache), 2x M.2 PCIe 970 500gb NVME (Raid1 Plex and Emby server)
GTX 1050 TI
Qnap TVS-1282 i7 (32GB RAM) 6x8TB WD White - JBOD, 2x M.2 Crucial 500gb (Raid1 VM),
2x SSD EVO 500gb (Raid1 QTS), 2x SSD EVO 250gb (Raid1 Cache), 2x M.2 PCIe Intel 512GB NVME (Raid1-Servers)
Synology -1817+ - DOA
Drobo 5n - 5x4TB Seagate, - Drobo Raid = 15TB
ProBox 8 Bay USB3 - 49TB mixed drives - JBOD
All software is updated asap.
I give my opinion from my experience i.e. I have (or had) that piece of equipment/software and used it! :roll:
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: Horrible Performance

Post by Trexx »

People wanting speed don’t use the combo cards. They use the new PCIe 3.x NVMe-only cards.


Sent from my iPhone using Tapatalk
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
spikemixture
Been there, done that
Posts: 890
Joined: Wed Mar 07, 2018 11:04 pm
Location: 3rd World

Re: Horrible Performance

Post by spikemixture »

Trexx wrote: Tue Aug 06, 2019 9:40 pm People wanting speed don’t use the combo cards. They use the new PCIe 3.x NVMe-only cards.


Sent from my iPhone using Tapatalk
I would love those "people" to give me some actual numbers.
Does having a card that does only one thing actually show up in real-world performance?
The answer should be yes, but 50% faster? 25%? 10%?

I just want to get more than 60 MB/s backing up from QNAP to Synology using HBS 3 on a 10GbE network.

After buying an 8-bay ProBox and getting about 120 MB/s copying the same media over USB 3, I am reassessing my backup procedure!
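Before blaming HBS or the disks, it is worth ruling out the network path itself with a raw TCP test between the two NASes. A minimal sketch, assuming iperf3 is installed on both ends (e.g. via Entware on the QNAP and a community package on the Synology); the IP address is a placeholder:

Code: Select all

# On the Synology (server side):
iperf3 -s
# On the QNAP (client side), a 10-second TCP test towards the Synology at 192.168.1.20:
iperf3 -c 192.168.1.20 -t 10

If this reports close to 9.4 Gbit/s, the 10GbE path is fine and the bottleneck is the disks or the backup job itself; if it is much lower, look at the switch, cabling, or MTU settings first.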
Qnap TS-1277 1700 (48gb RAM) 8x10TB WD White,- Raid5, 2x M.2 Crucial 1TB (Raid 1 VM),
2x SSD 860 EVO 500gb (Raid1 QTS), 2x SSD 860 EVO 250GB (Cache), 2x M.2 PCIe 970 500gb NVME (Raid1 Plex and Emby server)
GTX 1050 TI
Qnap TVS-1282 i7 (32GB RAM) 6x8TB WD White - JBOD, 2x M.2 Crucial 500gb (Raid1 VM),
2x SSD EVO 500gb (Raid1 QTS), 2x SSD EVO 250gb (Raid1 Cache), 2x M.2 PCIe Intel 512GB NVME (Raid1-Servers)
Synology -1817+ - DOA
Drobo 5n - 5x4TB Seagate, - Drobo Raid = 15TB
ProBox 8 Bay USB3 - 49TB mixed drives - JBOD
All software is updated asap.
I give my opinion from my experience i.e. I have (or had) that piece of equipment/software and used it! :roll:
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: Horrible Performance

Post by Trexx »

spikemixture wrote: Wed Aug 07, 2019 6:48 pm I would love those "people" to give me some actual numbers.
Does having a card that does only one thing actually show up in real-world performance?
The answer should be yes, but 50% faster? 25%? 10%?

I just want to get more than 60 MB/s backing up from QNAP to Synology using HBS 3 on a 10GbE network.

After buying an 8-bay ProBox and getting about 120 MB/s copying the same media over USB 3, I am reassessing my backup procedure!
The benefit of the dedicated QM2 card isn't that it is a dedicated card, although since you aren't sharing the bandwidth between devices on the card, there is some benefit from that.

The main benefit is from using a PCIe 3.x bus vs. 2.x. PCIe 3.x slots (assuming the same number of lanes) have twice the throughput of PCIe 2.x. Most good NVMe SSDs will peak around 3,500 MB/s (depending on workload type) in a PCIe 3.0 x4 config. A PCIe 2.0 x4 card with the same NVMe drive will top out at about 1,900 MB/s or so: https://en.wikipedia.org/wiki/PCI_Express. Of course, things like garbage collection, data block size, caching, etc. affect the results as well.
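As a rough sanity check of those figures (back-of-the-envelope numbers, not QNAP's own): PCIe 2.0 delivers roughly 500 MB/s per lane after 8b/10b encoding overhead, and PCIe 3.0 roughly 985 MB/s per lane, so:

Code: Select all

# Approximate usable PCIe bandwidth per lane, in MB/s
pcie2_lane=500   # PCIe 2.0: 5 GT/s with 8b/10b encoding
pcie3_lane=985   # PCIe 3.0: 8 GT/s with 128b/130b encoding
echo "PCIe 2.0 x4: $((pcie2_lane * 4)) MB/s"   # ~2000 MB/s, which is why the drive tops out near 1,900 MB/s
echo "PCIe 3.0 x4: $((pcie3_lane * 4)) MB/s"   # ~3940 MB/s, enough headroom for a ~3,500 MB/s NVMe SSD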

Search the forums; I recall a post a while back where someone benchmarked NVMe in the older and newer QM2 cards.
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
spikemixture
Been there, done that
Posts: 890
Joined: Wed Mar 07, 2018 11:04 pm
Location: 3rd World

Re: Horrible Performance

Post by spikemixture »

Trexx wrote: Thu Aug 08, 2019 3:26 am
spikemixture wrote: Wed Aug 07, 2019 6:48 pm I would love those "people" to give me some actual numbers.
Does having a card that does only one thing actually show up in real-world performance?
The answer should be yes, but 50% faster? 25%? 10%?

I just want to get more than 60 MB/s backing up from QNAP to Synology using HBS 3 on a 10GbE network.

After buying an 8-bay ProBox and getting about 120 MB/s copying the same media over USB 3, I am reassessing my backup procedure!
The benefit of the dedicated QM2 card isn't that it is a dedicated card, although since you aren't sharing the bandwidth between devices on the card, there is some benefit from that.

The main benefit is from using a PCIe 3.x bus vs. 2.x. PCIe 3.x slots (assuming the same number of lanes) have twice the throughput of PCIe 2.x. Most good NVMe SSDs will peak around 3,500 MB/s (depending on workload type) in a PCIe 3.0 x4 config. A PCIe 2.0 x4 card with the same NVMe drive will top out at about 1,900 MB/s or so: https://en.wikipedia.org/wiki/PCI_Express. Of course, things like garbage collection, data block size, caching, etc. affect the results as well.

Search the forums; I recall a post a while back where someone benchmarked NVMe in the older and newer QM2 cards.
Thanks, but I am not really concerned about NVMe vs SATA, as I feel no one in everyday use would actually see or feel any difference.

This, for me, is all about getting data from my QNAP to my Synology a helluva lot faster than the 50 MB/s I sometimes get.

My best is from QNAP to USB 3:
hbs4Capture.JPG
Qnap TS-1277 1700 (48gb RAM) 8x10TB WD White,- Raid5, 2x M.2 Crucial 1TB (Raid 1 VM),
2x SSD 860 EVO 500gb (Raid1 QTS), 2x SSD 860 EVO 250GB (Cache), 2x M.2 PCIe 970 500gb NVME (Raid1 Plex and Emby server)
GTX 1050 TI
Qnap TVS-1282 i7 (32GB RAM) 6x8TB WD White - JBOD, 2x M.2 Crucial 500gb (Raid1 VM),
2x SSD EVO 500gb (Raid1 QTS), 2x SSD EVO 250gb (Raid1 Cache), 2x M.2 PCIe Intel 512GB NVME (Raid1-Servers)
Synology -1817+ - DOA
Drobo 5n - 5x4TB Seagate, - Drobo Raid = 15TB
ProBox 8 Bay USB3 - 49TB mixed drives - JBOD
All software is updated asap.
I give my opinion from my experience i.e. I have (or had) that piece of equipment/software and used it! :roll:
Toxic17
Ask me anything
Posts: 6469
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth

Re: Horrible Performance

Post by Toxic17 »

spikemixture wrote: Fri Aug 09, 2019 11:03 am
My best is from QNAP to USB 3:

hbs4Capture.JPG
Unfortunately, you might not get much faster "reported" throughput from the HBS logs, especially if you compare them to your maximum-throughput tests, which are not the same thing as a backup, and when you have thousands of small files.

Let me explain...

There are many processes running when a backup task starts, and they all affect the overall throughput speed that is logged/calculated.

As the backup starts, so does the timer. Then a snapshot of the backup files is taken. Note: this is on by default; you can turn that part off if you want to. A snapshot captures the data at the precise moment it is run, rather than sending live files to the backup NAS that may change in the background whilst the backup is running (AFAIK).

At this stage no files have been sent, yet the timer is running, which in turn affects the throughput figure in the backup report.

Once the snapshot is done, files are checked at both ends for modification dates and file sizes. This process runs on both NASes. Then, and only then, can the files be sent to the backup NAS.
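To see how much that overhead skews the reported figure, here is a rough hypothetical example (the numbers below are made up for illustration, not taken from HBS):

Code: Select all

# Hypothetical: 60 GB actually moves at 300 MB/s on the wire, but 600 s of
# snapshot + file-comparison work runs before any data is sent.
data_mb=61440        # 60 GB expressed in MB
wire_rate=300        # MB/s during the actual transfer
overhead_s=600       # snapshot + comparison time in seconds
transfer_s=$((data_mb / wire_rate))
echo "Reported rate: $((data_mb / (transfer_s + overhead_s))) MB/s"   # ~76 MB/s despite a 300 MB/s transfer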

I guess by tweaking the backup filters, schedules, etc. you may be able to cut down the overall time of your backup tasks, especially using QuDedup.

In my case, 90% of my backups run in the quiet hours so they are not affected by any other NAS tasks. They are scheduled so they do not overlap each other. Make sure RAID scrubbing and other HDD checks run outside of the backup windows, as these would impact throughput.

I have also broken the backup tasks down into daily, weekly, and monthly backups depending on the type of data and whether it changes or not. I also have two other tasks: one with no schedule (a pure back-up-once plan for some data/ISOs) and a web-folder backup which uses versioning and backs up several times a day.

Since most of my backups run out of hours or when I am not home, I personally couldn't care less what speed they are, as long as the backups complete and a restore can be achieved without impacting my life or me worrying about how long they took.
Regards Simon

Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following


NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
Toxic17
Ask me anything
Posts: 6469
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth

Re: Horrible Performance

Post by Toxic17 »

BTW, can you run the following over SSH on the QNAP? What results do you get?

[/] # qcli_storage -T force=1

and

[/] # qcli_storage -t force=1

I would also try asking in the Synology forums whether there is an equivalent command that gives a similar RAID throughput figure.

My TS-463 with 3TB WD (slow) Reds:

Code: Select all

[/] # qcli_storage -T force=1
fio test command for physical disk: /sbin/fio --filename=test_device --direct=1 --rw=read --bs=1M --runtime=15 --name=test-rea
fio test command for RAID: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioeng
Start testing!
Performance test is finished 100.000%...
Enclosure  Port  Sys_Name      Throughput    RAID        RAID_Type    RAID_Throughput   Pool
NAS_HOST   1     /dev/sdb      530.54 MB/s   --          --           --                --
NAS_HOST   2     /dev/sda      526.96 MB/s   --          --           --                --
NAS_HOST   3     /dev/sde      144.00 MB/s   /dev/md1    RAID 5       408.18 MB/s       1
NAS_HOST   4     /dev/sdf      141.36 MB/s   /dev/md1    RAID 5       408.18 MB/s       1
NAS_HOST   5     /dev/sdc      144.52 MB/s   /dev/md1    RAID 5       408.18 MB/s       1
NAS_HOST   6     /dev/sdd      145.18 MB/s   /dev/md1    RAID 5       408.18 MB/s       1
NAS_HOST   P2-1  /dev/nvme0n1  1.42 GB/s     /dev/md2    RAID 1       1.60 GB/s         1
NAS_HOST   P2-2  /dev/nvme1n1  1.40 GB/s     /dev/md2    RAID 1       1.60 GB/s         1

Code: Select all

[/] # qcli_storage -t force=1
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID   VolName             Pool     Mapping_Name            Throughput      Mount_Path                    FS_Throughput
1       DataVol1            1        /dev/mapper/cachedev1   523.98 MB/s     /share/CACHEDEV1_DATA         207.12 MB/s
Ran the last one again...

Code: Select all

[/] # qcli_storage -t force=1
fio test command for LV layer: /sbin/fio --filename=test_device --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 &>/tmp/qcli_storage.log
fio test command for File system: /sbin/fio --filename=test_device/qcli_storage --direct=0 --rw=read --bs=1M --runtime=15 --name=test-read --ioengine=libaio --iodepth=32 --size=128m &>/tmp/qcli_storage.log
Start testing!
Performance test is finished 100.000%...
VolID   VolName             Pool     Mapping_Name            Throughput      Mount_Path                    FS_Throughput
1       DataVol1            1        /dev/mapper/cachedev1   519.00 MB/s     /share/CACHEDEV1_DATA         927.54 MB/s
Regards Simon

Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following


NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
spikemixture
Been there, done that
Posts: 890
Joined: Wed Mar 07, 2018 11:04 pm
Location: 3rd World

Re: Horrible Performance

Post by spikemixture »

No success.
I dislike SSH or any DOS-type input. Spaces are very important but never obvious.
I tried this twice:

[~] # [/]#qcli_storage -T force=1
-sh: [/]#qcli_storage: No such file or directory
[~] #
[~] # [/] #qcli_storage -T force=1
-sh: [/]: No such file or directory
[~] #

I think I am over this whole 10GbE thing.
Qnap TS-1277 1700 (48gb RAM) 8x10TB WD White,- Raid5, 2x M.2 Crucial 1TB (Raid 1 VM),
2x SSD 860 EVO 500gb (Raid1 QTS), 2x SSD 860 EVO 250GB (Cache), 2x M.2 PCIe 970 500gb NVME (Raid1 Plex and Emby server)
GTX 1050 TI
Qnap TVS-1282 i7 (32GB RAM) 6x8TB WD White - JBOD, 2x M.2 Crucial 500gb (Raid1 VM),
2x SSD EVO 500gb (Raid1 QTS), 2x SSD EVO 250gb (Raid1 Cache), 2x M.2 PCIe Intel 512GB NVME (Raid1-Servers)
Synology -1817+ - DOA
Drobo 5n - 5x4TB Seagate, - Drobo Raid = 15TB
ProBox 8 Bay USB3 - 49TB mixed drives - JBOD
All software is updated asap.
I give my opinion from my experience i.e. I have (or had) that piece of equipment/software and used it! :roll:
Toxic17
Ask me anything
Posts: 6469
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth

Re: Horrible Performance

Post by Toxic17 »

lol

the "[/] #" is my command prompt showing you where has what it is like 1000's of examples are like this.

the No such file or directory means it does not recognise what your typed in ie the "[/] #" which is not a linux command as it is a command prompt pointer only.

use

qcli_storage -T force=1

qcli_storage -t force=1
Regards Simon

Qnap Downloads
MyQNap.Org Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following


NAS: TS-673A QuTS hero h5.1.2.2534 • TS-121 4.3.3.2420 • APC Back-UPS ES 700G
Network: VM Hub3: 500/50 • UniFi UDM Pro: 3.2.9 • UniFi Network Controller: 8.0.28
USW-Aggregation: 6.6.61 • US-16-150W: 6.6.61 • 2x USW Mini Flex 2.0.0 • UniFi AC Pro 6.6.62 • UniFi U6-LR 6.6.62
UniFi Protect: 2.11.21/8TB Skyhawk AI • 3x G3 Instants: 4.69.55 • UniFi G3 Flex: 4.69.55 • UniFi G5 Flex: 4.69.55
spikemixture
Been there, done that
Posts: 890
Joined: Wed Mar 07, 2018 11:04 pm
Location: 3rd World

Re: Horrible Performance

Post by spikemixture »

Toxic17 wrote: Sun Aug 11, 2019 5:48 pm
qcli_storage -T force=1

Code: Select all

Performance test is finished 100.000%...
Enclosure  Port  Sys_Name      Throughput    RAID        RAID_Type    RAID_Throughput   Pool
NAS_HOST   1     /dev/sda      490.87 MB/s   /dev/md3    RAID 1       1006.00 MB/s      288
NAS_HOST   2     /dev/sdb      483.79 MB/s   /dev/md3    RAID 1       1006.00 MB/s      288
NAS_HOST   3     /dev/sde      523.08 MB/s   /dev/md1    RAID 1       1022.00 MB/s      1
NAS_HOST   4     /dev/sdc      518.16 MB/s   /dev/md1    RAID 1       1022.00 MB/s      1
NAS_HOST   5     /dev/sdf      524.17 MB/s   /dev/md4    RAID 1       1.00 GB/s         1
NAS_HOST   6     /dev/sdd      505.15 MB/s   /dev/md4    RAID 1       1.00 GB/s         1
NAS_HOST   7     /dev/sdh      142.44 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   8     /dev/sdg      161.08 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   9     /dev/sdn      145.52 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   10    /dev/sdm      132.08 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   11    /dev/sdk      144.22 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   12    /dev/sdl      158.96 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   13    /dev/sdi      155.64 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   14    /dev/sdj      177.09 MB/s   /dev/md2    RAID 6       155.90 MB/s       2
NAS_HOST   P2-1  /dev/nvme1n1  862.41 MB/s   /dev/md6    RAID 1       1.67 GB/s         3
NAS_HOST   P2-2  /dev/nvme0n1  862.14 MB/s   /dev/md6    RAID 1       1.67 GB/s         3
[~] #


And for qcli_storage -t force=1:

Code: Select all

Performance test is finished 100.000%...
VolID   VolName             Pool     Mapping_Name            Throughput      Mount_Path                    FS_Throughput
1       QTS                 1        /dev/mapper/cachedev1   1012.00 MB/s    /share/CACHEDEV1_DATA         969.70 MB/s
2       DATA                2        /dev/mapper/cachedev2   247.08 MB/s     /share/CACHEDEV2_DATA         334.20 MB/s
3       VM                  288      /dev/mapper/cachedev3   1006.00 MB/s    /share/CACHEDEV3_DATA         859.06 MB/s
4       PLEX                3        /dev/mapper/cachedev5   1.66 GB/s       /share/CACHEDEV5_DATA         1.34 GB/s
[~] #

1. QTS: 2x SSD, RAID 1 (OS)
2. DATA: 8x 10TB, RAID 6 (data)
3. VM: 2x M.2 SATA, RAID 1 (VMs)
4. PLEX: 2x M.2 NVMe, RAID 1 (Plex Media Server)
Qnap TS-1277 1700 (48gb RAM) 8x10TB WD White,- Raid5, 2x M.2 Crucial 1TB (Raid 1 VM),
2x SSD 860 EVO 500gb (Raid1 QTS), 2x SSD 860 EVO 250GB (Cache), 2x M.2 PCIe 970 500gb NVME (Raid1 Plex and Emby server)
GTX 1050 TI
Qnap TVS-1282 i7 (32GB RAM) 6x8TB WD White - JBOD, 2x M.2 Crucial 500gb (Raid1 VM),
2x SSD EVO 500gb (Raid1 QTS), 2x SSD EVO 250gb (Raid1 Cache), 2x M.2 PCIe Intel 512GB NVME (Raid1-Servers)
Synology -1817+ - DOA
Drobo 5n - 5x4TB Seagate, - Drobo Raid = 15TB
ProBox 8 Bay USB3 - 49TB mixed drives - JBOD
All software is updated asap.
I give my opinion from my experience i.e. I have (or had) that piece of equipment/software and used it! :roll: