QTS Hero... ZFS? What? When?

Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

It depends on whether the backup NAS is running QTS Hero also.

So think of it this way....

You take 15TB of source data and inline dedupe/compress it down to 4TB of storage space (ZFS, System A).

Then you want to back up/replicate that data. If System B doesn't have ZFS, you still have to store the full 15TB.

If it does have ZFS, then System B would also be able to reduce it down to that 4TB (assuming sufficient memory, CPU, etc.).

Now I would hope they have a dedicated replication option for ZFS-to-ZFS (to eliminate the decompress/recompress round trip), but again I am just speculating.
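A rough sketch of what that ZFS-to-ZFS path could look like, purely to illustrate the space numbers above. The pool/dataset names (tank/data, backup-nas, backuppool/data) are made up, it assumes OpenZFS with SSH access on both boxes, and it says nothing about how QNAP will actually wire it up:

```python
# Rough sketch only - placeholder names, OpenZFS + SSH assumed on both ends.
import subprocess

# On System A: 'logicalused' is roughly the 15TB the clients see,
# while 'used' is the ~4TB actually occupied after compression/dedup.
print(subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value",
     "used,logicalused,compressratio", "tank/data"],
    capture_output=True, text=True, check=True).stdout)

# ZFS-to-ZFS replication of a snapshot. The -c flag requests a
# compressed send stream, so blocks that are already compressed on
# disk are shipped as-is instead of being decompressed and recompressed.
subprocess.run(["zfs", "snapshot", "tank/data@rep1"], check=True)
send = subprocess.Popen(["zfs", "send", "-c", "tank/data@rep1"],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-nas", "zfs", "receive", "backuppool/data"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```

As far as I know, dedup savings are not carried inside the stream itself; the receiving pool re-dedupes on write if its own dedup property is enabled.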


Sent from my iPad using Tapatalk
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
Bob Zelin
Experience counts
Posts: 1375
Joined: Mon Nov 21, 2016 12:55 am
Location: Orlando, FL.

Re: QTS Hero... ZFS? What? When?

Post by Bob Zelin »

Hi Daniel -
just for the sake of clarity - I have not only 2 people, but 20 people editing off a QNAP like the TS-1683XU-RP at the same time with 4K video media - right now - with QTS 4.3.6.1070. And that is with spinning drives (and a Netgear XS748T switch). But of course even MORE performance is needed, for even higher bandwidths. This is why I am excited about QTS HERO. I have NO PROBLEM buying 64 Gig of RAM (and it makes no sense why I can't put larger ECC RAM chips than 16 Gig into products like the
TS-1683XU-RP and the TS-1677XU-RP, so I can have 128 Gig of RAM).

I will certainly be trying this (and I will certainly be one of the first to complain!)

AND - if Moogle is a "newbie" - then most of us should just give up right now.

Bob Zelin
Bob Zelin / Rescue 1, Inc.
http://www.bobzelin.com
Moogle Stiltzkin
Guru
Posts: 11448
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin »

Trexx wrote: Tue Oct 22, 2019 12:54 am It depends on whether the backup NAS is running QTS Hero also.

So think of it this way....

You take 15TB of source data and inline dedupe/compress it down to 4TB of storage space (ZFS, System A).

Then you want to back up/replicate that data. If System B doesn't have ZFS, you still have to store the full 15TB.
Ah, that's what I thought :( Hopefully I'll upgrade to another new ZFS NAS model down the road when my old models go EOL :wink:

But Daniel gave another reason why you'd still want to compress (and probably dedupe?) anyway: the increased performance. So it might still be worth doing even if, in this example, you'd still need to back up the full 15TB. Hm.

Trexx wrote: Tue Oct 22, 2019 12:54 am If it does have ZFS, then System B would also be able to reduce it down to that 4TB (assuming sufficient memory, CPU, etc.).

Now I would hope they have a dedicated replication option for ZFS-to-ZFS (to eliminate the decompress/recompress round trip), but again I am just speculating.
I am also wondering about this part: when moving data from one ZFS NAS to another ZFS NAS, does the destination NAS need 15TB free to initially hold the uncompressed data before it can compress and dedupe it back down to the same savings? Or can it do that on the fly, so you don't need 15TB of storage just to receive the backup? I'm confused by this.

I found this article, but it's a bit old... I'm wondering if things have progressed since then or not.
rsync.net: ZFS Replication to the cloud is finally here—and it’s fast
Even an rsync-lifer admits ZFS replication and rsync.net are making data transfers better.

JIM SALTER - 12/17/2015


A love affair with rsync

Revisiting a first love of any kind makes for a romantic trip down memory lane, and that's what revisiting rsync—as in "rsync.net"—feels like for me. It's hard to write an article that's inevitably going to end up trashing the tool, because I've been wildly in love with it for more than 15 years. Andrew Tridgell (of Samba fame) first announced rsync publicly in June of 1996. He used it for three chapters of his PhD thesis three years later, about the time that I discovered and began enthusiastically using it. For what it's worth, the earliest record of my professional involvement with major open source tools—at least that I've discovered—is my activity on the rsync mailing list in the early 2000s.

Rsync is a tool for synchronizing folders and/or files from one location to another. Adhering to true Unix design philosophy, it's a simple tool to use. There is no GUI, no wizard, and you can use it for the most basic of tasks without being hindered by its interface. But somewhat rare for any tool, in my experience, rsync is also very elegant. It makes a task which is humanly intuitive seem simple despite being objectively complex.
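For readers who have never touched it, the basic invocation being described is roughly the following, sketched here via Python's subprocess; the folder paths are placeholders:

```python
# Mirror one local folder into another; '-a' (archive mode) preserves
# permissions, timestamps, symlinks, etc. The paths are placeholders.
import subprocess

subprocess.run(["rsync", "-a", "/source/folder/", "/target/folder/"],
               check=True)
```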

So far, this isn't much more than a kinda-nice version of copy. But where it gets interesting is when /target/folder already exists. In that case, rsync will compare each of those files in /source/folder with its counterpart in /target/folder, and it will only update the latter if the source has changed. This keeps everything in the target updated with the least amount of thrashing necessary. This is much cleaner than doing a brute-force copy of everything, changed or not!

When rsyncing remotely, rsync still looks over the list of files in the source and target locations, and the tool only messes with files that have changed. It gets even better still—rsync also tokenizes the changed files on each end and then exchanges the tokens to figure out which blocks in the files have changed. Rsync then only moves those individual blocks across the network. (Holy saved bandwidth, Batman!)
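A heavily simplified sketch of that block-matching idea. Real rsync adds a rolling weak checksum so it can also match blocks at shifted offsets; this version deliberately skips that:

```python
# Hash fixed-size blocks on each side and only ship the blocks whose
# hashes differ. A toy model of delta transfer, not rsync's algorithm.
import hashlib

BLOCK = 128 * 1024  # 128 KiB blocks

def block_hashes(path):
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes

def blocks_to_send(src_path, dst_path):
    src, dst = block_hashes(src_path), block_hashes(dst_path)
    # Only these block indexes would have to cross the network.
    return [i for i, h in enumerate(src) if i >= len(dst) or dst[i] != h]
```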

You can go further and further down this rabbit hole of "what can rsync do." Inline compression to save even more bandwidth? Check. A daemon on the server end to expose only certain directories or files, require authentication, only allow certain IPs access, or allow read-only access to one group but write access to another? You got it. Running "rsync" without any arguments gets you a "cheat sheet" of valid command line arguments several pages long.

To Windows-only admins whose eyes are glazing over by now: rsync is "kinda like robocopy" in the same way that you might look at a light saber and think it's "kinda like a sword."


If rsync's so great, why is ZFS replication even a thing?

This really is the million dollar question.
I hate to admit it, but I'd been using ZFS myself for something like four years before I realized the answer. In order to demonstrate how effective each technology is, let's go to the numbers. I'm using rsync.net's new ZFS replication service on the target end and a Linode VM on the source end. I'm also going to be using my own open source orchestration tool syncoid to greatly simplify the otherwise-tedious process of ZFS replication.


OK, ZFS is faster sometimes. Does it matter?

I have to be honest—I feel a little like a monster. Most casual users' experience of rsync will be "it rocks!" and "how could anything be better than this?" But after 15 years of daily use, I knew exactly what rsync's weaknesses were, and I targeted them ruthlessly.

As for ZFS replication's weaknesses, well, it really only has one: you need to be using ZFS on both ends. On the one hand, I think you should already want ZFS on both ends. There's a giant laundry list of features you can only get with a next-generation filesystem. But you could easily find yourself stuck with a lesser filesystem—and if you're stuck, you're stuck. No ZFS, no ZFS replication.

Aside from that, ZFS replication ranges from "just as fast as anything else" to "noticeably faster than anything else" to "sit down, shut up, and hold on." The particular use case that drove me to finally exploring replication—which was much, much more daunting before tools like syncoid automated it—was the replication of VM images.

Virtualization keeps getting more and more prevalent, and VMs mean gigantic single files. rsync has a lot of trouble with these. The tool can save you network bandwidth when synchronizing a huge file with only a few changes, but it can't save you disk bandwidth, since rsync needs to read through and tokenize the entire file on both ends before it can even begin moving data across the wire. This was enough to be painful, even on our little 8GB test file. On a two terabyte VM image, it turns into a complete non-starter. I can (and do!) sync a two terabyte VM image daily (across a 5mbps Internet connection) usually in well under an hour. Rsync would need about seven hours just to tokenize those files before it even began actually synchronizing them... and it would render the entire system practically unusable while it did, since it would be greedily reading from the disks at maximum speed in order to do so.
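The arithmetic behind that claim, using an assumed ~80 MB/s sustained read rate for a busy spinning-disk array (my number, not the article's benchmark), plus the shape of the ZFS incremental send that sidesteps the problem (dataset and snapshot names are made up):

```python
# rsync must read and checksum the whole image on BOTH ends before it
# can send a single byte; an incremental ZFS send never does that.
image_tb = 2
read_mb_per_s = 80
hours = image_tb * 1_000_000 / read_mb_per_s / 3600
print(f"~{hours:.1f} hours just to read a {image_tb}TB image once")  # ~6.9

# The ZFS equivalent (made-up dataset/snapshot names) only walks the
# blocks that changed between the two snapshots:
#   zfs send -i tank/vm@yesterday tank/vm@today | ssh backup zfs receive backuppool/vm
```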

The moral of the story? Replication definitely matters.
https://arstechnica.com/information-tec ... -its-fast/


Digging into the new features in OpenZFS post-Linux migration
JIM SALTER - 6/20/2019


ZFS on Linux 0.8 (ZoL) brought tons of new features and performance improvements when it was released on May 23. They came after Delphix announced that it was migrating its own product to Linux back in March 2018. We'll go over some of the most exciting May features (like ZFS native encryption) here today.

For the full list—including both new features and performance improvements not covered here—you can visit the ZoL 0.8.0 release on Github. (Note that ZoL 0.8.1 was released last week, but since ZFS on Linux follows semantic versioning, it's a bugfix release only.)

Unfortunately for Ubuntu fans, these new features won't show up in Canonical's repositories for quite some time—October 2019's forthcoming interim release, Eoan Ermine, is still showing 0.7.12 in its repos. We can hope that Ubuntu 20.04 LTS (which has yet to be named) will incorporate the 0.8.x branch, but there's no official word so far; if you're running Ubuntu 18.04 (or later) and absolutely cannot wait, the widely-used Jonathon F PPA has 0.8.1 available. Debian has 0.8.0 in its experimental repo, Arch Linux has 0.8.1 in its zfs-dkms AUR package, and Gentoo has 0.8.1 in testing at sys-fs/zfs. Users of other Linux distributions can find instructions for building packages directly from master at https://zfsonlinux.org/.

That aforementioned Linux migration added Delphix's impressive array of OpenZFS developers to the large contingent already working on ZFS on Linux. In November, the FreeBSD project announced its acknowledgment of the new de facto primacy of Linux as the flagship development platform for OpenZFS. FreeBSD did so by rebasing its own OpenZFS codebase on ZFS on Linux rather than Illumos. In even better news for BSD fans, the porting efforts necessary will be adopted into the main codebase of ZFS on Linux itself, with PRs being merged from FreeBSD's new ZoL fork as work progresses.

The last few months have been extremely busy for ZFS on Linux—and by extension, the entire OpenZFS project. Historically, the majority of new OpenZFS development was done by employees working at Delphix, who in turn used Illumos as their platform of choice. From there, new code was ported relatively quickly to FreeBSD and somewhat more slowly to Linux.

But over the years, momentum built up for the ZFS on Linux project. The stream of improvements and bugfixes reversed course—almost all of the really exciting new features debuting in 0.8 originated in Linux, instead of being ported in from elsewhere.



Let's dig into the most important stuff.

ZFS native encryption
One of the most important new features in 0.8 is Native ZFS Encryption. Until now, ZFS users have relied on OS-provided encrypted filesystem layers either above or below ZFS. While this approach does work, it presented difficulties—encryption (GELI or LUKS) below the ZFS layer decreases ZFS' native ability to assure data safety. Meanwhile, encryption above the ZFS layer (GELI or LUKS volumes created on ZVOLs) makes ZFS native compression (which tends to increase both performance and usable storage space when enabled) impossible.
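For reference, creating such a dataset on OpenZFS 0.8+ looks roughly like this (the pool/dataset name is a placeholder and nothing here is QTS Hero-specific). Because ZFS compresses before it encrypts, compression keeps working on the encrypted dataset:

```python
# Hedged sketch: placeholder dataset name, OpenZFS 0.8+ assumed.
import subprocess

subprocess.run(
    ["zfs", "create",
     "-o", "encryption=aes-256-gcm",
     "-o", "keyformat=passphrase",   # zfs prompts for it interactively
     "-o", "compression=lz4",
     "tank/secure"],
    check=True)
```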

The utility of native encryption doesn't stop with better integration and ease-of-use for encrypted filesystems, though; the feature also comes with raw encrypted ZFS replication. When you've encrypted a ZFS filesystem natively, it's possible to replicate that filesystem intact to a remote ZFS pool without ever decrypting (or decompressing) the data—and without the remote system ever needing to be in possession of the key that can decrypt it.

This feature, in turn, means that one could use ZFS replication to keep an untrusted remote backup system up to date.
This makes it impossible—even for an attacker who's got root and/or physical access on the remote system—to steal the data being backed up there.
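A sketch of that raw replication flow, with made-up host and dataset names and OpenZFS 0.8+ assumed on both ends:

```python
# 'zfs send -w' (raw) ships blocks exactly as they sit on disk - still
# compressed, still encrypted - so the remote box never needs the key.
import subprocess

subprocess.run(["zfs", "snapshot", "tank/secure@offsite1"], check=True)
send = subprocess.Popen(["zfs", "send", "-w", "tank/secure@offsite1"],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", "untrusted-box", "zfs", "receive", "backup/secure"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()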


ZFS device removal
Among the most common complaints of ZFS hobbyists is that, if you bobble a command to add new disks to an existing ZFS pool, you can't undo it. You're stuck with a pool that includes single-disk vdevs and has effectively no parity or redundancy.

In the past, the only mitigation was to attach more disks to the new single-disk vdevs, upgrading them to mirrors; this might not be so bad if you're working with a pool of mirrors in the first place. But it's cold comfort if your pool is based on RAIDz (striped) vdevs—or if you're just plain out of money and/or bays for new disks.

Beginning with 0.8.0, device removal is possible in a limited number of scenarios with a new zpool remove command. A word to the wise, however—device removal isn't trivial, and it shouldn't be done lightly. A pool which has devices removed ends up with what amounts to CNAMEs for the missing storage blocks; filesystem calls referencing blocks originally stored on the removed disks end up first looking for the original block, then being redirected to the blocks' new locations. This should have relatively little impact on a device mistakenly added and immediately removed, but it could have serious performance implications if used to remove devices with many thousands of used blocks.
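The command itself is a one-liner; the pool and device names below are placeholders, and removal is only supported for certain vdev layouts:

```python
# Hedged sketch: works for single-disk and mirror top-level vdevs,
# not for RAIDZ members.
import subprocess

# Undo an accidental single-disk 'zpool add':
subprocess.run(["zpool", "remove", "tank", "sdx"], check=True)

# The evacuation/remapping progress shows up in the pool status:
print(subprocess.run(["zpool", "status", "tank"],
                     capture_output=True, text=True, check=True).stdout)
```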

TRIM support in ZFS
One of the longest-standing complaints about ZFS on Linux is its lack of TRIM support for SSDs. Without TRIM, the performance of an SSD degrades significantly over time—after several years of unTRIMmed hard use, an SSD can easily be down to 1/3 or less of its original performance.

If your point of comparison is conventional hard disks, this doesn't matter too much; a good SSD will typically have five or six times the throughput and 10,000 times the IOPS of even a very fast rust disk. So what's a measly 67% penalty among friends? But if you're banking on the system's as-provisioned performance, you're in trouble.

Luckily, 0.8 brings support for both manual and automatic TRIM to ZFS. Most users and administrators will want to use the autotrim pool property to enable automatic, real-time TRIM support; extremely performance-sensitive systems with windows of less storage use may elect instead to schedule regular TRIM tasks during off hours with zpool trim.
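A sketch of the two modes (the pool name is a placeholder):

```python
import subprocess

# Continuous, low-priority TRIM as blocks are freed:
subprocess.run(["zpool", "set", "autotrim=on", "tank"], check=True)

# ...or an explicit pass you can schedule for off-hours (e.g. via cron):
subprocess.run(["zpool", "trim", "tank"], check=True)
```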

ZFS pool checkpoints
Checkpoints aren't as glamorous as the features we've already mentioned, but they can certainly save your bacon. Think of a checkpoint as something like a pool-wide snapshot. But where a snapshot preserves the state of a single dataset or ZVOL, a checkpoint preserves the state of the entire pool.

If you're about to enable a new feature flag that changes on-disk format (which would normally be irreversible), you might first zpool checkpoint the pool, allowing you to roll it back to the pre-upgrade condition. Checkpoints can also be used to roll back otherwise-irreversible dataset or zvol level operations, such as destroy. Accidentally zfs destroy an entire dataset, when you only meant to destroy one of its snapshots? If you've got a checkpoint, you can roll that action back.
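A sketch of that workflow (placeholder pool name; note that a pool holds at most one checkpoint at a time):

```python
import subprocess

# Take a pool-wide checkpoint before the risky operation:
subprocess.run(["zpool", "checkpoint", "tank"], check=True)

# Rolling back later requires an export/import cycle:
#   zpool export tank
#   zpool import --rewind-to-checkpoint tank
# If everything went fine, discard it so it stops pinning old blocks:
subprocess.run(["zpool", "checkpoint", "-d", "tank"], check=True)
```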

https://arstechnica.com/gadgets/2019/06 ... xes-0-8-1/
NAS
[Main Server] QNAP TS-877 (QTS) w. 4tb [ 3x HGST Deskstar NAS & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A (Truenas Core) w. 4x 2TB Samsung F3 (HD203WI) RaidZ1 ZFS + 8gb ddr3 Crucial
[^] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) 2x 4TB Seagate Ironwolf, Raid5
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-253D (Truenas Scale)
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Qotom Pfsense|100mbps FTTH | Win11, Ryzen 5600X Desktop (1x2tb Crucial P50 Plus M.2 SSD, 1x 8tb seagate Ironwolf,1x 4tb HGST Ultrastar 7K4000)


Resources
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin
QNAPDanielFL
Easy as a breeze
Posts: 488
Joined: Fri Mar 31, 2017 7:09 am

Re: QTS Hero... ZFS? What? When?

Post by QNAPDanielFL »

"i am also wondering about this part how moving data from a ZFS nas to another ZFS nas. does the destination NAS need to have 15tb to initially hold the uncompress data before it can then compress back and also dedupe to achieve the same savings. or can it on the fly do it, so you don't require 15tb of storage in order to receive the backup initially?"

Compression and deduplication happen on the fly. Data goes into RAM, and while it is in RAM, it is compressed and deduplicated before being written to the drives. So you would not need the full 15TB of storage on the backup NAS in this example.
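A minimal illustration of what "on the fly" means here, with plain zlib standing in for the NAS's compressor (nothing QNAP-specific): the incoming stream is reduced while it is still in memory, so only the reduced bytes ever need disk space on the target.

```python
import zlib

def receive_compressed(stream_chunks, out_path):
    comp = zlib.compressobj(level=6)
    raw = stored = 0
    with open(out_path, "wb") as f:
        for chunk in stream_chunks:          # arrives over the network
            raw += len(chunk)
            out = comp.compress(chunk)       # reduced in RAM...
            stored += len(out)
            f.write(out)                     # ...before touching disk
        tail = comp.flush()
        stored += len(tail)
        f.write(tail)
    return raw, stored

# A very repetitive ~63 MiB stream ends up as a tiny file on disk:
chunks = (b"the same block of data " * 4096 for _ in range(700))
raw, stored = receive_compressed(chunks, "/tmp/inline_demo.bin")
print(f"received {raw / 2**20:.0f} MiB, wrote {stored / 2**20:.2f} MiB")
```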
Moogle Stiltzkin
Guru
Posts: 11448
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin »

Oo, that's pretty kewl then. So it's a best-case scenario for a ZFS NAS backing up to another ZFS NAS. Pretty awesome :D
NAS
[Main Server] QNAP TS-877 (QTS) w. 4tb [ 3x HGST Deskstar NAS & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A (Truenas Core) w. 4x 2TB Samsung F3 (HD203WI) RaidZ1 ZFS + 8gb ddr3 Crucial
[^] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) 2x 4TB Seagate Ironwolf, Raid5
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-253D (Truenas Scale)
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Qotom Pfsense|100mbps FTTH | Win11, Ryzen 5600X Desktop (1x2tb Crucial P50 Plus M.2 SSD, 1x 8tb seagate Ironwolf,1x 4tb HGST Ultrastar 7K4000)


Resources
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

Moogle Stiltzkin wrote: Oo, that's pretty kewl then. So it's a best-case scenario for a ZFS NAS backing up to another ZFS NAS. Pretty awesome :D
Well, only somewhat best case, as that still means you have to move 15TB across the network vs. say 4TB (a compressed/dedup'd version).


Sent from my iPhone using Tapatalk
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: QTS Hero... ZFS? What? When?

Post by storageman »

It has to be dedupable/compressible data - not all data is.

So before you kill performance trying it, make sure it's the right kind of data.
Lots of identical VMs dedupe quite nicely, for example. Graphics, not so much.
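A quick, hedged way to sanity-check your own data before enabling anything, with plain zlib as a rough stand-in for the NAS's compressor; already-compressed media barely shrinks:

```python
import os, zlib

text_like = b"INFO request handled in 12ms\n" * 40000   # logs, VMs, docs
media_like = os.urandom(len(text_like))                 # JPEG/H.264-like

for name, data in [("text-like", text_like), ("media-like", media_like)]:
    ratio = len(data) / len(zlib.compress(data, 6))
    print(f"{name}: {ratio:.2f}x")   # huge ratio vs. roughly 1.00x
```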

Over on the HBS 3 threads there are still a lot of bugs - are these going to be fixed before launching ZFS?
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

My guess is that QTS Hero is likely leveraging OpenZFS, whereas HBS3 is likely using something completely different.


Sent from my iPhone using Tapatalk
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: QTS Hero... ZFS? What? When?

Post by storageman »

Trexx wrote: Thu Oct 24, 2019 1:28 am My guess is that QTS Hero is likely leveraging OpenZFS, whereas HBS3 is likely using something completely different.


Sent from my iPhone using Tapatalk
You've lost me. Two unrelated points.
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

storageman wrote: Thu Oct 24, 2019 3:46 pm You've lost me. Two unrelated points.
I thought you were implying they need to fix the dedup in HBS3 prior to QTS Hero. My point was that they are completely different dedup technologies/code bases.
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
storageman
Ask me anything
Posts: 5507
Joined: Thu Sep 22, 2011 10:57 pm

Re: QTS Hero... ZFS? What? When?

Post by storageman »

Trexx wrote: Fri Oct 25, 2019 5:34 am
storageman wrote: Thu Oct 24, 2019 3:46 pm You've lost me. Two unrelated points.
I thought you were implying they need to fix the dedup in HBS3 prior to QTS Hero. My point was that they are completely different dedup technologies/code bases.
Ah OK, but ZFS dedup is native; QuDedup is not - it's just for backups.
I need to ask customers about dedup on the 1686/1640DC - is it working properly now? There were problems with the dedup table.
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

Here is the QTS Hero TechDay deck (in FR) that Qoolbox posted over in the French forums.

http://www.positiv-it.fr/PREZ/techdays2 ... 8FR%29.pdf
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq
Moogle Stiltzkin
Guru
Posts: 11448
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: QTS Hero... ZFS? What? When?

Post by Moogle Stiltzkin »

Fast RAID rebuild (resilvering?), automatic checksumming against data corruption (so a file system check is no longer required?), space-saving features (dedup, which requires a lot of RAM)...

That WORM feature also sounds good, but in what situation would you use it? Is it suitable for a backup NAS? So every time you need a backup, you wipe it all, then do a backup from scratch that is only writable once?
NAS
[Main Server] QNAP TS-877 (QTS) w. 4tb [ 3x HGST Deskstar NAS & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A (Truenas Core) w. 4x 2TB Samsung F3 (HD203WI) RaidZ1 ZFS + 8gb ddr3 Crucial
[^] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) 2x 4TB Seagate Ironwolf, Raid5
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-253D (Truenas Scale)
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Qotom Pfsense|100mbps FTTH | Win11, Ryzen 5600X Desktop (1x2tb Crucial P50 Plus M.2 SSD, 1x 8tb seagate Ironwolf,1x 4tb HGST Ultrastar 7K4000)


Resources
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin
QNAPDanielFL
Easy as a breeze
Posts: 488
Joined: Fri Mar 31, 2017 7:09 am

Re: QTS Hero... ZFS? What? When?

Post by QNAPDanielFL »

If anyone wants to know more about QTS Hero, this might answer some of your questions: https://www.youtube.com/watch?v=LQtqahHh4d4
Trexx
Ask me anything
Posts: 5393
Joined: Sat Oct 01, 2011 7:50 am
Location: Minnesota

Re: QTS Hero... ZFS? What? When?

Post by Trexx »

Thanks for sharing this, Daniel. Any projected ETA for the QTS Hero beta?
Paul

Model: TS-877-1600 FW: 4.5.3.x
QTS (SSD): [RAID-1] 2 x 1TB WD Blue m.2's
Data (HDD): [RAID-5] 6 x 3TB HGST DeskStar
VMs (SSD): [RAID-1] 2 x 1TB SK Hynix Gold
Ext. (HDD): TR-004 [RAID-5] 4 x 4TB HGST Ultrastar
RAM: Kingston HyperX Fury 64GB DDR4-2666
UPS: CP AVR1350

Model: TVS-673 32GB & TS-228a Offline
-----------------------------------------------------------------------------------------------------------------------------------------
2018 Plex NAS Compatibility Guide | QNAP Plex FAQ | Moogle's QNAP Faq