Reported vs actual data usage is invalid: 10 TB missing

Questions about SNMP, Power, System, Logs, disk, & RAID.
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

Code: Select all

zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zpool1   898G   132G   766G    14%  1.02x  ONLINE  -
zpool2  28.9T  7.53T  21.4T    26%  1.00x  ONLINE  -
So 21.4 TB is reported as free on zpool2, but the GUI is reporting something completely different.
dolbyman
Guru
Posts: 35024
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by dolbyman »

No, that is just the overall raw size of all vdevs (the RAID-level overhead still gets deducted from that).

The actual usable ZFS size for zpool2 is shown like this:

Code: Select all

zfs list zpool2
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

Code: Select all

zfs list zpool2
NAME     USED  AVAIL  REFER  MOUNTPOINT
zpool2  18.9T  1.19T   240K  /zpool2
Last edited by dolbyman on Thu Jun 08, 2023 1:59 am, edited 1 time in total.
Reason: added code tags
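For reference (my arithmetic, not from the thread): the two listings are consistent once you compare like with like. `zpool list` adds up raw vdev capacity, while `zfs list` adds up usable capacity after redundancy overhead:

```shell
# Sanity check of the two views (numbers transcribed from the outputs above).
awk 'BEGIN {
  printf "zpool view: ALLOC + FREE  = %.2f T raw\n",    7.53 + 21.4;
  printf "zfs view:   USED  + AVAIL = %.2f T usable\n", 18.9 + 1.19;
}'
# The ~8.8 T gap between the two totals is redundancy/parity overhead,
# which depends on the (unstated) RAID level of the pool.
```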
dolbyman
Guru
Posts: 35024
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by dolbyman »

Sorry, I lost the overview .. does this 18.9 T fit into what QuTS is showing you as overall storage?

If so, it would not be a QuTS display issue but (actual) used storage at the ZFS level.
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

No problem that you lost the overview; I'm very glad you're trying to help me figure this out.

The 18.9 T reported as used via zfs list zpool2 seems to tie in with the 18.97 TB of data shown as used in the GUI; see the attachment.

The difference between the 18.9 T used and the overall usable space of +/- 20.1 T is the 1.19 T of unallocated space.

The conclusion would then be that the missing 6 TB is used storage at the ZFS level. How do I investigate this further?
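One standard way to see where ZFS thinks the space went is the space-breakdown view of zfs list (a general OpenZFS command, not something suggested in the thread; QNAP's QuTS build may format it slightly differently):

```shell
# Break down USED into snapshots, the dataset itself, refreservations,
# and children, for every dataset in the pool.
zfs list -o space -r zpool2
# Columns: NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
```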
dolbyman
Guru
Posts: 35024
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by dolbyman »

Unless you have tons of tiny files, which would create a large block-size overage .. I'd say open a ticket with QNAP to have them investigate.
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

Some of my shared folders do have tons of tiny files. My book collection folder, for instance, has 22,000 folders with 3 files each, very often 500 KB to 1 MB in size.
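Back-of-envelope (my arithmetic, not from the thread): even generously assuming 1 MB per file, that collection is on the order of tens of gigabytes, so per-file allocation overhead there is unlikely to account for a 10 TB gap on its own:

```shell
awk 'BEGIN {
  files = 22000 * 3;                      # 22,000 folders x 3 files each
  printf "%d files, ~%.0f GB at 1 MB each\n", files, files / 1024;
}'
```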
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

In any case, I will chase QNAP with a ticket and investigate with them further. Thank you very much for your time and patience @dolbyman!
dolbyman
Guru
Posts: 35024
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by dolbyman »

Please keep us posted on what they say

Thanks
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

The QNAP helpdesk asked me to run this:

Please execute the following in the NAS SSH environment with your administrator access.
Code: Select all

zfs get all zpool2 |grep "reserved"
zfs list | grep zpool2 | grep snapshot
zfs list | grep zpool2 | grep share
zfs list | grep zpool2 | grep zfs20 | grep share
zfs list | grep zpool2 | grep zfs21 | grep share
zfs list | grep zpool2 | grep zfs23 | grep share
zfs list | grep zpool2 | grep zfs24 | grep share
zfs list | grep zpool2 | grep zfs25 | grep share
zfs list | grep zpool2 | grep zfs27 | grep share
zfs list | grep zpool2 | grep zfs531 | grep share
zfs get all zpool2/zfs20 |grep "reservation\|used"
zfs get all zpool2/zfs21 |grep "reservation\|used"
zfs get all zpool2/zfs22 |grep "reservation\|used"
zfs get all zpool2/zfs23 |grep "reservation\|used"
zfs get all zpool2/zfs24 |grep "reservation\|used"
zfs get all zpool2/zfs25 |grep "reservation\|used"
zfs get all zpool2/zfs27 |grep "reservation\|used"
zfs get all zpool2/zfs531 |grep "reservation\|used"

If you focus on the last 8 commands, the output is pasted below.

The total snap_refreservation across the various shares amounts to a whopping 10.041 TB, i.e. roughly 10 TB. With 7 TB of "real" data, 1 TB of pool over-provisioning (4%), 1 TB of guaranteed snapshot space, and 1 TB of unallocated space, the server is indeed full. So I am going to cut down on snapshot reservations in order to get server space back.


[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs20 |grep "reservation\|used"
zpool2/zfs20 used 3.58T -
zpool2/zfs20 reservation none default
zpool2/zfs20 refreservation 500G local
zpool2/zfs20 usedbysnapshots 1.89G -
zpool2/zfs20 usedbydataset 437G -
zpool2/zfs20 usedbychildren 192K -
zpool2/zfs20 usedbyrefreservation 500G -
zpool2/zfs20 physicalused 437G -
zpool2/zfs20 logicalused 437G -
zpool2/zfs20 snap_refreservation 3.09T local
zpool2/zfs20 usedbysnaprsrv 2.66T -
zpool2/zfs20 overwrite_reservation 437G -

[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs21 |grep "reservation\|used"
zpool2/zfs21 used 2.76T -
zpool2/zfs21 reservation none default
zpool2/zfs21 refreservation 400G local
zpool2/zfs21 usedbysnapshots 204M -
zpool2/zfs21 usedbydataset 335G -
zpool2/zfs21 usedbychildren 192K -
zpool2/zfs21 usedbyrefreservation 400G -
zpool2/zfs21 physicalused 335G -
zpool2/zfs21 logicalused 335G -
zpool2/zfs21 snap_refreservation 2.37T local
zpool2/zfs21 usedbysnaprsrv 2.04T -
zpool2/zfs21 overwrite_reservation 335G -


[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs22 |grep "reservation\|used"
zpool2/zfs22 used 2.80T -
zpool2/zfs22 reservation none default
zpool2/zfs22 refreservation 1000G local
zpool2/zfs22 usedbysnapshots 220M -
zpool2/zfs22 usedbydataset 797G -
zpool2/zfs22 usedbychildren 192K -
zpool2/zfs22 usedbyrefreservation 1000G -
zpool2/zfs22 physicalused 796G -
zpool2/zfs22 logicalused 804G -
zpool2/zfs22 snap_refreservation 1.82T local
zpool2/zfs22 usedbysnaprsrv 1.04T -
zpool2/zfs22 overwrite_reservation 797G -

[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs23 |grep "reservation\|used"
zpool2/zfs23 used 1.38T -
zpool2/zfs23 reservation none default
zpool2/zfs23 refreservation 125G local
zpool2/zfs23 usedbysnapshots 7.46M -
zpool2/zfs23 usedbydataset 102G -
zpool2/zfs23 usedbychildren 196K -
zpool2/zfs23 usedbyrefreservation 125G -
zpool2/zfs23 physicalused 102G -
zpool2/zfs23 logicalused 102G -
zpool2/zfs23 snap_refreservation 1.26T local
zpool2/zfs23 usedbysnaprsrv 1.16T -
zpool2/zfs23 overwrite_reservation 102G -

[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs24 |grep "reservation\|used"
zpool2/zfs24 used 3.03T -
zpool2/zfs24 reservation none default
zpool2/zfs24 refreservation 2.05T local
zpool2/zfs24 usedbysnapshots 148K -
zpool2/zfs24 usedbydataset 1.82T -
zpool2/zfs24 usedbychildren 184K -
zpool2/zfs24 usedbyrefreservation 236G -
zpool2/zfs24 physicalused 1.82T -
zpool2/zfs24 logicalused 1.83T -
zpool2/zfs24 snap_refreservation 999G local
zpool2/zfs24 usedbysnaprsrv 999G -
zpool2/zfs24 overwrite_reservation 24K -

[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs25 |grep "reservation\|used"
zpool2/zfs25 used 5.05T -
zpool2/zfs25 reservation none default
zpool2/zfs25 refreservation 2.93T local
zpool2/zfs25 usedbysnapshots 9.04M -
zpool2/zfs25 usedbydataset 2.12T -
zpool2/zfs25 usedbychildren 184K -
zpool2/zfs25 usedbyrefreservation 2.93T -
zpool2/zfs25 physicalused 2.12T -
zpool2/zfs25 logicalused 2.13T -
zpool2/zfs25 snap_refreservation 500G local
zpool2/zfs25 usedbysnaprsrv 0 -
zpool2/zfs25 overwrite_reservation 2.12T -

[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs27 |grep "reservation\|used"
zpool2/zfs27 used 151G -
zpool2/zfs27 reservation none default
zpool2/zfs27 refreservation 150G local
zpool2/zfs27 usedbysnapshots 148K -
zpool2/zfs27 usedbydataset 44.9G -
zpool2/zfs27 usedbychildren 176K -
zpool2/zfs27 usedbyrefreservation 105G -
zpool2/zfs27 physicalused 44.9G -
zpool2/zfs27 logicalused 45.1G -
zpool2/zfs27 snap_refreservation 1G local
zpool2/zfs27 usedbysnaprsrv 1024M -
zpool2/zfs27 overwrite_reservation 24K -


[XYZ@QNAPTS473A ~]$ zfs get all zpool2/zfs531 |grep "reservation\|used"
zpool2/zfs531 used 307K -
zpool2/zfs531 reservation none default
zpool2/zfs531 refreservation none default
zpool2/zfs531 usedbysnapshots 116K -
zpool2/zfs531 usedbydataset 192K -
zpool2/zfs531 usedbychildren 0 -
zpool2/zfs531 usedbyrefreservation 0 -
zpool2/zfs531 physicalused 140K -
zpool2/zfs531 logicalused 140K -
zpool2/zfs531 snap_refreservation 0 default
zpool2/zfs531 usedbysnaprsrv 0 -
zpool2/zfs531 overwrite_reservation 0 -
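The eight snap_refreservation values above can be totalled with a quick sketch (values transcribed from the outputs; G is treated as a thousandth of a T for this rough decimal estimate):

```shell
# Sum the snap_refreservation values reported for zfs20..zfs531.
printf '%s\n' 3.09T 2.37T 1.82T 1.26T 999G 500G 1G 0 |
awk '{ n = $1 + 0;                   # numeric part of e.g. "3.09T"
       if ($1 ~ /G$/) n /= 1000;     # G -> T (decimal approximation)
       total += n }
     END { printf "total snap_refreservation ~ %.2f T\n", total }'
```

which lands on ~10.04 T, matching the roughly 10 TB the thread set out to find.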
hermanc
Starting out
Posts: 20
Joined: Fri Jan 15, 2016 9:12 pm

Re: Reported vs actual data usage is invalid: 10 TB missing

Post by hermanc »

Posting the solution to my problem.

It seems ZFS has (or had) a bug. I deleted all snapshots and disabled all snapshot schedules, but +/- 10 TB of free space was still missing from my server. Using ChatGPT (!) I was able to dig up the ZFS command to set snap_refreservation to 10 GB for each dataset / shared folder on my system above:

Code: Select all

sudo zfs set snap_refreservation=10G zpool2/zfs21
sudo zfs set snap_refreservation=10G zpool2/zfs22
sudo zfs set snap_refreservation=10G zpool2/zfs23
sudo zfs set snap_refreservation=10G zpool2/zfs24
sudo zfs set snap_refreservation=10G zpool2/zfs25
sudo zfs set snap_refreservation=10G zpool2/zfs27

Et voilà, I got my 10 TB of free space back.
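The six zfs set commands above can be collapsed into a loop, echoed here as a dry run (note that the list as posted omits zpool2/zfs20, which had the largest snap_refreservation earlier in the thread):

```shell
# Dry run of the fix above: print each command instead of executing it.
# Remove the leading 'echo' to actually apply the settings.
for ds in zfs21 zfs22 zfs23 zfs24 zfs25 zfs27; do
  echo sudo zfs set snap_refreservation=10G "zpool2/$ds"
done
```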