Disk capacity after volume creation

stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Disk capacity after volume creation

Post by stikoz »

Hi!

I just set up a TS-431P3 with 4 brand new disks.
Disk 1: SATA 2 TB
Disk 2: SATA 2 TB
Disk 3: SATA 3 TB
Disk 4: SATA 8 TB

I first created a RAID 1 volume with the first two.
The capacity is 1.81 TB, which is pretty normal as manufacturers sell 2,000,000,000 bytes as 2 TB (instead of 1024x1024x1024x1024 bytes).
Then I created a volume on it.
I was wondering why it forced me to encrypt it (and store a dedicated key) when creating a static/thick volume.
So I went for a thin volume instead, without encryption.
The available capacity on that thin volume is 1.62 TB.

What? Where did the nearly 200 GB go in the process?
OK, it told me that it reserved some space for the system (85 GB).
And now, where did the remaining 100 GB go?
In the end I'm left with 1.6 TB, because 16.5 GB (1%) is already used on that RAID 1 volume.
At this point the Web and Public default shares are empty, so I don't see where those 16.5 GB come from.

Never mind.
I managed to create single static volumes for the latter two.
Volume 1 should be 2.73 TB for the 3 TB disk.
Volume 2 should be 7.28 TB for the 8 TB disk.
Guess what: the volumes are 2.52 TB and 6.74 TB respectively.
Again, 200 GB stolen on the first one.
I know manufacturers are liars when they sell you an 8 TB drive, but ending up with 6.74 TB of available capacity is not acceptable!
Btw, I used 100% of the disk capacity for each volume and 4K clusters.

Are you experiencing the same?

Cheers.
Last edited by stikoz on Wed Apr 07, 2021 4:37 am, edited 1 time in total.
dolbyman
Guru
Posts: 35273
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Disk capacity after volume creation

Post by dolbyman »

If you check the md devices:

Code: Select all

cat /proc/mdstat 
You will see all the hidden system partitions on the drives. Each NAS drive will have at least three (drives holding the system volume even four).

That's how these NAS units work: the system lives on the drives themselves, distributed across several spanning RAID 1 arrays.
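
If you want to dig deeper, a couple of commands over SSH will show how those arrays map onto each drive. A rough sketch only - the md and sdX names below are examples and will differ per unit:

Code: Select all

# names are examples only and will differ per NAS
mdadm --detail /dev/md9          # inspect one of the small system RAID 1 arrays listed in mdstat
parted /dev/sda unit GiB print   # show how the hidden system partitions are laid out on a drive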
stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

Hi Dolbyman,

Thanks for that quick reaction.
I'll check via SSH, and I can indeed imagine how the system is deployed on the first disks/RAID array.
But normally that shouldn't be the case on the other single disks.
And 200 GB on the 3 TB disk is already way more than what Windows would eat.
But more than 500 GB on the 8 TB disk, even with swap partitions included... just unbelievable.
Last edited by stikoz on Wed Apr 07, 2021 4:36 am, edited 1 time in total.
stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

Here are the results for disk 3 and disk 4 when I ran # parted -l
IMO that seems quite normal; there are around 10 GB of dedicated partitions per disk.
But there must be something else within partition 3 consuming the missing capacity.

Model: WDC WD30EFRX-68AX9N0 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext4 primary
2 543MB 1086MB 543MB primary
3 1086MB 2991GB 2990GB primary
4 2991GB 2992GB 543MB ext3 primary
5 2992GB 3001GB 8554MB primary

Model: TOSHIBA MG06ACA800E (scsi)
Disk /dev/sdd: 8002GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext4 primary
2 543MB 1086MB 543MB primary
3 1086MB 7992GB 7991GB primary
4 7992GB 7993GB 543MB ext3 primary
5 7993GB 8002GB 8554MB primary
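
(Summing the fixed partitions: 543 MB + 543 MB + 543 MB + 8,554 MB ≈ 10.2 GB per disk, which is where the "around 10 GB" above comes from. Everything else is inside partition 3, which holds the data volume.)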
OneCD
Guru
Posts: 12155
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: Disk capacity after volume creation

Post by OneCD »

Hi and welcome to the forum. :)
stikoz wrote: Wed Apr 07, 2021 12:42 am The capacity is 1.81 TB, which is pretty normal as manufacturers sell 2,000,000,000 bytes as 2 TB (instead of 1024x1024x1024x1024 bytes).
...
I know manufacturers are liars when they sell you an 8 TB drive, but ending up with 6.74 TB of available capacity is not acceptable!
(You're missing 3 zeros - 2,000,000,000 bytes is only 2 GB. ;))

If everyone used the right notation there'd be no issue.

HDD manufacturers are doing the right thing by selling 2 TB drives. They use the notation 'TB' - which is correct.

It's the rest of us expecting 2 TB (2 × 1000^4 bytes) to equal 2 TiB (2 × 1024^4 bytes) - which it doesn't - who are doing the wrong thing.

So your 2 TB drive has about 1.82 TiB of raw capacity.

With regard to your size calcs, please note: you will also lose space due to file-system formatting. If you're interested, I did a write-up on this a while back before switching to LVM: viewtopic.php?f=25&t=127841
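
If you want to check the conversion yourself, here's a quick sketch (assuming awk is available in your SSH session on the NAS):

Code: Select all

# marketing (decimal) terabytes converted to binary tebibytes
awk 'BEGIN {
  printf "2 TB = %.2f TiB\n", 2 * 1000^4 / 1024^4
  printf "3 TB = %.2f TiB\n", 3 * 1000^4 / 1024^4
  printf "8 TB = %.2f TiB\n", 8 * 1000^4 / 1024^4
}'
That works out to 1.82, 2.73 and 7.28 TiB, which lines up with the figures you saw before any formatting losses.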

stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

Hi OneCD,

Thanks for correcting me about the missing zeros.
I carefully read your topic before posting mine yesterday.

I do agree that:
- you need some space for the system
- dedicated partitions are created for that
- you will lose some capacity due to how manufacturers calculate disk sizes

But we're not talking about 10 GiB, or even 5% of the disk, here.

8 TB > 7.28 TiB > 6.74 TiB available (0.54 TiB lost)
3 TB > 2.73 TiB > 2.52 TiB available (0.21 TiB lost)
2 TB > 1.81 TiB > 1.62 TiB available (0.19 TiB lost)

I basically lost 1 TiB, so when it comes to capacity management, that really matters.

Btw, I just ran a test on the 3 TB disk and formatted it with 64K blocks instead of 4K.
The result is that the available capacity at the start grew from 2.52 TiB to 2.68 TiB, which seems more acceptable.
Any idea why?
OneCD
Guru
Posts: 12155
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: Disk capacity after volume creation

Post by OneCD »

stikoz wrote: Wed Apr 07, 2021 4:37 pm I do agree that:
- you need some space for the system
- dedicated partitions are created for that
- you will lose some capacity due to how manufacturers calculate disk sizes
... and you lose space because of the EXT4 file-system layout (completely separate from the OS system files and system partitions).

In the old topic I linked to, inodes were the greatest loss: nearly 94 billion bytes were lost because EXT4 needed them to organise all the files in the filesystem. That space is claimed at filesystem creation time, sits outside the filesystem's "userspace", and remains static no matter how many files are actually on the filesystem.
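
If you want to see that figure on one of your own volumes, something like this should show it (the device path below is only an example - point it at whatever your data volume is mounted from; tune2fs is normally available on QTS since the volumes are ext4):

Code: Select all

# replace the device with your own data volume (check df/mount for the right one)
tune2fs -l /dev/mapper/cachedev2 | grep -Ei 'inode (count|size)'
# space reserved for inodes (bytes) = Inode count x Inode size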

Also, with LVM, things get more complicated and more space is lost (it should be quite a small amount, but I've never checked). Might be a worthwhile project for someone to investigate disk-usage with LVM in-depth and post their findings. Feel like taking this on? :geek:
stikoz wrote: Wed Apr 07, 2021 4:37 pm But we're not talking about 10 GiB, or even 5% of the disk, here.

8 TB > 7.28 TiB > 6.74 TiB available (0.54 TiB lost)
3 TB > 2.73 TiB > 2.52 TiB available (0.21 TiB lost)
2 TB > 1.81 TiB > 1.62 TiB available (0.19 TiB lost)
The losses add up when you create separate filesystems across multiple arrays, each with its own overhead. It's more efficient to combine all drives into a single array with a single filesystem (best done when all drives are the same size). With different-sized drives there's not much that can be done - you'll have to accept less efficient storage when using a non-optimal setup.
stikoz wrote: Wed Apr 07, 2021 4:37 pm Btw, I just ran a test on the 3 TB disk and formatted it with 64K blocks instead of 4K.
The result is that the available capacity at the start grew from 2.52 TiB to 2.68 TiB, which seems more acceptable.
Any idea why?
Best guess: fewer inodes were needed, which left more space for your files.
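
As a back-of-envelope check (purely a guess: this assumes the 4K/64K option maps to ext4's bytes-per-inode ratio and that each inode is the default 256 bytes):

Code: Select all

# rough estimate of the space reserved for inodes on a ~2.7 TiB filesystem
awk 'BEGIN {
  fs = 2.7 * 1024^4                                             # filesystem size in bytes
  printf "4K ratio:  ~%.0f GiB of inodes\n", fs / 4096  * 256 / 1024^3
  printf "64K ratio: ~%.0f GiB of inodes\n", fs / 65536 * 256 / 1024^3
}'
That works out to roughly 173 GiB versus 11 GiB, and the ~160 GiB difference is in the same ballpark as the jump you saw from 2.52 TiB to 2.68 TiB.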

Also note that QNAP doesn't use the binary prefixes mentioned earlier, so never trust the accuracy of any byte numbers shown in the QTS UI. ;)

stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

Thanks, you got me searching in the right direction.

Regarding LVM, I was able to get the usage and found out that it basically uses:
74.54 GiB on my 8 TB drive
27.90 GiB on my 3 TB drive
18.64 GiB on my 2 TB RAID 1

Code: Select all

[~] # pvscan
  PV /dev/md3   VG vg289           lvm2 [7.27 TiB / 0    free]
  PV /dev/md2   VG vg288           lvm2 [2.72 TiB / 0    free]
  PV /dev/md1   VG vg290           lvm2 [1.81 TiB / 0    free]
  Total: 3 [11.80 TiB] / in use: 3 [11.80 TiB] / in no VG: 0 [0   ]

[~] # lvscan
  inactive          '/dev/vg289/lv546' [74.54 GiB] inherit
  ACTIVE            '/dev/vg289/lv3' [7.20 TiB] inherit
  inactive          '/dev/vg288/lv545' [27.90 GiB] inherit
  ACTIVE            '/dev/vg288/lv2' [2.69 TiB] inherit
  inactive          '/dev/vg290/lv544' [18.64 GiB] inherit
  ACTIVE            '/dev/vg290/lv1' [1.79 TiB] inherit
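(In each case that inactive logical volume is almost exactly 1% of the volume group: 74.54 GiB of 7.27 TiB, 27.90 GiB of 2.72 TiB and 18.64 GiB of 1.81 TiB.)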
And if it's only that, I'm just fine with it.

That means:
7.20 / 7.27 TiB available capacity on the 8 TB drive
2.69 / 2.72 TiB available capacity on the 3 TB drive
1.79 / 1.81 TiB available capacity on the 2 TB RAID 1

For me, the issue lies in the new way QTS mounts the drives.
If I look at what's mounted on my old TS-410, I see both the 1.5 TB RAID 1 and the 2 TB single drive reflecting the real capacity in TiB of each.

Code: Select all

Filesystem                Size      Used Available Use% Mounted on
/dev/sda4               371.0M    355.3M     15.7M  96% /mnt/ext
/dev/md9                509.5M    181.2M    328.2M  36% /mnt/HDA_ROOT
/dev/sdd3                 1.8T      1.6T    168.2G  91% /share/HDD_DATA
/dev/md0                  1.3T      1.1T    255.6G  81% /share/MD0_DATA
tmpfs
When I do the same on the new TS-431P3, the missing capacity shows up on cachedev1, 2 and 3.
Why are they mounted from /dev/mapper instead of /dev/sdXX?

Code: Select all

Filesystem                Size      Used Available Use% Mounted on
none                    420.0M    382.1M     37.9M  91% /
devtmpfs               1004.5M     64.0k   1004.5M   0% /dev
tmpfs                    64.0M      2.8M     61.2M   4% /tmp
tmpfs                  1015.3M      1.1M   1014.2M   0% /dev/shm
tmpfs                    16.0M         0     16.0M   0% /share
tmpfs                    16.0M         0     16.0M   0% /mnt/snapshot/export
/dev/md9                493.5M    129.5M    353.6M  27% /mnt/HDA_ROOT
cgroup_root            1015.3M         0   1015.3M   0% /sys/fs/cgroup
/dev/mapper/cachedev1     1.7T     16.2G      1.7T   1% /share/CACHEDEV1_DATA
/dev/mapper/cachedev2     2.5T     88.0M      2.5T   0% /share/CACHEDEV2_DATA
/dev/mapper/cachedev3     6.7T      4.4T      2.4T  65% /share/CACHEDEV3_DATA
/dev/md13               417.0M    297.6M    119.4M  71% /mnt/ext
tmpfs                    48.0M    480.0k     47.5M   1% /share/CACHEDEV1_DATA/.samba/lock/msg.lock
tmpfs                    16.0M         0     16.0M   0% /mnt/ext/opt/samba/private/msg.sock
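Comparing with the lvscan output above, the gap between each logical volume and its mounted filesystem (7.20 TiB vs 6.7 TiB, 2.69 TiB vs 2.5 TiB, 1.79 TiB vs 1.7 TiB) is presumably the per-filesystem overhead OneCD described.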
Last edited by stikoz on Thu Apr 08, 2021 11:33 pm, edited 1 time in total.
dolbyman
Guru
Posts: 35273
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Disk capacity after volume creation

Post by dolbyman »

stikoz wrote: Thu Apr 08, 2021 10:53 pm When I do the same on the new TS-431P3, the missing capacity shows up on cachedev1, 2 and 3.
Why are they mounted from /dev/mapper instead of /dev/sdXX?
Because the old NAS is a legacy CAT1 NAS that does not handle storage pools, just static legacy volumes.

The new one handles storage pools (thin/thick volumes) as well as static volumes.
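
If you want to see the stacking for yourself, these should show it (volume group and device names will differ per unit):

Code: Select all

# each cachedevN device-mapper target sits on top of an LVM logical volume,
# which in turn sits on one of the md arrays
dmsetup ls --tree
lvs -a -o lv_name,vg_name,lv_size,devices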
stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

Ok, good to know.

Does that mean the storage pool mechanism requires 500 GiB on a 7.20 TiB partition in order to work properly?
Because I really do not need such a feature.
I first created a storage pool for the 2x2 TB RAID 1 array, thinking it was mandatory for that kind of setup.
Then I figured out that you can just create a static volume and select both disks to get the RAID 1 setup.
So actually all three volumes are static now.
dolbyman
Guru
Posts: 35273
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Disk capacity after volume creation

Post by dolbyman »

The extra LVM layer for the pool might cause such an overhead, yes.

If you don't need the features of storage pools (snapshots, multiple volumes per RAID group, block-based iSCSI LUNs, etc.), then a static volume is just fine.
stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

So I should probably just resign myself to it at this point: 7% of capacity lost on all volumes.
That's quite a hard price to pay... for a feature I don't want.
stikoz
Starting out
Posts: 10
Joined: Thu Aug 15, 2013 9:08 pm

Re: Disk capacity after volume creation

Post by stikoz »

OneCD wrote: Thu Apr 08, 2021 6:36 am It's more efficient to combine all drives into a single array with a single filesystem (best done when all drives are the same size). With different-sized drives there's not much that can be done - you'll have to accept less efficient storage when using a non-optimal setup.
I don't really agree with that statement, because from the investigation I made:
[+] primary partitions are created on each disk whatever the setup is (around 10 GiB)
[+] LVM partitions are basically a percentage of each primary partition's size (1%)
[+] virtual disk mapping overhead is also a percentage of each volume created (around 7%)*
So with an identical number of disks, the capacity lost will be the same for a 2+6 TiB JBOD, a 2x4 TiB RAID 0 or 5 TiB & 3 TiB singles.

* using 4K blocks to format (larger blocks tend to decrease the overhead capacity required)
OneCD
Guru
Posts: 12155
Joined: Sun Aug 21, 2016 10:48 am
Location: "... there, behind that sofa!"

Re: Disk capacity after volume creation

Post by OneCD »

stikoz wrote: Fri Apr 09, 2021 9:29 pm So with an identical number of disks, the capacity lost will be the same for a 2+6 TiB JBOD, a 2x4 TiB RAID 0 or 5 TiB & 3 TiB singles.
Although you haven't tried a single user-data filesystem on top of a single array, spread across multiple drives of the same capacity. My point is that this configuration is more efficient than the configs you've listed. ;)

For example: "single" drives have 5 partitions instead of 4.

