Tools for getting a reasonable idea of disk usage

Currently the only information about “disk size” is… here.
https://docs.nethserver.org/projects/ns8/en/latest/system_requirements.html

20GB disk
The above requirements must be increased to match users, applications, and load needs.

A sysadmin might know the size of the file shares, of some databases, and of the mail folders/files. But how can they guess the space requirements of… a cluster node?

  • no base OS size estimation is available for the install (nor a place to look it up)
  • no space requirements for the base tools (I mean… how much space will be occupied after the base cluster tools are installed?)
  • no “base space occupation” figure for any kind of empty module/container

Currently… the “20 GB disk” requirement is “true” in the same way that an 8 GB eMMC storage module is enough to install some Linux distro: it will install.
But after the install… how much disk space will be left?

Because (fun fact) containers have a bigger storage overhead than plain system services, any previous experience with NS7, when sizes were a close call, might be completely useless. On an old server, 200 GB of SSD is fine with 25% of space left. On NS8 that might shrink to about 2% of free disk space if there are “enough” modules.
And not everyone is virtualizing NS8 with thin provisioning…

I’m sorry I don’t have such estimations, but you are right: disk space requirements seem higher than on NS7.

Disk space used by Podman images can be obtained with

podman system df

The --verbose argument also counts the space of every volume (data).
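For example, the summary and the per-image/per-volume breakdown (these are command fragments to run on a live NS8 host; the actual figures depend on your installation):

```shell
# Summary of space used by images, containers and local volumes:
podman system df

# Detailed breakdown, including the data stored in each volume:
podman system df --verbose
```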

A nice optimization would be to store images under a shared directory. I want to work on it some day…

On the system level, the RHEL documentation suggests[1] using LVM-VDO with containers: it is a block device with data deduplication and compression, and AFAIK it works pretty well.
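A minimal sketch of that setup, assuming a volume group named vg0; the sizes, LV name, and mount point are placeholders for illustration (see the linked chapter for the real procedure):

```shell
# Create a deduplicated/compressed VDO pool backed by 100G of physical
# space, exposing a 300G logical volume named "containers" in vg0.
lvcreate --type vdo --name containers --size 100G --virtualsize 300G vg0

# Format it; -K skips the initial discard pass, as recommended for VDO.
mkfs.xfs -K /dev/vg0/containers

# Mount it where the container storage will live (path is illustrative).
mount /dev/vg0/containers /srv/containers
```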


  1. Chapter 3. Creating a deduplicated and compressed logical volume Red Hat Enterprise Linux 9 | Red Hat Customer Portal ↩︎


I am curious: how would this work, and what would it achieve?

Nice data to present in a status report for any container, isn’t it? In the UI, so a sysadmin doesn’t need to gather all the information from the console to understand how much host disk space each module/container is using.

The team doesn’t have to be sorry; remembering all the bells and whistles is a tough job while starting the whole mumbo jumbo.
However

They do not just seem higher. They are. Any NS7 module was more or less the sum of the installed packages plus some scripts to collect info and deploy the config. Now there are all the dependencies (a DBMS, for starters), plus the network stack, plus the tools needed to run the container infrastructure, Nethesis flavor, and some more.
A disk consumption estimation should IMO be provided for each module, as part of the system requirements or in any other form you consider suitable.
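In the meantime, measuring it per module is doable from the console. A sketch of a hypothetical helper (not part of NS8), assuming NS8 keeps one Unix user home per module with rootless container storage under it; pass the directory that holds those homes (normally /home):

```shell
#!/bin/sh
# module_usage BASE: print "<size>\t<path>" for each module's container
# storage found under BASE/*/.local/share/containers/storage.
# The path layout is an assumption; adjust it to your install.
module_usage() {
    base="$1"
    for d in "$base"/*/.local/share/containers/storage; do
        [ -d "$d" ] || continue
        printf '%s\t%s\n' "$(du -sh "$d" | cut -f1)" "$d"
    done
}

# Example: module_usage /home
```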

Otherwise any adopter will have to run the master node installation three times to reach a stable deployment, and the migration will have to be run at least twice.
Or apply a healthy ×1.5 coefficient to the current NS7 disk consumption as the disk requirement.