Questions and ideas about the NethServer8 upgrade - the show has to go on

Thank you, Yummiweb, for your time and efforts in preparing for the NS7 to NS8 migration. The way NS8 handles additional disks is still under development, and we plan to work on it in the first quarter of 2025 (see the announcement in NethServer project milestone 8.2).

One approach we’re considering is to allow admins to manage additional storage by following these steps:

  1. Mount a disk permanently on some directory, e.g. /mnt/disk1.
  2. Tell NS8 to use it, e.g. by setting a node environment variable like NODE_ADDITIONAL_DISK_MOUNT=/mnt/disk1, together with some additional metadata about disk type, speed, size, etc. The NS8 node then initializes the disk with the proper subdirectories and permissions automatically (see the sketch after this list).
  3. Let NS8 applications decide whether they want to use the additional disk. App developers define the storage needs of their applications, while NS8 identifies the available storage options. Together, they can match the best fit for the storage requirements.
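
A minimal sketch of steps 1 and 2, assuming an XFS disk and a hypothetical /etc/nethserver/storage.env file for the node settings. NS8 does not define such a file today, and all variable names besides the proposed NODE_ADDITIONAL_DISK_MOUNT are illustrative only:

```bash
# 1. Mount the disk permanently: add an fstab entry and mount it.
#    (UUID, filesystem type, and mount point are examples.)
mkdir -p /mnt/disk1
echo 'UUID=xxxx-xxxx /mnt/disk1 xfs defaults 0 0' >> /etc/fstab
mount /mnt/disk1

# 2. Tell NS8 about the disk. The file path and the metadata keys below
#    are hypothetical placeholders for whatever the milestone defines.
cat >> /etc/nethserver/storage.env <<'EOF'
NODE_ADDITIONAL_DISK_MOUNT=/mnt/disk1
NODE_ADDITIONAL_DISK_TYPE=hdd
NODE_ADDITIONAL_DISK_SIZE=4T
EOF
```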

This approach does not aim to enforce predictability of directory layouts, as similar issues with file ownership (e.g., unpredictable uid/gid numbers and subuid/subgid ranges) would still arise.

A similar issue with uid/gid numbers also exists in NS7, and it is one reason why we abandoned the development of Hotsync in NS7. In NS8, the container-based architecture solves it with strong app isolation, at the cost of greater complexity.
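
To see why these numbers are unpredictable across hosts, you can inspect the rootless Podman mapping from a module's Unix user. A quick check, using standard Podman and shadow-utils tooling (run it as whichever module user you want to examine):

```bash
# Host-side subuid/subgid ranges differ from machine to machine, so the
# same in-container uid maps to a different host uid on every node.
grep "^$(id -un):" /etc/subuid /etc/subgid

# Inside the user namespace, container uid 0 maps to the user's own uid,
# and uid 1 maps to the first subuid of the allocated range.
podman unshare cat /proc/self/uid_map
```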

The mount path for a volume should be permanent; no node events should change it unexpectedly.

However, we are still missing a common approach to configuring it; for example, this applies to configurations like MinIO and local node backups. Defining a common behavior for storage management is another goal of the future milestone.

The new app image becomes a replacement for the old one. Volume paths do not change, and neither does the module name (or “ID”, to use your term).

The restore procedure creates a new module instance, with a different ID than the one that ran the backup. Data is restored into the volumes of the new module.

It is similar to restore: a new module instance is created, and data is copied into it. The copy runs between rsync processes inside containers, so filesystem ownership is mapped correctly.
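
As a minimal illustration (not the actual NS8 clone code), the ownership-mapping effect can be reproduced by running rsync inside a container that mounts both volumes; the volume names below are examples:

```bash
# Copy data between two named volumes with rsync running inside a
# container, so file ownership is translated through the container's
# user-namespace mapping rather than copied as raw host uids.
podman run --rm \
  -v source-data:/src:ro \
  -v target-data:/dst \
  docker.io/library/alpine:3 \
  sh -c 'apk add --no-cache rsync && rsync -a /src/ /dst/'
```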

With the approach described above, volume data would land on the “best disk” according to what is available on the node and the app requirements.

Note that mounting /home/appX entirely on a separate disk can lead to problems with Podman, as discussed in other topics. The approach above instead involves bind mounts of individual volumes only.

While available space checks are important, they are separate from the approach described above, though they can coexist effectively.
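
For illustration only, such a check could be as simple as comparing df output on the candidate mount point against the app’s declared requirement (the mount point and size below are made up):

```bash
# Check free bytes on a candidate mount point before assigning a volume
# to it; tr strips the leading whitespace df prints.
avail=$(df --output=avail -B1 /mnt/disk1 | tail -n1 | tr -d ' ')
required=$((10 * 1024 * 1024 * 1024))   # e.g. the app declares 10 GiB
[ "$avail" -ge "$required" ] && echo "disk1 fits" || echo "not enough space"
```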

For experiments with the NS7 migration and volume path assignment, you can remove and re-create Podman volumes before each “Sync” call, with a bind mount to an arbitrary host path. A working example of a podman volume create --opt=device=... invocation is documented here: Backup and restore — NS8 documentation.
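
Following the pattern documented there, a sketch of such a re-creation, run as the module’s Unix user (volume name and host path are examples):

```bash
# Prepare the target directory on the additional disk.
mkdir -p /mnt/disk1/mymodule/mydata

# Re-create the module volume as a bind mount to that host path,
# before the next "Sync" run.
podman volume rm mydata
podman volume create \
  --opt type=none \
  --opt device=/mnt/disk1/mymodule/mydata \
  --opt o=bind \
  mydata
```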

Once a volume is created with the correct options, the application does not modify it for the whole lifetime of the app instance.


In summary, an automatic volume management approach in NS8 would streamline system administration, reducing the need for manual configuration and extensive knowledge of application storage specifics. Configuring each volume’s storage path manually is difficult and error-prone, and it requires deep knowledge of the application implementation. Those details should be known by the app developer, not necessarily by the system administrator, and I’m seeking a solution that simplifies the sysadmin’s life.
