We have introduced a new set of tools and conventions for handling Podman volumes in NS8 modules. These features are currently in testing [1] and will be included in an upcoming release in the next couple of weeks.
This update aims to give module developers better control over data persistence, storage performance, and disk placement policies, while keeping behaviour predictable for Sysadmins deploying your applications.
Below is an overview of the concepts covered in the new Volumes[2] page of the Developer’s Manual.
Why volumes matter
All NS8 modules run in containers, so persistent application data must be stored in Podman volumes. A module can use:
- bind-mounted directories
- bind-mounted single files
- named volumes
For example, the Mail module defines multiple volumes in its dovecot.service unit, mixing named volumes for data and bind mounts for TLS certificates. This pattern is typical for services that combine persistent storage with configuration injected at runtime.
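As a rough sketch of that pattern (the image name, paths, and volume names below are illustrative, not taken from the actual Mail module unit), a service might start its container like this:

```
# Illustrative podman invocation mixing a named volume for persistent data
# with single-file bind mounts for TLS material injected at runtime.
podman run --name dovecot \
    --volume dovecot-data:/var/lib/dovecot \
    --volume ./tls/cert.pem:/etc/dovecot/cert.pem:z \
    --volume ./tls/key.pem:/etc/dovecot/key.pem:z \
    docker.io/dovecot/dovecot:latest
```

Here `dovecot-data` survives container restarts as a named volume, while the TLS files are bind-mounted from the module's state directory.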
The SELinux `:z` flag also plays a role here: developers must ensure proper labeling (`container_file_t`) on files injected from outside the container so the service can access them.
However, the `:z` flag triggers a recursive filesystem scan that can significantly slow down container startup when a volume contains many inodes. Using `:z` is therefore a trade-off between startup performance and automatic SELinux relabeling. Files created inside the container always receive the correct label, whereas files moved or copied in from external locations do not, and may require relabeling.
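When startup time matters, a module can drop `:z` from a mount and relabel injected files explicitly instead. A minimal sketch, assuming the file sits in the module's state directory (the path is illustrative):

```
# Relabel a single injected file so the container can read it, without
# requiring a recursive :z scan of the whole volume at every startup.
chcon -t container_file_t ./state/tls/key.pem
```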
Declaring volumes suitable for additional disks
By default, named volumes land under Podman’s storage directory on the root filesystem.
However, some applications benefit from storing large or slow-growing data on a separate disk.
Developers can signal which volumes are suitable for this by setting the image label:
`org.nethserver.volumes = dovecot-data`
The label accepts a space-separated list of named volumes.
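For example, an image build script based on buildah could set the label like this (the volume names are illustrative):

```
# Build-time declaration: mark two named volumes as candidates for
# placement on an additional disk.
container=$(buildah from scratch)
buildah config --label "org.nethserver.volumes=app-data app-uploads" "${container}"
```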
When such a module is installed, the UI will prompt the Sysadmin to optionally place those volumes on any additional disks configured on the node.[3]
Hints for developers:
- additional disks are assumed larger but slower than the root disk
- list only the volumes that truly benefit from custom placement
This keeps the user experience clean while allowing modules to scale with available storage.
Node-wide volume placement rules
Sysadmins can manage storage policy at the node level through:
- the configuration file `/etc/nethserver/volumes.conf`
- the `volumectl` command
Assignments defined here map application volumes to custom base directories during module creation. They can even cover volume names not listed in `org.nethserver.volumes`.
However, UI-selected settings always take precedence over `volumes.conf`.
volumectl supports listing base paths, assigning volumes for specific apps, limiting assignments to the next installation only, and removing existing rules. This gives administrators fine-grained control over where module data is stored across the cluster.
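The actual command-line syntax is documented in the Volumes page [2]; the sketch below only conveys the kinds of operations listed above, and every subcommand and flag in it is an assumption, not the real interface:

```
# HYPOTHETICAL invocations -- subcommands and flags are assumptions made up
# for illustration only; check the Developer's Manual for the real syntax.
volumectl list                               # show configured base paths
volumectl add mail1 dovecot-data /mnt/data   # assign a volume to a base dir
volumectl remove mail1 dovecot-data          # drop an existing rule
```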
What this means for module developers
If your module stores persistent data, now is a good time to:
- review which of your named volumes may benefit from a larger disk
- add the `org.nethserver.volumes` label
- ensure your bind mounts and data directories are consistently labeled for SELinux; volumes listed in `org.nethserver.volumes` are candidates for dropping the `:z` flag
- test how your module behaves when volumes are redirected to alternative disks (see the check below)
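For the last point, a quick way to verify where a volume actually landed is to ask Podman for its mountpoint (the volume name is illustrative):

```
# Print the host path backing a named volume, to confirm whether it was
# redirected to an additional disk.
podman volume inspect app-data --format '{{ .Mountpoint }}'
```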
This feature is already available in testing. We expect to include it in an official release in the next couple of weeks. The UI volume selection will be released in a future milestone.
Feedback from module developers is especially important at this stage — if you try the new system, please share your impressions, edge cases, or integration questions.
Thanks in advance!