New NS8 Developer Feature: Volume Management (now in testing)

We have introduced a new set of tools and conventions for handling Podman volumes in NS8 modules. These features are currently in testing[1] and will be included in an upcoming release in the next couple of weeks.

This update aims to give module developers better control over data persistence, storage performance, and disk placement policies, while keeping behaviour predictable for Sysadmins deploying your applications.

Below is an overview of the concepts covered in the new Volumes[2] page of the Developer’s Manual.


Why volumes matter

All NS8 modules run in containers, so persistent application data must be stored in Podman volumes. A module can use:

  • bind-mounted directories
  • bind-mounted single files
  • named volumes

For example, the Mail module defines multiple volumes in its dovecot.service unit, mixing named volumes for data and bind mounts for TLS certificates. This pattern is typical for services that combine persistent storage with configuration injected at runtime.
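As a minimal sketch of this pattern (the image name, volume names, and paths are illustrative, not the actual dovecot.service content), the podman run line of a module’s unit might combine both kinds of mounts:

podman run \
  --volume dovecot-data:/var/lib/dovecot \
  --volume ./certificate.pem:/etc/dovecot/certificate.pem:z \
  docker.io/illustrative/mail-image:latest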

The SELinux :z flag also plays a role here: developers must ensure proper labeling (container_file_t) on files injected from outside the container so the service can access them.

However, the :z flag triggers a recursive filesystem scan that can significantly slow down container startup when a volume contains many inodes. Using :z is therefore a trade-off between startup performance and automatic SELinux label validation. Files created inside the container always receive the correct label, whereas files moved or copied in from external locations do not, and may require relabeling.
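For example, to verify the label of a file injected from the host and fix it by hand instead of relying on :z (the path is illustrative):

ls -Z ./certificate.pem
chcon -t container_file_t ./certificate.pem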


Declaring volumes suitable for additional disks

By default, named volumes land under Podman’s storage directory on the root filesystem.
However, some applications benefit from storing large or slow-growing data on a separate disk.

Developers can signal which volumes are suitable for this by setting the image label:

org.nethserver.volumes = dovecot-data

The label accepts a space-separated list of named volumes.
When such a module is installed, the UI prompts the Sysadmin to optionally place those volumes on any additional disks configured on the node.[3]
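In a module’s Containerfile this could look like the following sketch (the second volume name is made up for illustration):

LABEL org.nethserver.volumes="dovecot-data dovecot-index"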

Hints for developers:

  • additional disks are assumed larger but slower than the root disk
  • list only the volumes that truly benefit from custom placement

This keeps the user experience clean while allowing modules to scale with available storage.


Node-wide volume placement rules

Sysadmins can manage storage policy at the node level through:

  • the configuration file /etc/nethserver/volumes.conf
  • the volumectl command

Assignments defined here map application volumes to custom base directories during module creation. They can even cover volume names not listed in org.nethserver.volumes.

However, UI-selected settings always take precedence over volumes.conf.

volumectl supports listing base paths, assigning volumes for specific apps, limiting assignments to the next installation only, and removing existing rules. This gives administrators fine-grained control over where module data is stored across the cluster.
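A hypothetical invocation, pinning an app’s volume to an additional disk (the add-volume subcommand is real, but the argument order shown here is an assumption; consult volumectl --help or the documentation for the authoritative syntax):

volumectl add-volume samba homes /mnt/extradisk

The next time that app is installed on the node, the listed volume would then be created under the chosen base path.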


What this means for module developers

If your module stores persistent data, now is a good time to:

  • review which of your named volumes may benefit from a larger disk
  • add the org.nethserver.volumes label and verify it on your built image, as shown below
  • ensure your bind mounts and data directories are consistently labeled for SELinux; volumes listed in org.nethserver.volumes are good candidates for dropping the :z flag
  • test how your module behaves when volumes are redirected to alternative disks
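A quick way to verify the label on a built image (the image reference is a placeholder):

podman image inspect --format '{{index .Labels "org.nethserver.volumes"}}' ghcr.io/yourorg/yourmodule:latest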

This feature is already available in testing. We expect to include it in an official release in the next couple of weeks. The UI volume selection will be released in a future milestone.

Feedback from module developers is especially important at this stage — if you try the new system, please share your impressions, edge cases, or integration questions.

Thanks in advance!


  1. Core version 3.14.0-dev.2 and more

  2. Volumes | NS8 dev manual

  3. The UI prompt will be released in a future NS8 project milestone

8 Likes

How to explain this to a non-developer? What does it do, and what benefits does it bring?

Does it mean it would now be possible to have or implement shared volumes that cut across separate modules?

We’re working on a long-awaited feature announced here:


No, this feature does not aim at that. If you remind me which NS8 modules or what data you want the containers to share, we can discuss the safest method for your specific case. Let’s discuss it in another thread.

4 Likes

The backend has been released in Core 3.14. Sysadmin documentation is now available here: Disk usage — NS8 documentation

2 Likes

I spent the last weeks trying to map my Nextcloud data directory to a mounted TrueNAS NFS share. I rsync-ed the Nextcloud data directory files to TrueNAS. The mount works fine from NS8: I can see the files from the NS8 console, but I didn’t get it working from the Nextcloud perspective.

I would love to test this new NS8 Core feature on my Nextcloud. But what should I do? The Sysadmin documentation gives an example with Samba shares and homes, but how do I do that with the Nextcloud data directory? And after the “volumectl add-volume” it says: “The next time Samba (Nextcloud) is installed on the local node … its volumes will be created …” Can/should I force this by doing a restore of the Nextcloud app?

3 Likes

I believe this is a very fair question towards Nethesis, since both NS8 and Nextcloud belong to the published/supported ecosystem, so no third party is involved.

In case somebody is going to say “you need a support contract” or something similar: that is not how you build, maintain, and retain community members. Too many have already made other choices and left Nethserver.

I suspect an ownership/permissions issue.

Yes, after adding a volume, it will be restored to the specified disk; see also Volumes | NS8 dev manual for Nextcloud examples.
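For instance, something along these lines (the arguments are illustrative; the sysadmin documentation has the authoritative syntax):

volumectl add-volume nextcloud nextcloud-app-data /mnt/extradisk

followed by a restore of the Nextcloud app, so that its volumes are recreated under the chosen base path.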

1 Like

I’m sorry; since you don’t accept private messages, I’m writing to you here.

Your comment misrepresents the situation. No one is telling users to get a support contract, and framing it that way is unfair. This is the Development category, where not every issue has an immediate answer. What helps is sharing technical details, not implying neglect or decline. Please focus on the actual problem and moderate your comments.

3 Likes

Also, Nextcloud may need reindexing?

1 Like

I pushed backuppc with this feature

1 Like

Fair enough, I stand corrected.

3 Likes

I don’t want to hijack this topic, but here is a report back which may be helpful for others:

I mounted my TrueNAS NFS share onto the NS8 data directory:

mount -t nfs 192.168.x.y:/mnt/tank/nextcloud/data /home/nextcloud1/.local/share/containers/storage/volumes/nextcloud-app-data/_data/data

Set the ownership to nextcloud1’s uid/gid (1012):

chown -R 1012:1012 /home/nextcloud1/.local/share/containers/storage/volumes/nextcloud-app-data/_data/data

Set the permissions of the TrueNAS data folder to match the NS8 original:

chmod 770 /home/nextcloud1/.local/share/containers/storage/volumes/nextcloud-app-data/_data/data

Now I could do a “touch” from the NS8 console on the TrueNAS data directory, but trying to reach Nextcloud in my browser still gave the Nextcloud message: “Error - Your data directory is not writable.”

Checking the NS8 system log showed that SELinux was still blocking write access (to an NFS mount).
To allow this, it suggested running:

setsebool -P virt_use_nfs 1

which I did.
In the NS8 cluster-admin I restarted the Nextcloud instance.
To re-index Nextcloud I ran:

runagent -m nextcloud1 podman exec -u www-data -it nextcloud-app php /var/www/html/occ files:scan --all

Solved: my NS8 Nextcloud data directory now resides on my TrueNAS and works in the web browser and mobile app!

2 Likes

Hi Joost, thank you for sharing your experience! I’m glad you succeeded with the NFS mount for Nextcloud, but I must point out that the commands you provided are not related to the new feature described above.

This is a direct, custom mount under Podman’s internal _data directory, which naturally requires manual ownership and permission adjustments.

The setroubleshoot suggestion you found enables a boolean intended for virtualization, and I’m not sure if it applies to containers or what side effects it may introduce.

I don’t have much experience with NFS mounts either, and I suspect there may be additional caveats when combining them with Podman rootless volumes. How to properly integrate NFS storage with NS8 apps is still an open research topic and really deserves its own thread.

1 Like