Predictable volume mount point

A big request to the developers of NS8 regarding the (future) naming of service instances and their numbering scheme:

For me(!) it is a huge problem in NS8 that the services (containers) are named using a (non-manageable) numbering scheme and are therefore stored in an unmanageable storage path.

  • For example, I have an SMB share (file archive) that is several TB in size and would just take up space on an SSD.

One of the great strengths of Linux-like systems is the ability to mount drives at paths in a system-transparent manner. This is not just a question of the available space for the service data (which could be expanded using an LVM) but also a strategic question, depending on which service data is involved. For example, you might want to store databases on a drive with particularly good access times, while rarely used data (e.g. an archived mailbox) could be stored on a slow drive.
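
To make the idea concrete, here is a minimal sketch of how such a split could look in /etc/fstab; the device names and mount points are purely illustrative assumptions, not anything NS8 prescribes:

```
# Fast NVMe storage for latency-sensitive data (e.g. databases)
/dev/nvme0n1p1  /srv/fast-data     ext4  defaults,noatime  0 2
# Large, slow HDD for rarely used data (e.g. an archived mailbox or file archive)
/dev/sdb1       /srv/archive-data  ext4  defaults          0 2
```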

Keeping system and data areas on different drives was and still is a good recommendation. The current installers of the underlying operating systems (Debian or RedHat clones) also provide for this.

Since NS8 introduced the ascending service or container numbering, this method unfortunately no longer works - it is actually sabotaged - because the numbering (and thus the paths) is neither manageable nor reliably predictable (if a container with a certain number already existed before, it is skipped, and there are also surprises when moving or restoring).

Instead of an automatic increment, couldn’t you alternatively build in an option to assign a certain container “ID” when creating the service or container? So that this also defines the future (or existing) path of the container? I don’t imagine that would be particularly difficult.

Of course, moving a service/container to another node would no longer fit the existing mount, but it would be reliably predictable on the new node.

Unfortunately, at the moment I can’t see many viable tricks for distributing data across different suitable drives - so that’s not a solution.

2 Likes

Interesting points, but help me to understand the problem before going straight to a solution.

Issue: mount a disk on some predictable filesystem path, but you can’t predict the path of a volume used by a container because the Unix user name is generated with an incremental number.

Suppose you can predict (or assign) the Unix user name. The OS still generates an incremental uid:gid.

Furthermore, the container will use an incremental subuid:subgid range for its user namespace. Data is written onto the mounted disk, but the ownership still depends on the OS's incremental uid allocation.

As a consequence, you can't use the disk contents by simply mounting it somewhere else. First you must fix the ownership. Consider that subuid and subgid ranges are about 65000 ids wide.
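
To illustrate the point, this is how one could inspect that allocation on a node; “mail1” is only a hypothetical module user name, and the numbers shown are examples that depend on installation order:

```
id mail1
# e.g. uid=1002(mail1) gid=1002(mail1) ...        <- incremental uid:gid
grep '^mail1:' /etc/subuid /etc/subgid
# e.g. /etc/subuid:mail1:165536:65536             <- incremental 65536-wide range
#      /etc/subgid:mail1:165536:65536
# Files the container writes into a mounted volume are owned by ids from
# that subordinate range, so the disk can't simply be remounted elsewhere
# (or on another node) without fixing ownership first.
```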

I renamed the topic: did I understand correctly?

1 Like

The mount consists of a source and a target. The source is, for example, the drive, represented by the drive ID or another definition.

I am concerned with the target definitions, i.e. the target paths.
This could be, for example:

DOVECOT
/home/mail1/.local/share/containers/storage/volumes/dovecot-data/_data/USERNAME

SAMBA
/home/samba1/.local/share/containers/storage/volumes/shares/_data/SHARENAME

NEXTCLOUD
/home/nextcloud1/.local/share/containers/storage/volumes/nextcloud-app-data/_data/

If “/home/mail1”, “/home/samba1” or “/home/nextcloud1” were precisely predictable (under all circumstances), you could mount the service data on a completely different physical (or virtual) drive, you know?
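
For example, if the instance name (and therefore /home/mail1) were reliably predictable or assignable in advance, the volume could sit on its own device. A minimal sketch, with an assumed LVM volume name:

```
VOLUME=/home/mail1/.local/share/containers/storage/volumes/dovecot-data
mkdir -p "$VOLUME"
mount /dev/mapper/vg_data-lv_mail "$VOLUME"   # dedicated (large/slow) volume for mail data
```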

I did it this way for many years in Nethserver 7 and there were never any problems (as expected). In NS8, this is THE showstopper for me personally, which makes direct migration impossible.

On the surface, this may seem like a rather specific requirement, but in my eyes, Nethserver has been much more than just a simple GUI for low-demand users. It was a mixture of both: you could choose the easy way, but also make special modifications to the services if necessary. There was a lot of information available about the services themselves; “only” the E-Smith system had to be kept in mind and integrated.

A concrete process could (in my case) be:

  • Start the installation of the “service or container app”, select the service name (container name) or its (fixed) ID.

  • Display the container path (practical, but not mandatory)

  • Display the required file/folder permissions that NS8 needs to carry out the next steps without errors (that would be cool)

  • The administrator would now have the option of integrating the mount into the system by hand - wherever the source is (see the sketch after this list).

  • Continue the installation of the “app”
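
A rough sketch of steps 4 and 5, assuming an app instance created with the admin-chosen ID “samba_archive” and assuming the UI had displayed the required owner and mode; every name, device and mode below is an assumption, not current NS8 behaviour:

```
TARGET=/home/samba_archive/.local/share/containers/storage/volumes/shares
mkdir -p "$TARGET"
mount /dev/sdc1 "$TARGET"                    # big, slow archive disk
chown samba_archive:samba_archive "$TARGET"  # owner/mode as displayed by NS8
chmod 0700 "$TARGET"
# ...then continue the app installation in the UI.
```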

Thank you for your quick response,

Regards Yummiweb

Addition:
Yes, an equally unpredictable user name (of the container) would probably also be a problem.

I think I misunderstood your question.

The primary problem is the unpredictability of the container path. Another drive could be mounted in this path - if predictable.

The system then writes the user/group names and permissions into this mount completely transparently - assuming that the system (the app installer?) is allowed to do whatever it wants there.

2 Likes

I see your point.

This also happens in NS7, where it has less impact. It was the main reason why Hotsync didn’t work in NS7, and it caused many backup/restore issues too.

Do you see my point?

1 Like

I understand that every container needs its own user:group. I don’t see this as a problem, as this combination can also be predefined.

Don’t get me wrong, I’m not talking about subsequent redefinition (although that would be nice), I’m talking about predictability or prior definition.

As soon as I can see (or predict) that, for example, a mail service is called mail_desiredname or mail_desiredid, I can mount a drive at this path - as long as the rest of the path already exists. The installer then sets its user:group definitions as it needs - as long as the permissions of the folder I created allow this (so I would need to know what those should be).

Under Nethserver 7, for example, I liked to mount entire areas on other drives, e.g. /var/lib/nethserver/ibay. In principle, this would also work with NS8 if, for example, SMB containers were located under
“/home/APP/ID$/” instead of under “/home/app1/”.
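
Under NS7 that was a single, stable fstab entry (the device name is illustrative):

```
/dev/sdb1  /var/lib/nethserver/ibay  ext4  defaults  0 2
```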

But under NS8 this is neither possible in general nor specifically - and that’s not very nice.

1 Like

Hi @yummiweb

Not quite true…

All allowed / supported base OSes allow, just like NS7, remote mounting via NFS - of almost anything you want and can.

As such it would be possible to mount ANY used path on some other storage, be it local or remote (this may take some trickery, as paths in use while NS8 is running can’t easily be changed if apps are using them). Permissions need to be correct, as with any UN*X or Linux mount.
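
For example, an NFS mount over one of the volume paths mentioned above; server, export and target path are purely illustrative, and the app should be stopped while the mount is put in place:

```
mount -t nfs nas.example.lan:/export/nextcloud-data \
  /home/nextcloud1/.local/share/containers/storage/volumes/nextcloud-app-data
```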

But I fully agree with you that the storage situation and the random naming / uid issues make for a really BIG PITA…
Disaster recovery can easily become a headache or stop working entirely - even when using VMs… :frowning:

I do also understand the constraints of containerization ( @davidep ), yet I also see that large-scale container systems do work. Tools like Kubernetes can provide highly scalable environments, including storage… This all happens within seconds: need 100’000 MariaDB servers and storage for them? No problem… It CAN also work for NS8!

Rome wasn’t built in a day, as the saying goes. Any visitor to Rome can confirm: they are still building…

I do know NS8 is still building; NS8.2 is in the works, maybe even NS8.3 already… :slight_smile:

My 2 glowing pieces of coal
Andy
(aka Devils Advocate here!)

1 Like

@Andy_Wismer @davidep

I still need to make clear that the first version of my automatically translated text contained the wording " - and that is a real shame.", which was an extraordinarily unfortunate choice. Unfortunately, I only noticed this just after sending it and then edited it directly into the less strong wording “and that’s not very nice” - but by then it was already out in the world. I will make an effort to avoid such things in the future.

“I do know NS8 is still building; NS8.2 is in the works, maybe even NS8.3 already… :slight_smile:”

That’s right, but you can only do better if you become aware of what can be improved.

I do not think that the composition of the paths was defined this way solely due to time constraints; I could rather imagine that the scenario I described was simply not considered, due to time constraints.

I personally think it is important, precisely because the overall system should be able to play to its full strength - and that is why I have now spoken up again.

Since a short-term implementation can probably not be expected - and, due to the implications for existing NS8 installations, may never come - I would be grateful for any further ideas, e.g. how I can “fix” the current name in such a way that it no longer changes.

I also have no idea about the usual upgrade process of the containers, i.e. whether during an upgrade these containers or their data areas are, for example, created with alternative names and subsequently moved correctly, or the like. The mount could suddenly appear empty because the mount target is no longer identical. Or a move carried out as part of an upgrade could suddenly become a copy, resulting in space problems or similar.

Does anything happen to the container path, or within the container path (in the data path), during a container upgrade, or is an upgrade carried out directly in the existing paths?