Important questions about instance numbering and disk mapping under NS8

Hello dear community,

Could someone please explain the concept behind the numbering of the “service” instances? Now that NS8 has been released, this behaviour should be settled, right?

The first instance gets number 1 (who would have thought), the next gets 2, and so on. But when an instance is deleted, its number no longer seems to be assigned to later instances. During various tests I tried different things and wondered why that is and whether it will change. What is the reasoning behind it, and how can the behavior be predicted?
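For illustration, here is roughly how I look at the mapping on a node (only a sketch, assuming the usual NS8 layout where every module instance runs as its own Unix user under /home; the instance names are made up):

    # Each module instance appears as its own user/home directory,
    # and the instance number is part of that name:
    ls /home
    # e.g.  mail1  samba1  samba2  traefik1

    # The actual data sits much deeper, in that user's rootless Podman
    # storage (path assumed, see also my question further below):
    ls /home/samba1/.local/share/containers/storage/volumes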

How does this work in combination with moves between nodes? At first I assumed that the number is kept free in case you want to restore the instance from a backup later. Does the restored instance then get “its” former number back, or a new one?

Can this number (the path) be changed (again) afterwards?

What happens when an instance is moved from one node to another? How is the numbering handled so that there are no conflicts? Or is the numbering always carried out cluster-wide across nodes? And what if additional nodes are added to the cluster later (and is that even possible)?

Why am I asking all this? At first it was more of a cosmetic issue; I simply prefer structures to stay the way I am used to. Then I noticed that this numbering also affects the paths, oh dear…

Specifically, the numbering concept poses the following problems for me:

In NethServer 7 (no, please don’t answer that this is a thing of the past) I mounted separate drives on the respective data folders, especially those that can grow very large, such as “/var/lib/nethserver/ibay”, “/var/lib/nethserver/home”, and also “/var/lib/nethserver/vmail”.

As a rule I put at least “/var/lib/nethserver” on a separate drive. (Note to VM enthusiasts: yes, also for VMs with virtual drives; there are good reasons to do it that way.)
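For reference, this is what that looks like on NS7: a plain /etc/fstab entry pointing a dedicated disk at the well-known data path (device names and filesystem are only examples):

    # /etc/fstab on NethServer 7 - large data trees on their own disks
    /dev/vdb1  /var/lib/nethserver        ext4  defaults  0 0
    # or more fine-grained, e.g. only the mail store:
    /dev/vdc1  /var/lib/nethserver/vmail  ext4  defaults  0 0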

But with NS8 I see problems, because:

  1. A reliable mount will not work if the path is not reliably the same, and there seems to be a risk that the numbers may change during an instance move(?).

  2. I’m not sure at what point before or during the creation of a new instance I can safely prepare the corresponding mount point* without disrupting the installation procedure. There is no “pause switch” during installation and no “start/stop switch” in operation. So I would at least have to know in advance which number the instance I’m about to create will get, otherwise I don’t know the mount point and cannot prepare it. And what about the annoying lost+found folders that end up (somewhere) in the container path? (See the sketch after this list.)

  • Which folders inside the container’s tree would it be advisable to mount on? If you want to create the mount point before installing the container instance, you only have the base path, e.g. “/home/samba1”, while what you actually need is the much deeper “Volumes” path.
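To make point 2 concrete, this is roughly what I would like to be able to do before installing a new instance, assuming it ends up being called samba1 and that its data lives under the rootless Podman “volumes” path mentioned above (both are assumptions on my part):

    # Prepare a dedicated disk and mount it where the new instance is
    # expected to live, *before* the installation creates the instance:
    mkfs.ext4 /dev/vdd1
    mkdir -p /home/samba1
    mount /dev/vdd1 /home/samba1

    # Problems: I can only guess that the next instance really gets the
    # number 1, and the filesystem's lost+found directory now sits right
    # inside the future instance's home.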

But what about updates (or upgrades) of the containers? How does this work in the background? Is the update always applied in place to the existing paths, or are new paths created for the update and the data then moved back to the existing ones? Is even the “Volumes” folder moved aside, to keep it out of the line of fire during the update?

What is the recommended method in the NS8 concept to separate system and user data? I imagine mounting a drive on /home would be unproblematic, but in doing so you would separate not only the user data but also essential parts of the “services”, i.e. the instances themselves.
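If mounting /home really is the way to go, it would boil down to a single entry like this in /etc/fstab (device name only an example), with the caveat just mentioned that the instances’ homes and volumes then share the disk with the user data:

    # /etc/fstab on an NS8 node - everything under /home on one disk,
    # i.e. user data *and* all module instance homes/volumes
    /dev/vdb1  /home  ext4  defaults  0 0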

I would be happy to get some clarification on this. Without a suitable strategy I cannot, for example, practically migrate my home server (which would be the first one) to NS8. Several large drives are mapped into it, and it would be very impractical to provide a single, huge drive for the migration.

Kind regards
Yummiweb


Similar problem here, so I would be interested in solutions to these questions as well.
