Dear NethServer community, dear developers
In preparation for an upgrade to NethServer 8, I have spent months working through the many aspects of the new system that could become obstacles. NethServer 8 has developed excellently - no, you have developed it excellently - so that little now stands in the way of an upgrade.
Unfortunately, one hurdle remains for me: reliably integrating purpose-optimized data storage. I would like to discuss this once more in order to find a viable path for migration or continued operation.
I am asking for help in understanding several behaviors of NethServer 8 - both to work out solutions for myself and possibly as input for further NethServer 8 development.
Why purpose-optimized data storage? This is not only about the sheer size of each volume, which could also be addressed with a growable file system. Rather, it is about matching the properties of the storage media to the workload: e.g. hard disks (or even tape drives) for archives and rarely used data, SSDs for e-mails and files, and particularly fast SSDs (or even RAM) for databases.
This - in my eyes not so exotic - requirement was fulfilled in NethServer 7. You could mount your own storage into almost any path, often already during installation. And even afterwards, everything remained freely adjustable. Once set up, this worked reliably and transparently; at most you had to think about the type of mount if the presence of a “lost+found” directory irritated some services.
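For illustration, under NethServer 7 such a layout could be achieved with ordinary /etc/fstab entries; the device name and the SSD paths below are assumptions for illustration, with the NS7 mail spool path shown as an example target:

```
# SSD-backed volume for the mail spool (example device and paths)
/dev/sdb1        /srv/ssd                    ext4  defaults  0  2
# bind the SSD volume into the path the service actually uses
/srv/ssd/vmail   /var/lib/nethserver/vmail   none  bind      0  0
```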
So why do I expect difficulties in NethServer 8?
Reliable mounts need reliable (predictable) paths: no mount without a target path, and no reliable mount without a reliable target path. And this is where the sleepless nights begin, because in NethServer 8 the structure of the users (containers) is determined not by the admin but solely by NethServer 8 - apparently for good reasons.
What at first looked to me like an accidental dead end in the development turned out to be a restriction that probably lies in the user logic of container operation itself. Each container conveniently runs in its own user context (including the associated permissions), which is why placing it in the usual /home structure seems only natural. Up to this point there is no problem: even within /home, everything can be mounted as freely and as deeply as you like. You only have to know reliably what will be created in /home, and under which names (user names, IDs).
However, NethServer 8 manages this itself - and very strictly - and its way of working is so far only partially predictable for me, and unfortunately not controllable at all.
The reason for this strict management seems to be that over the entire lifetime of the cluster there must never be a situation in which a user name (or user ID) suddenly occurs more than once. This must be reliably ruled out, even when containers are moved between nodes or restored from a backup.
Accordingly, NethServer 8 itself decides which (new) user/ID it assigns to a container. And apparently it keeps a database or list (or similar) of IDs that are already in use or “burned”, and possibly uses other checks (path tests?) that may make it skip to the next iteration of the ID. Have I understood things correctly up to this point?
The behavior I have observed seems to indicate that an ID, once assigned, stays assigned. That would at least mean that after the assignment (and the creation of the folders) you can still set up your mounts afterwards.
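If that assumption holds, the ID and home path that were actually assigned can at least be read back after module creation with standard tools (the user name “mail1” is only an assumed example; NethServer 8 chooses the real one itself):

```shell
# Read back the UID and home directory assigned to a module user.
# "mail1" is a hypothetical user name; NS8 chooses the real one itself.
module_uid()  { getent passwd "$1" | cut -d: -f3; }
module_home() { getent passwd "$1" | cut -d: -f6; }

m="mail1"
if getent passwd "$m" >/dev/null; then
    echo "user $m: uid $(module_uid "$m"), home $(module_home "$m")"
else
    echo "user $m does not exist (yet)" >&2
fi
```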
For this I have the following considerations/questions:
Normal operation:
How much can you rely on the fact that a once-assigned ID (and thus the path) remains unchanged?
Is it to be expected that an assigned ID (and thus the path) could change unexpectedly, without any changes to the configuration?
If the ID no longer changes in “normal operation”, this would at least give me a limited path forward.
But of course special cases would still have to be considered:
Under what circumstances is it conceivable at all that an assigned ID (and thus the path) would ever change?
Under what conditions is an ID actually “skipped” (during creation, restore or move)?
Container updates:
What happens technically during a container update?
The ID presumably remains, but do paths within the container change (possibly temporarily)?
Which (drive) paths remain reliably stable across updates, and which may be temporarily renamed/moved?
An unluckily chosen mount path could otherwise break the update process (or vice versa).
Living without container updates would of course not be an option.
Restoration:
What happens to the ID when a container is restored from a backup?
Is a new ID assigned, or does the existing ID continue to be used?
How should one picture the restore process?
Is the existing user path (my mount) reused directly, so that the mount keeps working during the restore?
Or does the restore first delete the corresponding user folder (and/or user) and then recreate it?
In that case the mount would be broken, and all recovered files would be restored onto the wrong storage!
I could live without the integrated restore, but it would not be nice.
Moving within the cluster (dispensable for me):
What happens to the ID when a container is moved within the cluster?
The special case “move” seems problematic to me with regard to mounts:
The target node does not yet know the new path, so no prepared mounts exist there. The moved container would initially land on the main drive, or at best on a drive mapped to /home.
Of course, you could then move the data back onto suitable storage afterwards.
The “standard storage” would at least have to be large enough to hold the container data temporarily,
which it may or may not be, depending on the container size.
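Whether a moved container has actually landed on dedicated storage or merely on the parent filesystem can at least be checked afterwards with standard tools (/home/mail1 is again only an assumed module path):

```shell
# Check whether a module path is its own mount (i.e. dedicated storage)
# or just a directory on the parent filesystem. A path is a separate
# mount when its device number differs from that of its parent.
# /home/mail1 is a hypothetical module path; substitute the real one.
is_separate_mount() {
    [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

path="/home/mail1"
if [ -d "$path" ] && is_separate_mount "$path"; then
    echo "$path is a dedicated mount"
else
    echo "$path is missing or (still) on the parent filesystem"
fi
```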
Upgrade from NethServer 7 to NethServer 8:
The above also creates another problem for me during the upgrade.
During the migration, NethServer 8 creates the new containers (IDs and paths) and then copies the data over.
All the migrated data would thus initially land on the wrong (too small) drive, so the migration fails.
(I have not tried this yet because of the other fundamental questions.)
One proposal in the forum was to first move parts of the data out of the source path manually and to copy them back manually later.
That would be enough for the upgrade itself.
But as described, doesn't the problem exist not only during the upgrade, but also when moving containers later within the cluster?
So why not solve it in principle?
My first idea was simply to create the mounts and paths in advance, before the update, move or restore is started.
But for that:
- the IDs to be assigned would have to be reliably predictable - which I cannot do;
- the self-prepared path must not cause the predicted ID to be skipped;
- this could be solved, if necessary, by displaying “your next ID for service XYZ”;
- but that would mean adjusting the checking procedure, which probably works the way it does for a reason.
Instead, I have the following idea, which might be easy to implement:
The processes of “container creation” and “data migration” should be separated in such a way that one can (if necessary) pause between them.
(Hopefully there is a suitable moment in the scripts.)
After “container creation” - and with it the ID/path - the admin would then have the opportunity to integrate his own drives.
Then the “data migration” process could continue.
The same could be done when moving a container:
first the “container creation”, then a pause for integrating the drives, then the “data move”.
Should a restore also require deleting existing paths or creating new IDs/paths,
a corresponding “pause” in the restore process would likewise be a way to reach the goal.
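Such a pause would not even need to be interactive; a migration or move script could simply block at the right point until the admin has the prepared storage mounted. A minimal sketch of the idea in plain shell (this is an illustration only, not an existing NethServer 8 feature, and the module path is assumed):

```shell
# Sketch of the proposed "break option": block between container creation
# and data migration until the admin has mounted the prepared storage.
wait_for_mount() {
    # returns immediately once $1 is already a mountpoint
    until mountpoint -q "$1"; do
        echo "waiting for $1 to be mounted ..." >&2
        sleep 5
    done
}

# 1. container creation would happen here
# wait_for_mount "/home/mail1"   # hypothetical module path; blocks until mounted
# 2. data migration would continue here
```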
I could imagine that such a “pause option” would be a sensible extension for NethServer 8 with regard to the scenarios I have described.
It would be fantastic to be able to create the mounts directly in the GUI (similar to RAID and NFS in NethServer 7),
but I don't want to overdo it …
In the meantime, it would help me to get competent answers to the questions above,
or even a hint as to where I could insert a pause into the migration script.
I would very much like to move to NethServer 8 and would be happy about any feedback.
Greetings Yummiweb