I am familiar (not skilled, not expert) with type 1 hypervisors/virtualizers. During the last decade I tinkered enough with ESXi to find that environment familiar, at a basic level.
However, I’m a total noob with container platforms like Kubernetes, Docker, Podman, and so on (I'd loosely file them next to Type-2 / hosted hypervisors, though I know the analogy is imperfect). The “capsule” around the app allows shrinking the OS down to just what the software needs to run.
So… I don’t have the terminology “equivalents” for this kind of environment.
I am familiar with host, guest, vswitch, and so on.
I’m aware of the concept of an “aqua network”, the virtualized network between the container and… what? Still the host? Or the dock/pier/whatever?
Probably if I spent enough time on the NS8, Podman, and Docker documentation I could find most of the answers. But… a short glossary with some diagrams might help community members visualize the roots of NS8 and correctly name every part of the “building” I call a “container orchestrator”, because that’s shorter and less confusing than “application hosting platform” (three words to say OS). And for anyone looking for a more specific definition… “web application hosting system” is four words for XAMPP, which NS8 currently doesn’t look like.
Outside the goal of this topic: I think the Assumptions section should now be reworked a bit for people who are willing to develop modules for NS8.
The glossary section is functional for NS8 current design, however.
Is “node” the standardized name for the container-like environment that the host makes available to modules/instances/containers?
“Instance” is also the name for a set of features of a database server. Is this term currently used for containers in general, or only for NS8/Podman ones?
The definitions of leader and worker are understandable, but I know that a leader node also runs instances, and that’s not reported in the glossary.
What is a TLS Terminator? A TLS terminator proxy? How does this TLS Terminator “simplify module configuration”?
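For reference, here is my mental model of a TLS terminator, sketched as a generic reverse-proxy config (nginx syntax; the hostname, port, and certificate paths are made up for illustration, not taken from NS8, which I believe ships Traefik in this role):

```nginx
# The terminator owns the certificates and speaks HTTPS to clients.
server {
    listen 443 ssl;
    server_name app.example.org;             # hypothetical module hostname

    # made-up paths, just to show where the certs live
    ssl_certificate     /etc/ssl/certs/app.example.org.crt;
    ssl_certificate_key /etc/ssl/private/app.example.org.key;

    location / {
        # The module behind it listens on plain HTTP and never
        # touches certificates -- that would be the "simplification".
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

If that’s roughly what the glossary means, then “simplify module configuration” would mean modules only serve plain HTTP and delegate certificates and renewal to the terminator, is that right?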
And these are OT for the “container dictionary”, but might be useful for the adopters.
Assume that the leader node fails.
How long will the worker nodes keep working without a connection to the leader node?
Assume that the failed leader node also runs instances.
Is there no way for any of the worker nodes to be promoted to leader?
Is there no way to use any other node of the cluster to take over the instances that were running on the leader node?
Without the leader node operative, do the worker nodes keep running backups as designed? (I’m assuming “yes, unless the leader node is also the backup storage endpoint”.)
Without the leader node operative, is it possible to initiate any restore procedure in any way? (Assuming the leader node is not the backup storage endpoint.)