NS8 - public FQDN required?

Interesting…

Even if running on a single node, the system will set up a Virtual Private Network (VPN) for the cluster. With the VPN in place, you will be able to add more nodes in the future.

Please enter the following VPN details:

VPN endpoint address: it’s the public name of the leader node; it should be a valid Fully Qualified Domain Name (FQDN) with a public DNS record

But a public name for the leader node, with a public DNS record… is not listed among the system requirements.

So if someone installs NS8 without a working FQDN published in public DNS servers… it won’t work?

I think this is true even for NethServer 7. For my laboratory at home I run the DNS server on my firewall, and I redirect to the local IP with a wildcard:

ejabberd.rocky9-pve.org redirects to xxx.xx.xxx.xxx (local IP)

Nowadays you cannot do much without a domain record, unless you run your own DNS server or maintain your /etc/hosts.
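For instance, with dnsmasq that kind of wildcard redirect is a single directive (a hypothetical sketch reusing my lab domain and a private IP):

    # dnsmasq.conf: resolve the domain and every subdomain to one local IP
    address=/rocky9-pve.org/192.168.12.110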

Sorry if I misunderstood the issue, @pike.

I don’t think that’s the case.

I can create a NethServer installation with just a hostname; a public or published name is not required to install and use some of the modules/services.
Indeed, it is necessary if anyone is going to use most services over the public network (Mattermost, email server, groupware, Nextcloud, and so on), but it’s not necessary for OpenVPN, IPsec, or SMB file sharing.

If a published/public hostname is required for NS8, IMVHO this should be part of the system requirements.

I think it’d be more accurate to say, rather than “valid FQDN with public DNS record”, “address that will be available to any other expected member nodes.” If all the nodes will be on a private network, then private IP addresses, or hostnames that will only resolve locally, are fine. But if not, it should be a public FQDN.

What I don’t know is what happens if there’s only one node–does it matter that the “VPN endpoint address” be valid in that case? I’d expect not.

You can create an NS8 without a trusted domain or an existing domain name (just use a fake one, like I do in my laboratory); it’s just that your users won’t find you.

I’m surely missing the point, or it’s probably obvious as well :smiley:

@stephdl, currently NS8 is not interesting to me.
However, the wording of that part of the site might be misleading. Or at least, it might be to me. And if the lack of an available public FQDN might lead to NS8 not working, maybe that should be written into the system requirements.

Here’s how I think it works–@stephdl or others, please correct me if I’m wrong here: When you create a cluster, NS8 asks for the VPN endpoint address. This address is encoded into the “join information.” Therefore, for another node to join the cluster (at least through the GUI), that VPN endpoint address must be something that prospective other nodes can reach–that can be an IP address (local or public), a local-only hostname, or a public FQDN, as long as any other nodes can reach it. AFAIK, that’s the only use for that piece of information.

If you’re only going to use a single node, that field can contain anything–its contents don’t matter. I agree that an edit to clarify the docs would be good, but I think it’s this that ought to be edited:

I think this is @pike’s point: if you need to have a public FQDN for your NS8 host (as the section in the docs I quoted above, and he quoted in an earlier post, says), then that should be listed among the system requirements. I don’t know that I’d necessarily agree; “system requirements” is ordinarily taken to refer to hardware and software, not necessarily other configuration settings, but I do see the point.

Otherwise, if a public FQDN isn’t required (which it isn’t), the installation page should be clarified.

For that point, you simply need the two servers to be able to reach each other by their domain names.
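A quick way to check that (a hypothetical sketch reusing the host names from this thread), run on each node; both name resolution and reachability must succeed:

    # verify the peer's name resolves and answers
    getent hosts rocky9-pve2.org
    ping -c 1 rocky9-pve2.org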

The NS7 I have acts as my firewall on the local network; this is the host declaration in my hosts database:

rocky9-pve=local
    Description=rocky on pve
    IpAddress=192.168.12.110
    MacAddress=56:d4:0c:08:d5:65
rocky9-pve.org=remote
    Description=
    IpAddress=192.168.12.110
    WildcardMode=enabled
rocky9-pve2=local
    Description=rocky on pve
    IpAddress=192.168.12.111
    MacAddress=12:e0:98:36:d4:9e
rocky9-pve2.org=remote
    Description=
    IpAddress=192.168.12.111
    WildcardMode=enabled

Since my cluster gets its IPs and DNS from my firewall, the nodes know each other. Moreover, I use a wildcard, so anything like *.rocky9-pve.org is known by everybody on my network; I don’t bother maintaining an /etc/hosts across my network.

Another way would be to fill in, on each server of the cluster, an /etc/hosts with all the domains and IPs where the servers can be found, but that’s a pain in the ass :smiley:
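If you went that route, each node would carry the same entries (a hypothetical sketch reusing the addresses above):

    # /etc/hosts on every cluster node
    192.168.12.110  rocky9-pve.org
    192.168.12.111  rocky9-pve2.org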

Maybe, maybe not.
Today I install a single node, which might grow and… become part (or master) of a multi-node cluster. Same subnet? Same domain? Different location? I don’t know.
If this parameter is set once and cannot be changed later, it could be a bad idea to write “anything works”…

Creating a server and creating a container/container-farm orchestrator seem to me quite different orders of consideration…

I am sad :smiley:
When something new is rising, I think that is the best time.

I just did it with two servers on my LAN.

Yes, it’s possible to configure a cluster with LAN-only nodes, but there’s still a missing key feature to make it effective: NS8 custom certificate

Custom TLS certificate upload is scheduled for some future release: Trello

TLS certificates today work only if the ACME HTTP challenge requirements are met, i.e. a public IP + DNS record.
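In practice, the HTTP-01 challenge means the certificate authority must be able to fetch a token from the node over plain HTTP on port 80, so you can sanity-check reachability from a host outside your LAN (hypothetical FQDN):

    # the CA requests http://<fqdn>/.well-known/acme-challenge/<token>
    curl -I http://ns8.example.org/.well-known/acme-challenge/test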

As almost any service in NS8 requires a TLS certificate, it quickly becomes a requirement for the whole system. However, when the Samba file server module is released, we’ll have the exception to the rule.

May I repeat that this requirement should be added to the system requirements page?

I will admit that I think this is completely unreasonable!

NS8 is still Alpha!

My 2 cents
Andy

You’re totally right.

If you’re only going to use a single node, that field can contain anything–its contents don’t matter. I agree that an edit to clarify the docs would be good, but I think it’s this that ought to be edited:

I’m not so good with English, but I’m open to suggestions to better explain the sentence!

And that is a reason good enough to spread misleading information?
If a public FQDN is mandatory for some scenarios, IMHO it is far better for the sysadmin to know that from the requirements of the system, so that any test done during this phase, even one with a geographically split cluster scenario (perhaps unadvisable, but possible), will be done conscious of what’s needed.

Maybe you won’t see what I’m trying to explain, Andy, but even my lack of regard for the current status of some parts of the project is not enough to keep me from providing conscious feedback, even if it’s perceived as unpleasant.

It’s an entirely good enough reason for the docs being incomplete and incorrect, and it’s hardly “spread[ing] misleading information” to fail to mention among the system requirements that a system intended to be remotely accessed should have a public domain name. Is it also “spread[ing] misleading information” to fail to mention that you need to have an Internet connection? A network connection at all? A connection to mains power? Where does it end, and at what point can they reasonably figure that anyone who would admin such a system already knows this?

There are two distinct questions here:

  • Must a NS8 system have a public FQDN?
  • Depending on the answer to the first, how (if at all) should the docs be changed?

On the first question, it’s clear that a public FQDN isn’t absolutely necessary–it’s entirely possible to install and use NS8 with a local domain name, a local IP address, a public IP address, or even a completely meaningless value, for the “VPN endpoint address.” But what you enter there will affect your ability to connect other systems in a cluster–if you enter something completely meaningless, you won’t be able to join other nodes (at least through the GUI). If you enter a local IP address, you’ll only be able to join nodes that are on the same network. And so forth. I think this could stand to be better documented than it is.

But does it need to be listed among the system requirements? I don’t think so. If it isn’t obvious to you that you (effectively) need a public FQDN to access a server from a remote network, IMO you have no business even playing with any flavor of Nethserver–this is, again IMO, on the same level as the requirement for mains power and a working network connection. And it certainly isn’t “misleading information” to not mention such obvious requirements, particularly for software that’s very much in a pre-release status, and even more so when they aren’t required for all installations.

I’d suggest something like this:

VPN endpoint address: This is the address of the leader node of your cluster, and must be reachable by any other nodes you may add to your cluster. Local network names and IP addresses will prevent you from adding systems to your cluster which aren’t on the same network as the leader node.

It could probably still be tweaked a bit, but I think it conveys the idea that the VPN endpoint address only needs to be reachable from other cluster nodes.

If the node is behind NAT, the VPN endpoint address (host+port) can also resolve to an IP that is not assigned to any interface of the node itself.

In this scenario, the public DNS name and the host name can be completely different.
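As a sketch of that NAT scenario (hypothetical names and addresses; the cluster VPN is WireGuard, listening on UDP port 55820 by default, if I recall correctly):

    # public DNS:  vpn.example.org -> 203.0.113.10   (router's public IP)
    # router NAT:  udp/55820       -> 192.168.1.20   (leader node's LAN IP)
    # the node's own hostname can be something else entirely, e.g. leader.lan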

Thank you, documentation updated!
