Set FQDN from node dialog

So, I have two nodes, a leader and a worker. The leader is named ns8-1.example.intranet and the worker is ns8-2.example.intranet.

From the Nodes dialog it shows me both nodes, and then from the vertical ellipsis I see ‘Set FQDN’. Okay, for whatever reason it was showing me ns8-1 on both nodes, so I tried to set it to ns8-2 on the second node. This has been aggravating to say the least: it changed the node name on both nodes and keeps reverting it to ns8-2 on the first node no matter what I do to /etc/hosts and /etc/hostname. Also, clicking ‘Set FQDN’ now yields an error: hostname -d exits with non-zero status.
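For context, hostname -d derives the domain from how the short hostname resolves, typically via /etc/hosts, so it exits non-zero when no domain can be found. A quick sketch of what I mean (the names are from my setup):

hostname       # short name, e.g. ns8-1
hostname -f    # FQDN, e.g. ns8-1.example.intranet
hostname -d    # domain, e.g. example.intranet; exits non-zero if none resolves
# the usual Debian-style /etc/hosts line that keeps all three happy:
# 127.0.1.1   ns8-1.example.intranet   ns8-1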

So, I’m shutting down node-2 and going to power on only node-1 for a few minutes while I try to set its FQDN. Will let you know how it goes.

Okay, false alarm, I guess. I closed all my browser tabs to the node cluster and powered them both off. Starting node-1 and then node-2 brought everything up correctly, and the nodes now work with the right names. Not messing with the GUI for that information anymore.

Okay, out of curiosity, I clicked the Set FQDN button with the globe on the Nodes page for node 1. It shows
Hostname: ns8-1
Domain: example.intranet

and exactly the same thing for node-2.

Now, I have separate DNS servers running unbound with a zone file for example.intranet, and they resolve correctly on both nodes. So I can only think that the GUI itself is pulling the wrong information for the node FQDN: both nodes must be getting the same FQDN. I’ll spin up a third node just to see.
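For the record, the unbound side is nothing exotic. A minimal sketch of the kind of config I mean (the addresses are illustrative, and local-data is just one way to do it; an auth-zone with a real zone file works too):

server:
    interface: 0.0.0.0
    access-control: 10.99.0.0/24 allow      # let the lab network query
    local-zone: "example.intranet." static  # answer the internal zone locally
    local-data: "ns8-1.example.intranet. IN A 10.99.0.11"
    local-data: "ns8-2.example.intranet. IN A 10.99.0.12"
# everything else recurses to the root servers, which is unbound's default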

@laidback_01

Hi Jack

And welcome to the NethServer community.

NS8 is still not released, so a “bug” may be a little strong, considering you haven’t even mentioned WHAT exactly you installed.

Now, since I doubt you have three spare boxes lying around, I’ll assume you’re running this as VMs on a hypervisor.

Second probable fact: for convenience or whatever, you used the VM images from the download site.

Now, these VMs aren’t exactly a dev priority, at least until release is reached! They do have bugs and aren’t the latest and greatest “automatic builds”.
To really test NS8, I’d suggest setting it up from scratch. I prefer Debian personally, but that’s your choice. Spin up a VM, then install NS8, including updates, as per the instructions.

I think you’ll find MUCH better results!

My 2 cents
Andy

TL;DR:
I installed on three Debian 12.2 VMs via this URI: Installation — NS8 documentation

Alright, I’ll play.
These are Debian 12.2 instances. One resides on a TrueNAS server as a VM; the other two are on a four-node Proxmox installation that has a 288 TB, 72-OSD Ceph cluster.

Each of these installs has a thin-provisioned 5 TB RBD volume (a ZFS volume on the TrueNAS machine). They also have 6 GB of RAM and 4 vCPUs.

The machines are clean installs that got WireGuard first; I set up wg1 because Neth overwrites wg0 with its own virtual NIC. Two of these VMs are on the Proxmox installation; the TrueNAS is in a separate building connected via a 10G network. That building is on a separate VLAN, so I’m using WireGuard to get them all on the same network. I also set up two unbound VMs: one is a jail on the TrueNAS, a simple FreeBSD 13.2 system that runs only unbound with a zone file and does its lookups against the root servers.
The other is on my backup environment for the TrueNAS, a FreeBSD 14 system; I just installed unbound on the root system there because it matters little.
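For anyone replicating this, wg1 is just a stock WireGuard config on a second interface, something like this sketch (the keys, addresses, and endpoint are placeholders from my lab, not anything NS8 ships):

# /etc/wireguard/wg1.conf on an NS8 node; NS8 claims wg0 for its own
# cluster VPN, so the pre-existing tunnel lives on wg1
[Interface]
Address = 10.99.0.11/24
PrivateKey = <node private key>
ListenPort = 51821

[Peer]
# the TrueNAS building on the other VLAN
PublicKey = <peer public key>
Endpoint = truenas.example.intranet:51821
AllowedIPs = 10.99.0.0/24

# bring it up and persist it across reboots:
# wg-quick up wg1 && systemctl enable wg-quick@wg1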

All servers (the three NS8 machines, the TrueNAS, the FreeBSD backup, and the unbound cache servers) are on a separate WireGuard network.

Because of how Neth installs, I’ve edited the firewall on the NS8 Debian machines to forward all traffic on wg1 to the local NIC, which listens on 53, 139, 443, 445, etc. This happens on all three NS8s now, so traffic can be pointed at any of their apparent NICs and get correct responses.
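Since NS8 brings firewalld along, “adding a zone and using it” boils down to a few firewall-cmd calls. Roughly this (the zone name and exact service list are mine, adjust to taste):

firewall-cmd --permanent --new-zone=wg1zone
firewall-cmd --permanent --zone=wg1zone --add-interface=wg1
# let the VPN side reach DNS, Samba, and the web UI:
firewall-cmd --permanent --zone=wg1zone --add-service=dns
firewall-cmd --permanent --zone=wg1zone --add-service=samba
firewall-cmd --permanent --zone=wg1zone --add-service=http
firewall-cmd --permanent --zone=wg1zone --add-service=https
firewall-cmd --reload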

I’ve joined the TrueNAS to the Samba-provided domain and it’s working as expected: groups, etc. I’m setting up a FreeBSD-based NextCloud and that’ll join shortly as well.

My apologies for focusing on the problem rather than the infrastructure. I have a non-standard environment and wanted to see how Neth plays. I’m comparing it to FreeIPA (where I have four nodes installed) and a single-node Univention Corporate Server. So far it’s pretty good.

FYI, this is just weekend play time. This is not a production environment.

I tried this all on Rocky Linux yesterday, and because of this odd Set FQDN behavior I changed to Debian. Since Linux is … well, Linux, and these are all just containers, I wanted to see whether the behavior would appear the same between the two installs.

You have a good day.

@laidback_01

I do happen to be the local Proxmox “guru” here, at least others have called me such. :slight_smile:
I have about 30 clients all running Proxmox as Hypervisor, and almost all using “Shared Storage” for clustering and fast migration.

Impressive PVE setup; you’re the first I’ve met running a Ceph cluster!

I am also a longtime TrueNAS / FreeNAS user; however, due to the known issues TrueNAS has as a hypervisor (due to its BSD underpinnings), I’ve stopped “testing” TrueNAS as a hypervisor and use it only as a NAS. I am also an OPNsense user, so I’m quite familiar with all BSD variants. I also use virtual and real OpenMediaVault as NAS, but my clients (30+) mostly use Synology NAS (6 or 8 bays) as storage.

As you’re aware, NS8 is still in pre-release status. As such, bugs are possible and do exist.

-----

Part 2 (!)

Great!

Fine!

Fine too!

But this is against the requirements of a “clean” install!!!

And then, shudder, a firewall running on a “clean” install?

And it’s not clean in the sense that WireGuard is a major foundation of NS8, yet you’ve preemptively set up the VMs to run a WireGuard VPN.

Your reason? To connect the TrueNAS with the Proxmox-based Ceph cluster.

Why not do it the logical way, and keep the test nodes together? I’d have set up all three on the Proxmox.

You can couple two networks, e.g. using a VLAN or VPN on both Proxmox and TrueNAS, depending on what your network allows.

My personal suggestion:

Start anew with nothing on the VMs, really clean installs; this (your WG install) is a MAJOR issue for NS8! No firewalls in the OS, and especially not on the Proxmox foundation. This test sandpit is unreachable from the Internet as it is!

Your Proxmox environment seems large enough; keep the test environment there. You can even create a dedicated bridge to isolate it, as sketched below.
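A dedicated bridge with no physical uplink is a few lines in /etc/network/interfaces on the PVE host, something like this (the names and addresses are just an example):

# an isolated test bridge: no bridge-ports, so the sandpit can't leak out
auto vmbr9
iface vmbr9 inet static
    address 10.200.0.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0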

Don’t introduce TrueNAS into a mixed environment with Proxmox; they are not on par, nor equal. TrueNAS has known issues as a hypervisor, mostly due to CPU handling in BSD.

Another personal bit of advice:
Never join VM storage to an AD provided as a VM.
Some NAS won’t even allow NFS access for backups when connected to AD (NFSv3 has absolutely no user/group authentication, only per-host).
A friend ran into this issue when trying to do a backup to a NAS on VMware ESXi… :slight_smile:
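To illustrate the NFSv3 point: access is granted per host in /etc/exports, so AD users and groups never enter the picture (the path and subnet are just an example):

# /etc/exports on the NAS: NFSv3 trusts hosts, not users
/mnt/backups  192.168.1.0/24(rw,no_subtree_check)
# any allowed host gets in with whatever UID it presents; AD ACLs don't apply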

You can PM me, if you have specific questions.

My 2 cents
Andy

And then, shudder, a firewall running on a “clean” install?

What? It’s the firewall provided by NethServer. I’m just adding a zone and using it.

Your reason? To connect the TrueNAS with the Proxmox-based Ceph cluster (no word on how many Proxmox PVE nodes are actually running here…).

There are four nodes. And this is practice for a real-world application. WireGuard encapsulated in WireGuard is fine; I’ve been doing it for some time.

The real-world situation will be a 15 TB TrueNAS machine in a location 38 miles from campus. That machine will be behind a Starlink with a private IP (CG-NAT), using the stock Starlink everything. I’m not changing that; it’s not my setup. So I’ll WireGuard-connect it back to campus and run the leader node on campus, a worker on the TrueNAS, and a DNS jail on the TrueNAS; a backup VM for that TrueNAS will be on the Proxmox, and unbound will also run there providing DNS. I’ll change example.intranet to the ‘real’ local domain when it’s all installed. Not sure I’ll use a third NS8 node, as I really don’t need it; I’m just screwing with it to see what the NextCloud container is like. Pretty sure it’s going to suck compared to a FreeBSD-based NextCloud install the way I’m used to.
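Behind CG-NAT the Starlink side can’t accept inbound connections, so that leg only works if the remote peer dials out and keeps the NAT mapping alive. The relevant fragment (the campus endpoint is a placeholder):

# [Peer] section on the TrueNAS-side WireGuard config: the CG-NAT side
# must initiate, and PersistentKeepalive keeps the NAT mapping open
[Peer]
PublicKey = <campus public key>
Endpoint = vpn.campus.example:51821
AllowedIPs = 10.99.0.0/24
PersistentKeepalive = 25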

Never join VM storage to an AD provided as a VM.
Some NAS won’t even allow NFS access for backups when connected to AD (NFSv3 has absolutely no user/group authentication, only per-host).

This varies by install, I think. FreeBSD has always had the option of host<->jail and host<->VM communication without issue. I tried TrueNAS SCALE and went back to CORE quite quickly due to significant expectation failure.

Don’t introduce TrueNAS into a mixed environment with Proxmox; they are not on par, nor equal. TrueNAS has known issues as a hypervisor, mostly due to CPU handling in BSD.

I’ll take it under advisement, and I do agree that it’s not equal to Proxmox. However, all things considered, I’d rather use bhyve-backed VMs than KVM-backed VMs. I wish I could find a nice system that used FreeBSD or OpenBSD with its bhyve virtualization over something KVM-backed.

For what it’s worth, I don’t use TrueNAS for much; it’s just handling data, and that’s all it’s going to be doing at the site. It has one VM and a jail that really doesn’t do much.
In the same way I treat TrueNAS, I will likely never actually use the applications provided by NethServer. I never used a single app provided by TrueNAS or UCS. I’ll try the NextCloud provided by Neth, but fully believe that will be time wasted.

I take it this thread is done. Have a good day.

Agreed, WG is by far the most efficient VPN, with around 50% more VPN throughput than IPsec or OpenVPN.
It still has issues when failover is used: both provider failover and firewall failover with CARP can create issues for WG tunnels, site-to-site and road warrior alike.

However, just the fact that WG is on the VM can screw up the node, it seems; it’s not only wg0. The workaround is by far not upgrade-safe!

I did say “some NAS”; as with hypervisors, not all NAS were created equal! :slight_smile:

I do run more or less the full application stack of NethServer for my 30+ clients, and all are very happy with it. Especially with NextCloud!
I hope NS8 will be just as rock solid as NS7 was, but I do trust our devs!

Have a good day too!

My 2 cents
Andy

I stood up two totally separate Debian boxes. One is an older Supermicro 1U with 16 GB RAM and four 1 TB SATA drives; that system is built with hardware RAID handling the drives, because setting up Debian with mdraid is too much effort for this. The other box is an older Dell T320 with 32 GB RAM and two drives; the Dell PERC handles the mirror because of the same (I’m lazy) issue.

Anyway, these are plain-jane installs: nothing but Debian 12.2 and then the install command:

curl https://raw.githubusercontent.com/NethServer/ns8-core/ns8-stable/core/install.sh | bash

I created the Dell as the leader and joined the Supermicro to it. Open the vertical-ellipsis menu on the Nodes page and click Set FQDN.

Make sure you are running the debugger in your browser.

You’ll see it’s pulling

node/1/task/b181c995-65b8-45a8-a70a-6d26dc5fb46d … action: “get-fqdn” from queue: “node/1/tasks”

EVERY time, from EVERY menu, no matter whether I click on Node 1 or Node 2.

This is a GUI bug, not a user-screwed-up bug. It’s also easily repeatable; I’ve found this bug on both Rocky and Debian. And it’s not at all critical, I really don’t care; I just want to make it clear this wasn’t me doing it wrong…
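If anyone wants to separate the UI from the OS here, a quick check from outside the GUI (the hostnames are from my lab):

# run against each node over SSH; each should report its own FQDN
ssh root@ns8-1 hostname -f   # expect ns8-1.example.intranet
ssh root@ns8-2 hostname -f   # expect ns8-2.example.intranet
# if the OS answers correctly for both, the stale value lives in the UI,
# which matches the get-fqdn task above always being queued to node/1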
