A new Debian 12 based install of NS8, running on a low-powered Proxmox 8.2.2 (Odroid H3+)

Hello All

Here is a Testimonial (and HowTo) about installing a new NS8 VM, based on the latest Debian 12.5, on a fairly low-powered Proxmox hypervisor.


The basis for Proxmox is a small Odroid H3+ based box (Type7). Technical data / specs:

CPU: Quad-core Intel N6005, 2 GHz
RAM: 64 GB
Storage: 2 TB Samsung NVMe with cooler
NICs: 2 × 2.5 Gb/s

Storage is additionally available via a dedicated 2.5 Gb/s storage LAN, but it is NOT used in this use case.
Backup is available via NFS to an OpenMediaVault NAS, and PBS.

All local storage is configured as ZFS, with RAM usage limiting in place.
The system boots ZFS.
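For reference, the ARC limit is set the usual Proxmox way; the sketch below is only an example (the 8 GiB value is illustrative, not necessarily what this box uses):

# Cap the ZFS ARC (example value: 8 GiB), then rebuild the initramfs
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all
# After a reboot, verify the active limit:
cat /sys/module/zfs/parameters/zfs_arc_max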

This system uses a maximum of 60 W of power!


Firewall / DNS / DHCP is done at my home site with OPNsense.
I created a DHCP reservation for my NS8 VM, even though it is configured as static on the VM itself.
An FQDN entry was also created in the DNS before the actual installation.

The NS8 will be using 172.25.90.21/24 in this use case.
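On the Debian side, the static configuration ends up looking roughly like the sketch below; the interface name and gateway are assumptions for illustration only:

# /etc/network/interfaces (sketch; interface name and gateway are assumed)
auto ens18
iface ens18 inet static
    address 172.25.90.21/24
    gateway 172.25.90.1
# DNS goes into /etc/resolv.conf, pointing at the OPNsense (Unbound)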

DHCP reservation

DNS (Unbound) entry:

An Alias (CNAME) for the AD…

I’m including screenshots for my specific OPNsense, but they might also help anyone interested who is using something else (NethSecurity, pfSense)…


The VM was created on Proxmox with the following settings:

The install image is the latest Debian 12.5 ISO (Netinstall).

I use Debian’s GUI installation, here with local language settings (German / Swiss German).
I use Btrfs as the file system, all on a single VM disk (200 GB, which can be enlarged later if needed).
I choose a minimal system, deselecting both pre-selected desktop options, and selecting only SSH server.


After the installation of Debian 12, I remove the ISO image, update the system with

apt update
apt upgrade

and install what I need:

apt install mc nano htop screen snmp snmpd curl

Set SSH to allow root login

nano /etc/ssh/sshd_config

and set the line

PermitRootLogin yes

save & close

and run

systemctl enable ssh
systemctl restart ssh
systemctl status ssh
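As an alternative to editing the main file (just a sketch; the drop-in file name is my own choice), the same setting can live in a drop-in, which Debian 12's default sshd_config already includes:

# Drop-in instead of editing sshd_config directly
echo "PermitRootLogin yes" > /etc/ssh/sshd_config.d/10-root-login.conf
sshd -t && systemctl restart ssh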

I then also set up SNMP

mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf_orig
nano /etc/snmp/snmpd.conf

Add in as needed:

rocommunity public
syscontact Admin <netmaster@anwi.ch>
syslocation ANWI Consulting R9

save & exit and set:

systemctl enable snmpd
systemctl restart snmpd
systemctl status snmpd
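A quick check that the daemon answers (snmpwalk comes from the snmp package installed earlier; community string as configured above):

# Walk the system group (1.3.6.1.2.1.1): sysDescr, sysContact, sysLocation, ...
snmpwalk -v2c -c public localhost 1.3.6.1.2.1.1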

After a reboot, Debian 12 is ready for NS8…


Due to a bug in the installation (now officially fixed and distributed, so the “normal” method can be used as per the instructions), I used a special command on the CLI (via SSH).

This was part of testing, in coordination with @Tbaile / @davidep - many Thanks!

curl https://raw.githubusercontent.com/NethServer/ns8-core/main/core/install.sh > install.sh 
bash install.sh ghcr.io/nethserver/core:latest ghcr.io/nethserver/traefik:setting-backoff-on-init

Not a single glitch!
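(Note for anyone installing after the fix: the Traefik image override above should no longer be necessary, so running the same script without extra image arguments should be enough; as always, check the official NS8 documentation for the current install command.)

curl https://raw.githubusercontent.com/NethServer/ns8-core/main/core/install.sh > install.sh
bash install.sh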

Logging into the cluster, I set up what I needed (this is NOT a migration from NS7!).
It looks like this now:

I’m using AD and File Server, and intend to use Mail, Nextcloud and maybe others…

There is still a lot to do: additional modules / apps I’d like to install / test, and also setting up NS8’s own backup. The main backup will be PBS.


I would like to add that, at the same time, I prepared a migration for a client, using the same hardware at his SoHo: an Odroid H3+ with identical specs to mine above.

This case is a migration from NS7, which is also running on this small hardware. There is even a Win10 workstation running, for remote access / work.

The actual migration is planned to be finalized this weekend. The current status:

Proxmox and NS8 are both multitasking capable, but so am I: I initiated an additional 4 NS7-to-NS8 migrations last night! :slight_smile:


I hope to motivate people and other users with this small Testimonial / HowTo.
Especially those interested in using Debian!
But also those interested in running either of the other supported distributions (Rocky / Alma) on such hardware for Home / Lab / SoHo use.

My 2 cents
Andy


Thank you Andy!

Now you need a special command to update the core to the next stable release.

I’ll write it here when they are available.

Edit:

On the Software Center page, push the Refresh button, then run this command:

 api-cli run update-core --data '{"nodes":[1],"force":true}'

The force flag is needed to replace and update images with non-semver tags, like “latest”.


2 posts were split to a new topic: Change the cluster-admin UI language

Salut @Andy_Wismer,

You are giving me hope for NS8!

As you recommended a few months ago, I have the exact same H3+ hardware.

The only problems I had were:

  • a bad 2.0 TB disk in the mirror.
  • a hard-to-find problem: a bad memory module failing at the high end of its range.

I replaced the disk and the memory, and everything is now working perfectly.
From now on, I will use only high-end industrial-grade disks and memory.

After the installation of Proxmox VE, while nothing else was installed yet, I deleted only the local-zfs storage, kept its folder, and recreated it using the exact same name and folder location, so that I am able to roll back to an older snapshot.

The only difference in NS8 is the QEMU Guest Agent. (Just in case I move the VM or only its disk)
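For reference, getting the agent going inside the Debian guest takes only two commands (assuming the “QEMU Guest Agent” option is also enabled in the VM’s Options in Proxmox):

apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent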



I would like to see a screen capture of the vmbr0 and vmbr1 of your Proxmox VE host.
I guess the host doesn’t use vmbr0, so that the OPNsense is able to use PPPoE for the connection to the ISP?

Again, thank you so much for the H3+ recommendation; I have never regretted that choice.

Michel-André


Hi @michelandre

My home environment is not quite comparable to yours.
First of all, my OPNsense is a hardware box, and quite a powerful 8-NIC box at that.

My Proxmox, as for most of my clients, only handles the local LAN / Storage / Cluster / Backup LANs.
I only have two clients running OPNsense on Proxmox.

Here are the NIC settings on my Odroid box:
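In text form, the layout follows the standard Proxmox pattern; the interface names and addresses in this sketch are only illustrative, the real values are in the screenshot:

# /etc/network/interfaces on the PVE host (sketch; names and addresses assumed)
auto vmbr0
iface vmbr0 inet static
    address 172.25.90.5/24
    gateway 172.25.90.1
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.5/24
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0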

This client may be more interesting for you: a small Home Office.
The identical Odroid H3+ is running NS7, NS8, a Win10 VM for RDP access, and Home Assistant, all as VMs.

This is most likely what you’re looking for: a client using OPNsense as firewall, running as a VM in Proxmox:

(And yes, this client happens to be in Canada and uses the same provider as you do, but has additional static IPs passed over the PPPoE link. The Proxmox is a very powerful box!)

My 2 cents
Andy


Amazing testimonial @Andy_Wismer, thanks as always for your efforts and time.


I have a few NS8 installations based on Debian 12.5 for testing and as a Proxmox template. How can I identify the bug you mentioned?

Hi @fausp

Here is the exact report, with logs.
It occurs toward the end of the basic installation, where Redis / Traefik come in.

The issue is that Traefik repeatedly fails.

Hope this helps!

My 2 cents
Andy

I can remember the post… What I’d like to do is check whether my “old” installations are OK.

Maybe with a command like: traefik healthcheck - or whatever…

Otherwise, do I have to install everything from scratch?

P.S. - Installation Output 2nd: 2nd Output

Hi @fausp

I can’t say. Maybe @davidep would be better to answer this…

My 2 cents
Andy

OK, tnx Andy!

The bug only affected fresh core installs, and the current Traefik 2.2.2 fixes the issue, so updating your servers should be enough; no need to reinstall. See also Traefik write-hosts startup error · Issue #6912 · NethServer/dev · GitHub


Thank you for your report. I’ve 3 questions:

  1. How did you choose to use BTRFS during the installation process on Debian? I don’t remember such an option.
  2. Do you also use SMB shares as external drives in Nextcloud, as under NS7? How did you configure this?
  3. Did you automatically migrate Nextcloud with SMB shares mounted as external drives during the migrations mentioned above, including the users and groups from the AD and the correct permissions on the shares?

Sincerely, Marko


Hi @capote

Ext4 has fairly poor performance on large disks, and also with large files and with plenty of small files.
XFS would have been my natural choice, but someone suggested Btrfs. I was sceptical at first, based on early experiments with file-level restore from PBS (very important). We tested it; in the meantime, PBS CAN handle Btrfs easily.

The choice appears when you select “single disk” (everything on a single disk). The installer suggests Ext4, but there you can change it to whatever you want.
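After the installation you can confirm the result, for example with:

# Should report btrfs for the root filesystem
findmnt / -o FSTYPE,SOURCE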


No. 2: Not specifically (yet), I just migrated…


No. 3 is also a yes.


Note that these Migrations are all still Work in Progress (WIP).

The first step is doing the “raw” migration of data.
The second step is “finishing” the migration, stopping the AD on NS7 and switching to the AD on NS8.
The third step (planned) is to correct SSL stuff, ports, ACLs etc.


This weekend 4 migrations were planned; one of these 4 had to be postponed due to hardware issues (the UPS needs replacement!). Of the remaining three, two have already completed steps 1 and 2. The third is still in step 1, partly due to its size: that NS7 has 1.4 TB of data…


During step 3, NS7 becomes a “member” of the NS8 AD, if it is still needed for specific migrations. All of these servers have Zabbix running; Guacamole is also used for these clients. DokuWiki is another. These modules all need additional work; automatic migration is not (yet) an option…

@mrmarkuz Zabbix tests are still planned! Delayed due to the Install-Bug (Traefik). :slight_smile:
I also plan to test the manual migration via DB.
Of the completed two, one will get a fresh install of Zabbix (like my home server); the other will get a DB migration, which is also planned for the third migration still in progress.
Feedback / Questions for Zabbix later today!


I hope these answer your questions!

My 2 cents
Andy


PS:

I have done tests with macOS on Proxmox. I have some clients who need an Intel-based Mac for certain tasks. Instead of buying a second-hand Intel Mac on eBay or wherever, I tested the application on a virtual Mac in Proxmox. The application as such works without issues on an Intel Mac virtualized in Proxmox. PBS works without issues for this VM.

I was specifically interested in whether a file / folder level restore from PBS is possible in such an environment…
It seems that even this would be possible, provided the file system in use can be handled on Linux (Proxmox PVE and PBS).
AFAIK, HFS+ can be mounted on most Linux systems, but APFS would have issues. It’s possible that even this has changed!
The application (Horos, for DICOM stuff) is not too picky about which file system is used on macOS, so there is a very good chance this will work. I will do further tests and report on this.
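(Sketch only, not yet tested here: on a Debian-based host, the kernel's hfsplus module should be enough for a read-only mount; hfsprogs only adds the fsck / mkfs tools. The device path is a placeholder.)

apt install hfsprogs
mount -t hfsplus -o ro /dev/sdX2 /mnt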

VirtIO hardware settings (YES!); also, this Proxmox is fully ZFS…:

and this indicates that the qemu-guest-agent is in use (IPs are shown!):

CarbonCopyCloner (CCC v6) works well to “clone” a real Mac to a VM!

→ This could be an interesting option for some of our Mac users here!


Thanks for clarifying!


I did it from scratch and it worked in the same way as with NS7…


Nice job Andy, tnx for sharing the info!


To all (potential) Mac Users here:

I will post a How-To with tips & tricks about using a virtualized Mac for testing environments, especially when using NS8…

Not everyone has a Mac available, much less a spare one for testing, just because a client also uses Macs. This is a good way to get some hands-on experience - but also a good starter for anyone planning on getting a real Mac!

Stay tuned…

My 2 cents
Andy


Didn’t you forget qemu-guest-agent?