Advice on setup of home server

Hi all,

I need some advice on how to set up my home server. The services that need external access are Nextcloud, S3 (MinIO), and a mail server. For storage, I have a ZFS-based NAS with ~30 TB of capacity.

Currently, I have separate VMs for these services, running native Debian. Since Nextcloud and MinIO each hold several TB of data, my current setup simply mounts an NFS share from the NAS into the respective service VM, and application data is stored directly on the NFS mount. That way, I get ZFS snapshots and replication to back up the user data on the NAS.
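For illustration, the mount on one of the service VMs looks roughly like this (the NAS hostname, export path, and mountpoint below are placeholder examples, not my actual names):

```shell
# /etc/fstab on the Nextcloud VM -- "nas.lan" and the paths
# are placeholder examples, adjust to your environment
nas.lan:/tank/nextcloud  /srv/nextcloud-data  nfs4  rw,hard,_netdev  0  0
```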

How can I replace such a setup with a NethServer VM for the internet-facing services? The reason is that I want to simplify maintenance. I feel that going for NS7 does not make sense, since it will be replaced by NS8 soon. In NS8, however, mounting storage for the services via NFS seems to be unsupported.

While for Nextcloud I could work around this with external storage, I do not see any way to store MinIO or email data on my NAS. I do not want to provision a ~20 TB virtual disk to hold all that data in NS8 directly, as that would break my backup concept (which works via ZFS snapshots/replication on the NAS).

Any ideas on how this could be solved?
Thank you!

Hi @jaywalker

So in your imagination, ZFS snapshots, made on the same storage your live data is on, are to be considered “backups”?

For one thing, snapshots are dependent on the real data being available and consistent.

Second, even in the days of good old DOS, one learned not to put backups on the same disks as the data.

The NAS controller fails, the ZFS mirror breaks, or any of plenty of other things go wrong - and you’re screwed!

Learn real best practices for networking and backups, and test them!

If you want to test your backup concept, you need to be able to pull the plug on your data storage, and restore your data without your existing storage. That’s a backup to save your butt!

My 2 cents

Hi Andy,

thank you for your reply. However, I think you got me wrong. I wrote “snapshots and replication” for a reason: I use both. Snapshots on the NAS, then zfs send/receive to create a backup on another machine, including the snapshots. This gives me more than a year of history on two machines, with redundant storage on both. Additionally, I do an offsite backup with borg, and test restores from the backup from time to time. I think this is sufficient to keep my data safe. :slight_smile:
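Roughly, the replication part looks like this (pool, dataset, snapshot, and host names here are only placeholders, not my real ones):

```shell
# On the NAS: take a recursive snapshot of the data dataset
# (all names below are placeholders)
zfs snapshot -r tank/data@2024-05-02

# Replicate incrementally to the second machine, preserving snapshots
zfs send -R -i tank/data@2024-05-01 tank/data@2024-05-02 | \
  ssh backup-host zfs receive -F backup/data

# Offsite backup of the same data with borg
borg create ssh://offsite-host/./repo::2024-05-02 /tank/data
```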

I did not put these details in my initial post because I do not want to discuss my backup strategy, but rather to learn how I can simplify the setup of the server providing the actual services. Any advice here?

Thank you!


Then it’s not too bad, but could be better!

This depends strongly on what you’re using as Hypervisor!

I use Proxmox as my favorite hypervisor, for myself and my roughly 30 clients.
Additionally, I use PBS as local backup and a second PBS for offsite backups.

This gives me very fast live backups (incremental), with full file access.
Additionally, I have the advantage of Deduplication…

One Example:

A client has, among other VMs, his main NethServer running as a VM on Proxmox. It is also a file server, holding 1.2 TB of data.

A backup to NAS takes about 2.5 hours. The first full backup to PBS takes slightly less than that (faster!); subsequent backups take about 4-5 minutes.

The PBS has a mirrored ZFS pool as storage, made of 2x 4 TB rotating disks, for a total of about 3.7 TB.

At present, I have 22 versions (daily, weekly & monthly) of this VM alone (and of all 10 other VMs), and still the PBS disks aren’t full! Neither is the offsite backup.

Does your strategy come close? :slight_smile:

My 2 cents

Hi Andy,

thank you, that sounds interesting! I understand you run NethServer in a VM under Proxmox, and provision enough storage so that all user data can be stored in the VM. Then, with PBS, you get efficient incremental backups of the whole NethServer VM.
I have two questions about this setup:

  1. When you just need to restore a single file from the backup (e.g. because a user accidentally deleted it), can you do so easily, or do you have to restore the whole nethserver VM to access older versions of files? I.e., can you mount and access the files inside the backed-up versions of the VM image?
  2. How strong is the dependency on / vendor lock-in to Proxmox? What happens if, e.g., they go out of business, change their license model, …?

Thank you,


Proxmox uses the same license system as NethServer:

  • You can run it completely free.
  • You can purchase a maintenance contract; for Proxmox the cheapest version costs 100 € / CPU socket / year.

This gives you access to “tested” updates, with very good and fairly simple licensing conditions.

My clients all run licensed versions, but for home / lab / friends I also use the free version.

The paid version needs about 4x fewer reboots due to kernel upgrades than the free version!

No lock-in; you’re free to download a compilable version any time you want.
Proxmox is a very big contributor to all the projects it builds on, especially ZFS and Ceph…

It’s basically a Debian OS, but with a better, ZFS-oriented boot option!

So no “blobs”!

I switched over from VMware (a VMware user since before 2000!) about 8 years ago, and have NEVER regretted it.
So much better hardware support (anything Debian will accept!), very good ZFS and Ceph support built in, and amazing (free or paid) backups with PBS.

Restoring a file or folder is VERY simple and flexible:

  1. Choose the backup version you want.
  2. Navigate to the folder (like a browsable file system).
  3. Choose the file or folder to restore.

You can then download the whole folder or file as a ZIP to your workstation, unzip & check it, and then copy it over to where it is needed!
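If you prefer a command line over the GUI, PBS also ships a `proxmox-file-restore` tool for browsing and extracting files from VM backups. A rough sketch only - the repository, VM ID, snapshot timestamp, and paths below are made-up examples, so check your own datastore for the real names:

```shell
# List the contents of a VM backup snapshot
# (repository, VM ID and timestamp are example values)
proxmox-file-restore list "vm/100/2024-05-01T02:00:00Z" / \
  --repository root@pam@pbs.lan:datastore

# Extract a path from that snapshot into ./restored
# (the in-snapshot path is a placeholder; use the one "list" shows you)
proxmox-file-restore extract "vm/100/2024-05-01T02:00:00Z" "/drive-scsi0.img.fidx" \
  ./restored --repository root@pam@pbs.lan:datastore
```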

My 2 cents


Both Proxmox PVE and PBS can run well on one of these:

Think 64 GB RAM, one 120 or 250 GB NVMe for the system, and 2x 1 or 2 TB SSDs in a ZFS mirror for VMs…
For PBS, that would be 2x 4, 6 or 8 TB rotating disks, e.g. Seagate IronWolf or even Exos!

Saves power, heat, price and space! :slight_smile:

This might give you some ideas: