Replacing Win Server 2016 with Nethserver?

I’m going to have to put in a plug for ZFS here. It’s stable on Linux, it checksums your data to avoid silent data corruption, it rebuilds much faster than traditional RAID (since it knows which blocks are in use), it’s easy to expand volumes (though there is a right and a wrong way to do it), and it works on pretty much every major OS except Windows. Running Neth from a ZFS root is doable, but takes a bit of hacking (see Neth on ZFS root filesystem?). But using ZFS for a storage volume should be very straightforward.
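To illustrate the right and wrong way to expand, a quick sketch (hypothetical pool named tank, made-up device names):

# Right: grow the pool by adding a whole new redundant vdev
zpool add tank raidz1 sde sdf sdg
# Wrong: this tacks a single non-redundant disk onto the pool; ZFS will
# refuse the mismatched redundancy unless forced with -f, and a top-level
# vdev can't easily be removed once added
zpool add tank sdh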

3 Likes

Sorry if I am a bit slow :thinking:, but would I be setting them to an external DNS like 8.8.8.8?

@vhinzsanchez Thanks for the explanations. The 4TB drives are WD Reds, so NAS grade. Speed ultimately isn’t a concern of mine as it’s a home server. They have proven reliable for me over the years, which is more important to me.

If I write about my setup, should I start a new post or include it here?

I’ve never had to use, as you put it, chipset/fakeraid, but I am aware it’s not really hardware RAID. I wasn’t sure whether to set it at the chipset/BIOS AND in mdadm, or only one or the other. I also wasn’t sure if there’s some alternative that’s more like the Windows software I currently use, called Drivepool. The basic gist of it is: A) pool any number of drives of various sizes/types into a single drive, B) duplication can be set on a per-folder basis, C) drives are just NTFS, so they can be pulled out of and put into the pool without destroying data.

Point A is my primary objective. B would be nice, but I can “settle” for losing 1/4 of my storage space to redundancy. Point C, specifically drives going in and out without destroying data, would be the cherry on top but isn’t really required.

SSM appears to be able to handle point A; I’m unsure about point B. Would it be something like SSM using btrfs?

Do you think it would be viable to use both BackupPC and UrBackup? UrBackup for the Windows clients and BackupPC for my Linux client?

Either way, my implementation will only involve a handful of computers, likely 5 clients tops (if I build a desktop or something else in the future).

I have considered this as a possibility, for my data pool anyway. I would keep the OS on my 120GB SSD with a more standard filesystem.

My understanding, however, is that ZFS requires a lot of RAM and that it’s ideal to use ECC RAM. Is that not the case?

I currently have “only” 8GB RAM (non-ECC) and am not keen to put any more money into this box. I was under the impression that the rule of thumb with ZFS was roughly 1GB of RAM per 1TB of storage, so I would be about 50% short in that regard if true.

If I am completely incorrect, that may be the answer then.

As UrBackup works separately, only between urbackup-server and urbackup-client, I think there should be no problem using both on the same machine.
It’s your choice if you want to have two different systems.

2 Likes

ECC is recommended for any server, or any other application where data integrity is important, but there’s nothing about ZFS that makes it uniquely more important. Similarly, more RAM is always good, but you’d likely be fine with 8 GB.

1 Like

https://en.wikipedia.org/wiki/ZFS

Under “Deduplication”

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.

Not being entirely familiar with ZFS, maybe this only applies under circumstances that are not typical?

I’d just hate to invest the time setting everything up only to find out that my choice of storage tech won’t be feasible for me. That said, I am, and have been, intrigued by the prospect of ZFS.

Yes, that’s when using deduplication. You don’t typically want to use that, and it isn’t enabled by default.
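You can confirm it yourself; on a hypothetical pool named tank, the property defaults to off:

zfs get dedup tank
# NAME  PROPERTY  VALUE  SOURCE
# tank  dedup     off    default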

1 Like

Ah, OK. Thank you.

So it would be feasible to install Nethserver on the 120GB SSD using the defaults (for partitions/filesystem) and then use ZFS to pool my 4x 4TB disks into a single drive/partition? If I went with ZFS, I presume I would ignore the RAID from my motherboard’s SATA controller?

Someone correct me if I am wrong, but I think at this rate I have 4 possible options for pooling my storage:

  • SATA controller RAID 5
  • mdadm (a rough sketch follows this list)
  • SSM/btrfs (if I understand it correctly?)
  • ZFS
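For reference, my understanding of what the mdadm route would look like (a rough sketch with made-up device names; it would be used instead of, not on top of, the chipset RAID):

# Build a 4-disk RAID5 array, then format and mount it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/pool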

To narrow it down, which of the above would be the simplest to implement giving me a pool of storage and the ability to survive a single drive failure (or is there a better, yet unmentioned option)?

Should be.

Correct.

Oh, heavens, no. btrfs is broken by design, and parity RAID especially so.

ZFS probably wouldn’t be the simplest to implement, but it’s well-documented. You’d first need to install ZFS itself (can’t link to instructions here because stupid network filtering at work blocks the zfsonlinux.org site, but my other thread links to the CentOS page there). Once it’s installed, creating the pool is simply:

zpool create -m /path/to/mountpoint tank raidz1 sda sdb sdc

Breaking it down, this command creates a ZFS pool, passing the mountpoint option (so your pool will be mounted there, no need to mess with fstab), with the name of “tank” (apparently the ZFS developers, or documentation engineers, watched The Matrix a little too much, though you can use any name you like). The pool is set to RAIDZ1 or single-parity RAID (comparable to RAID5), and consists of sda, sdb, and sdc. That will run for a few seconds, create and mount your pool, and you’ll be in business.
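Assuming that succeeds, a quick sanity check (same hypothetical pool name):

zpool status tank   # shows the raidz1 vdev and its member disks
zfs list tank       # shows usable capacity and the mountpoint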

Now, the FreeNAS folks (who tend to be rather fanatical about protecting their data) would want to point out a few things:

  • They really don’t like RAIDZ1 for disks larger than about 1 TB. Statistically, there’s a high chance of a data error during a pool rebuild, which could result in data corruption. This issue is not at all unique to ZFS; it’s common to any single-parity RAID. ZFS is better-protected than most filesystems, because all data and metadata are checksummed, and all metadata has at least two (and as many as six) copies. Nonetheless, with disk capacities and error rates as they are, the risk is there.
  • To confirm data integrity, and clean up any silent data corruption, you’d want to scrub your pool periodically (every couple of weeks or so). Set up a cron job to run zpool scrub tank to do that (a sample entry follows this list).
  • You’d want to set up regular SMART tests and SMART monitoring for your disks; that would be in /etc/smartd.conf.
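As a sketch of those last two points (the schedule and paths are just examples, not gospel):

# /etc/cron.d/zfs-scrub: scrub the pool at 02:00 on the 1st and 15th of the month
0 2 1,15 * * root /sbin/zpool scrub tank

# /etc/smartd.conf: monitor all SMART attributes and run a long self-test
# every Saturday at 03:00 (one line per disk)
/dev/sda -a -s L/../../6/03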
2 Likes

All very helpful info, thanks!

What are your thoughts on the above regarding the ZFS mount point? Would I just set the mount point of the ZFS pool straight to:

/var/lib/nethserver/ibay

If so, would that effectively make shares created in the GUI on Nethserver be placed on the ZFS pool instead of the 120GB SSD?

With the caveat that I haven’t tried it, I’d expect it should do exactly that.
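For example (untested on my end, and assuming a pool named tank), setting it after the fact should be a one-liner:

zfs set mountpoint=/var/lib/nethserver/ibay tank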

1 Like

Alright, well ty everyone!

All my data is backed up. When I get home today I am wiping the server, installing and configuring Nethserver and wiping a Win 10 laptop to connect to Nethserver.

I think I am going to start by trying ZFS for my 4x 4TB drives with the OS on the 120GB SSD.

Presuming it all goes according to plan, I will wipe and connect the other Win 10 laptop tomorrow.

I am then going to consider whether or not to AD-join my Manjaro laptop. I presume it should be feasible, as I have been doing, to simply connect it to the desired shares without joining it to the AD/domain? Basically, I have the Manjaro laptop set up how I like and am not too keen on wiping it and then re-tweaking it if I can avoid it.
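For reference, the way I have been connecting is roughly this (server, share and user names are placeholders):

sudo mount -t cifs //server/share /mnt/share -o username=myuser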

I will likely try both BackupPC and UrBackup and either use both or decide which works best in my setup.

I am likely to use ClamAV on the server, as per the recommendations of @m.traeumner, as well as on my Manjaro laptop. For the Windows 10 machines I am considering purchasing a consumer solution from either Bitdefender or McAfee in place of my current business server/endpoint setup using Bitdefender.

I am then excited to poke around in Nethserver and find new modules, programs, etc. to get the most out of it.

I would of course be happy to report back with the progress, would it be best to start a new thread for that or to simply continue it in this thread?

Again, thank you all. It’s rare these days to find software with decent documentation. Even rarer is finding software with a friendly, helpful and knowledgeable community. Nethserver has nailed both.

1 Like

If you’ll be using something more than ibay shares, I think it would be better to do custom partitioning for /var/lib/nethserver (where NethServer saves most of its data) instead of /var/lib/nethserver/ibay.

Excuse my ignorance on the matter but…

What do you mean by this?:

I am not sure exactly what an “ibay” share is and/or what alternatives there are to it.

As for custom partitioning, could you maybe elaborate on the details of how I should approach it?

To clarify (and please correct me if it does not fit in with the Nethserver paradigm) I am very much a proponent of separating the OS and Data.

The way I see it, Nethserver and anything related to the OS functioning, updates, programs, etc. would be on my 120GB SSD. The 4x 4TB pool would be used as space for data, some (likely most or all) of which would be made available to others via shares across the network.

So if there is a way to elegantly change the default location of the shares to my pool (either during or after install), then I am all for it. However, another member advised me to mount to the default location instead of trying to alter the location of the shares, as I didn’t find any option in the web GUI to specify the location/path of a share.

Thanks

By ibay share I’m referring to shared folders.

For instance, if you install Nextcloud, its data will be stored under /var/lib/nethserver/nextcloud/; that’s why I suggested using the other mountpoint (to store the data on the bigger disks).

At installation time you can change the storage/partition scheme to assign the mountpoint to the bigger disks’ array/pool. I can’t speak to ZFS.

The location (path) won’t change; you just assign different underlying storage as it suits you.
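For a conventional array, the end result could look like an fstab entry along these lines (the device and filesystem are only examples):

# /etc/fstab: put the big array under /var/lib/nethserver instead of the SSD
/dev/md0  /var/lib/nethserver  xfs  defaults  0 2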

1 Like

So I made the install USB stick using Rufus from a Windows machine.

Booted from USB and got:
‘dracut-initqueue[630]: warning: dracut-initqueue timeout - starting timeout scripts’

It repeated many times, then left me at the dracut emergency shell.

I suspect my best course of action is to remake the boot media and try again? Google didn’t uncover anything that stood out as a common solution.

Make the bootable USB drive with Etcher instead.
Some notes on Rufus: V7.3.1611 Installation doesn't work - #13 by dnutan

Thanks.

Rufus gave a UEFI boot option for the USB but had the issue mentioned. Both SUSE Studio ImageWriter (which comes with Manjaro) and Etcher on Win 10 produced a USB drive with no option to boot UEFI.

I then tried mounting the ISO and copying all the contents to a freshly formatted USB stick (FAT32). It has the UEFI boot option but immediately gives a message stating:

“Invalid magic number”

I am now going to try using dd from my Manjaro laptop (sketched below). After that I am out of ideas :confused:
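Something like this, I believe (nethserver.iso stands in for the actual ISO name, /dev/sdX for the stick; I will triple-check the device, since dd will happily overwrite the wrong disk):

sudo dd if=nethserver.iso of=/dev/sdX bs=4M status=progress oflag=sync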

Hi @Zer0Cool,

you may change to legacy boot in BIOS instead of using UEFI…

I use e2b for USB booting and it works pretty well. Make an e2b USB stick, copy an ISO onto it, and just boot it.

http://www.easy2boot.com/

I use it for NethServer, Proxmox, ESXi, Windows and more; it boots nearly everything:

http://www.easy2boot.com/add-payload-files/list-of-tested-payload-files/