Replacing Win Server 2016 with Nethserver?

I am also realizing I am not quite sure how I should approach my storage either.

The 120GB SSD as the OS drive is easy enough, but I am unsure how to handle my 4x 4TB data drives, i.e. whether I should do RAID 5 via the SATA controller, some sort of software RAID from within NethServer (I have heard of mdadm but never used it), or an alternative program to pool the drives that offers some form of parity.

Any help would be great, thanks. Googling around I have found mention of SSM, which seems involved, but if it’s the way to go I’ll do it. I haven’t found much else on the matter.

Since I was mentioned by @dnutan, here are my few words:

I have been using UrBackup for about 1.5 years now, for about 12 Windows clients in a small network. It works stably and is easy to handle. The installation is still on NS 6.9, so it’s UrBackup 1.4.14. I tried BackupPC with the Windows clients and rsync, but was not so happy. UrBackup is really Windows-focused, though. You have to install the client software on every machine you want to back up. For 15 clients that’s not so hard; for many more it would be a lot of work. But with NS 7 and AD it should be possible to deploy it by GPO. Client configuration is done in the GUI on the server. Restoring folders or single files is easy. So all in all I’m happy with it. But I’m a lazy guy, so I didn’t try too many different solutions… :wink:

mdadm is software RAID, as you mentioned, and offers RAID 0, 1, 5, 6 and 10, so it offers parity and failure tolerance. I use it on this installation with 4 smaller disks, all active, no spare. It’s stable, but won’t give the performance of hardware RAID. mdadm is easy to handle IMO and well documented. I found a lot of howtos and have been able to solve all my problems so far with howtos from the internet.
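For reference, creating a RAID 5 array with mdadm only takes a few commands. This is just a sketch; the device names and mount path are examples (check yours with lsblk), and you’d run it as root:

```shell
# Create a 4-disk RAID 5 array from sdb..sde:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# Put a filesystem on it and mount it:
mkfs.ext4 /dev/md0
mkdir -p /mnt/data
mount /dev/md0 /mnt/data

# Save the array definition so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```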
Here is a little tool to calculate RAID levels for storage and failure tolerance:
http://www.raid-calculator.com/default.aspx
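The arithmetic behind such calculators is simple for the common levels; for 4x 4TB the usable capacity works out like this:

```shell
DISKS=4
SIZE_TB=4
# Single-parity RAID 5: one disk's worth of space goes to parity
RAID5_TB=$(( (DISKS - 1) * SIZE_TB ))
# Striped mirrors (RAID 10): half the raw capacity
RAID10_TB=$(( DISKS * SIZE_TB / 2 ))
echo "RAID5: ${RAID5_TB} TB usable, RAID10: ${RAID10_TB} TB usable"
```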

2 Likes

You can install the DNS role from the Software Center; your clients should use NethServer as their DNS. On the NethServer configuration page, go to Network, open the DNS servers tab and set the entries for the external primary and secondary DNS.
You don’t need to use DHCP for this.

2 Likes

Great! Looking forward to reading next week about how your weekend went. Experience is a great teacher.

DNS and DHCP are two different things (as you may already know), but they can be easily integrated, especially in the MS world. As a DNS server, it holds all records of the internal network, forwards external queries (https://www.petri.com/best-practices-for-dns-forwarding) and retains a cache for a certain amount of time.

Be careful about which RAID you are going to use. Hardware RAID is usually an additional card or module, and it is, as they say, the most reliable option. mdadm is the most popular software RAID (actually, I don’t know any other…well, aside from Windows Storage Spaces, if you consider that software RAID). From what I’ve read, and in my experience, it is reliable.

Also be careful with the RAID that motherboards claim to include. Some call it fakeraid or chipset RAID. It may look like hardware RAID since it is directly integrated with the motherboard, but in reality it is software RAID handled by the BIOS (the BIOS stores the config and the software RAID reads it when it loads with the OS, or something to that effect). You can read more at https://mangolassi.it/topic/6068/what-is-fakeraid.

Also, what are your 4x 4TB drives? Hopefully they are enterprise-grade if you are going to use them with production data. It might also help if they are over 10k RPM. I read somewhere that 7200 RPM is not good for RAID 5, and that it’s better to use RAID 10 if that will be the case.

@alefattorini, it’s a pleasure to extend my help or share my knowledge; it’s a kind of give and take, and I always like the ambiance here. I just haven’t had much time to check and respond lately, but I’m still reading once in a while, especially your monthly email updates.

2 Likes

New one is coming next week :slight_smile:

1 Like

I’m going to have to put in a plug for ZFS here. It’s stable on Linux, it checksums your data to avoid silent data corruption, it rebuilds much faster than traditional RAID (since it knows which blocks are in use), it’s easy to expand volumes (though there is a right and a wrong way to do it), and it works on pretty much every major OS except Windows. Running Neth from a ZFS root is doable, but takes a bit of hacking (see Neth on ZFS root filesystem?). But using ZFS for a storage volume should be very straightforward.
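To illustrate that “right and a wrong way” to expand, here is a sketch (hypothetical pool name “tank” and example device names):

```shell
# Right: grow the pool by adding a whole new redundant vdev
zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg

# Wrong (for a redundant pool): adding a single bare disk stripes it in
# with no parity, so losing that one disk loses the whole pool:
#   zpool add tank /dev/sdh
```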

3 Likes

Sorry if I am a bit slow :thinking:, but would I be setting them to an external DNS like 8.8.8.8?

@vhinzsanchez Thanks for the explanations. The 4TB drives are WD Reds, so NAS grade. Speed ultimately isn’t a concern of mine as it’s a home server. They have proven to be reliable for me over the years, which is more important to me.

If I write about my setup, should I start a new post or include it here?

I’ve never had to use, as you put it, chipset/fakeraid, but I am aware it’s not really hardware RAID. I wasn’t sure whether I should set it at the chipset/BIOS AND in mdadm, or only one or the other. I also wasn’t sure if there might be some alternative that’s more like the Windows software I currently use, called DrivePool. The basic gist of it is: A) it pools any number of drives of various sizes/types into a single drive, B) duplication can be set on a per-folder basis, C) drives are just NTFS, so they can be pulled out of and put into the pool without destroying data.

Point A is my primary objective. B would be nice but I can “settle” for losing 1/4th of my storage space to redundancy. Point C, specifically about drives going in and out without destroying data would be the cherry on top but not really required.

SSM appears to be able to handle point A; I’m unsure about point B. Would it be something like SSM using btrfs?

Do you think it would be viable to use both BackupPC and UrBackup? That is, use UrBackup for the Windows clients and then BackupPC for my Linux client?

Either way my implementation will only be a handful of computers, likely 5 clients tops (if I build a desktop/something else in the future).

I have considered this as a possibility, for my data pool anyway. I would keep the OS on my 120GB SSD with a more standard filesystem.

My concern, however, is my understanding that ZFS requires a lot of RAM and that it’s ideal to use ECC RAM. Is that not the case?

I currently have “only” 8GB RAM (non-ECC) and am not keen to put any more money into this box. I was under the impression that the preference with ZFS was roughly 1GB RAM per 1TB of storage, so I would be about 50% short in that regard if true.

If I am completely incorrect that may be the answer then.

As UrBackup works separately, only between the UrBackup server and the UrBackup client, I think there should be no problem using both on the same machine.
It’s your choice whether you want to run two different systems.

2 Likes

ECC is recommended for any server, or any other application where data integrity is important, but there’s nothing about ZFS that makes it uniquely more important. Similarly, more RAM is always good, but you’d likely be fine with 8 GB.
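By default, ZFS will use spare RAM for its ARC read cache and release it under memory pressure, but on a small box you can cap it explicitly via the `zfs_arc_max` module parameter (the value below, 2 GiB, is just an example for an 8 GB machine):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC cache (value in bytes)
options zfs zfs_arc_max=2147483648
```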

1 Like

https://en.wikipedia.org/wiki/ZFS

Under “Deduplication”

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.

Not being entirely familiar with ZFS, maybe this only applies under circumstances that are not typical?

I’d just hate to invest the time setting everything up only to find out that my choice of storage tech won’t be feasible for me. I am, however, and have been, intrigued by the prospect of ZFS.

Yes, that’s when using deduplication. You don’t typically want to use that, and it isn’t enabled by default.
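If in doubt, you can confirm this on a live pool (assuming the hypothetical pool name “tank” from earlier in the thread):

```shell
# Dedup is "off" unless you explicitly enable it:
zfs get dedup tank
```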

1 Like

Ah ok, Thank you.

So it would be feasible to install NethServer on the 120GB SSD using defaults (for partitions/filesystem) and then use ZFS to pool my 4x 4TB disks into a single drive/partition? If I went with ZFS, I presume I would ignore the RAID from my motherboard’s SATA controller?

Someone correct me if I am wrong but I think at this rate there are 4 possible options I have for pooling my storage:

  • SATA controller RAID 5
  • mdadm
  • SSM/btrfs (if I understand it correctly?)
  • ZFS

To narrow it down, which of the above would be the simplest to implement giving me a pool of storage and the ability to survive a single drive failure (or is there a better, yet unmentioned option)?

Should be.

Correct.

Oh, heavens, no. btrfs is broken by design, and parity RAID especially so.

ZFS probably wouldn’t be the simplest to implement, but it’s well-documented. You’d first need to install ZFS itself (can’t link to instructions here because stupid network filtering at work blocks the zfsonlinux.org site, but my other thread links to the CentOS page there). Once it’s installed, creating the pool is simply:

zpool create -m /path/to/mountpoint tank raidz1 sda sdb sdc

Breaking it down, this command creates a ZFS pool, passing the mountpoint option (so your pool will be mounted there, no need to mess with fstab), with the name of “tank” (apparently the ZFS developers, or documentation engineers, watched The Matrix a little too much, though you can use any name you like). The pool is set to RAIDZ1 or single-parity RAID (comparable to RAID5), and consists of sda, sdb, and sdc. That will run for a few seconds, create and mount your pool, and you’ll be in business.

Now, the FreeNAS folks (who tend to be rather fanatical about protecting their data) would want to point out a few things:

  • They really don’t like RAIDZ1 for disks larger than about 1 TB. Statistically, there’s a high chance of a data error during a pool rebuild, which could result in data corruption. This issue is not at all unique to ZFS; it’s common to any single-parity RAID. ZFS is better-protected than most filesystems, because all data and metadata are checksummed, and all metadata has at least two (and as many as six) copies. Nonetheless, with disk capacities and error rates as they are, the risk is there.
  • To confirm data integrity, and clean up any silent data corruption, you’d want to scrub your pool periodically (every couple of weeks or so). Set up a cron job to run zpool scrub tank to do that.
  • You’d want to set up regular SMART tests and SMART monitoring for your disks; that would be in /etc/smartd.conf.
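As a sketch of those last two points (the file names are the standard ones, but the pool name, device and schedule values are just examples):

```shell
# /etc/cron.d/zfs-scrub -- scrub the pool every Sunday at 03:00
0 3 * * 0 root /usr/sbin/zpool scrub tank

# /etc/smartd.conf -- monitor all attributes and run a long self-test
# every Sunday at 02:00 (repeat the line per disk)
/dev/sda -a -s L/../../7/02
```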
2 Likes

All very helpful info, thanks!

What are your thoughts on the above regarding the ZFS mount point? Would I just directly set the mount as:

/var/lib/nethserver/ibay

As the mount point of the ZFS pool, would it effectively make shares created in the NethServer GUI land in the ZFS pool instead of on the 120GB SSD?

With the caveat that I haven’t tried it, I’d expect it should do exactly that.
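Untested here as well, but as a sketch (reusing the hypothetical pool name “tank”, with example device names):

```shell
# Create the pool mounted directly at NethServer's share directory:
zpool create -m /var/lib/nethserver/ibay tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Or, for an existing pool, relocate its mountpoint:
zfs set mountpoint=/var/lib/nethserver/ibay tank
```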

1 Like

Alright, well ty everyone!

All my data is backed up. When I get home today I am wiping the server, installing and configuring Nethserver and wiping a Win 10 laptop to connect to Nethserver.

I think I am going to start by trying ZFS for my 4x 4TB drives with the OS on the 120GB SSD.

Presuming it all goes according to plan, I will wipe and connect the other Win 10 laptop tomorrow.

I am then going to consider whether or not to AD-join my Manjaro laptop. I presume it should be feasible, as I have been doing, to simply connect it to the desired shares without joining it to the AD/domain? Basically I have the Manjaro laptop set up how I like it and am not too keen on wiping and re-tweaking it if I can avoid it.

I will likely try both BackupPC and UrBackup and either use both or decide which works best in my setup.

I am likely to use ClamAV on the server, as per @m.traeumner’s recommendation, as well as on my Manjaro laptop. For the Windows 10 machines I am considering purchasing a consumer solution from either Bitdefender or McAfee in place of my current business server/endpoint setup using Bitdefender.

I am also excited to poke around in NethServer and find new modules, programs, etc. to get the most out of it.

I would of course be happy to report back with the progress, would it be best to start a new thread for that or to simply continue it in this thread?

Again, thank you all. It’s rare these days to find software with decent documentation. Even rarer is finding software with a friendly, helpful and knowledgeable community. NethServer has nailed both.

1 Like

If you’ll be using something more than ibay shares, I think it would be better to do custom partitioning for /var/lib/nethserver (where NethServer saves most of its data) instead of /var/lib/nethserver/ibay.

Excuse my ignorance on the matter but…

What do you mean by?:

I am not sure exactly what an “ibay” share is and/or what alternatives there are to it.

As for custom partitioning could you maybe elaborate on the details of how I should approach it?

To clarify (and please correct me if it does not fit in with the Nethserver paradigm) I am very much a proponent of separating the OS and Data.

The way I see it, NethServer would be installed on my 120GB SSD, along with anything related to the OS functioning, updates, programs, etc. The 4x 4TB pool would be used as space for data, some (likely most or all) of which would be made available to others via shares across the network.

So if there is a way to elegantly change the default location of the shares to my pool (either during or after install), then I am all for it. However, another member advised me to mount at the default location instead of trying to alter the location of the shares, as I didn’t find any option in the web GUI to specify the location/path of a share.

Thanks