Replacing Win Server 2016 with Nethserver?

Welcome back Vhinz, happy to see you around again :slight_smile: Thanks for your always wise advice.
Hi @Zer0Cool you’re welcome too :slight_smile:

2 Likes

Thanks all for the input. I am hoping to be able to focus on this this weekend, possibly even on my physical server. Hoping I can make the time and that the 8TB external HDD I ordered arrives before the weekend to back up to.

At that point I can better reply to all your input.

In brief, I did install the DNS role. However, I wasn't sure and didn't find any documentation detailing how to make the server the DNS server for my network instead of using something like Google's DNS (8.8.8.8).

I think I had DHCP installed but not active/set up.

I think having both the NethServer and the Win10 client in VMs added a layer of networking complexity I never got past.

I figure if I can back up my data then I can just use physical machines instead of wasting time troubleshooting VMs that are only for testing.

I currently have a cron job on my Manjaro machine running a script I wrote that rsyncs backups to my server, so I could likely keep using that. I was just hoping there might be an easy, cross-platform client backup solution to simplify things. Using folder redirection/roaming profiles may be an option too.
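For reference, this is roughly the shape of the cron + rsync setup I mean (the user, host and paths below are just placeholders, not my actual script):

```bash
#!/usr/bin/env bash
# backup-home.sh -- push local data to the server over SSH with rsync.
# "backupuser", "nethserver.lan" and the paths are hypothetical examples.
SRC="$HOME/"
DEST="backupuser@nethserver.lan:/var/backups/manjaro/"

# -a preserves permissions/timestamps, --delete mirrors deletions,
# --exclude skips cache data that doesn't need backing up.
rsync -a --delete --exclude='.cache/' "$SRC" "$DEST"
```

It runs nightly from my crontab with something like `0 2 * * * /home/me/backup-home.sh`.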

Also, is it typical for people to use antivirus on NethServer? I don't mind a paid solution if it's a good investment for security. I currently use Bitdefender's business server/endpoint product, and ClamAV on Manjaro. Is it safe to assume any product compatible with Red Hat/CentOS is compatible with NethServer?

Thanks

For client backups you can find BackupPC and UrBackup modules for NethServer.

2 Likes

There is antivirus for some modules like mail and proxy, but not for file sharing; for that you can have a ClamAV scanner that runs once a day. Please read the following threads: the first describes why it's better not to do a permanent (on-access) scan, and the second is about a module to configure scanning once a day, once a week and so on.
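If you prefer to set it up by hand instead of using the module, a scheduled scan is basically just clamscan in cron. A rough sketch (the share path is only an assumption, adjust it to your shared folders):

```bash
# /etc/cron.d/clamscan-shares -- example only, the ibay path is an assumption.
# Scan the shared folders every night at 03:00 and log only infected files.
0 3 * * * root /usr/bin/clamscan -ri /var/lib/nethserver/ibay/ >> /var/log/clamscan-shares.log 2>&1
```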

1 Like

Do you have a preference as to which is easier to use/most reliable?

@m.traeumner I'll check that info out. I guess I can stick to ClamAV on the server and Manjaro and get retail/client licenses for AV on my Windows 10 machines instead of endpoint protection.

As far as storage goes, I am just using the onboard SATA controller of my ASRock Z87e-ITX board. The ASRock site says it's RAID 5 capable, but I suspect that may only apply to software RAID in Windows.

If RAID 5 isn't an option at the controller, is there any software solution that will let me pool my 4x 4TB drives and either selectively implement parity/duplication (per folder) or imitate RAID 5 (or provide better space efficiency)?

In other words, under RAID 5 my 16TB would actually be 12TB usable. If RAID 5 isn't an option, is there any solution that provides at least the same or better usable space while still giving me the parity/duplication needed to survive a single drive failure?

Note: I know I am bombarding you guys with a ton of “newb” questions. It's not that I refuse to read up and educate myself; I fully plan to, it will just take me time to get up to speed. I do, however, find the input of experienced users invaluable and often more helpful than generic documentation.

Thank you for your help and patience.

Not really. I never tried BackupPC, and haven't used urBackup in ages (only on Windows).
I seem to recall you had to fine-tune urBackup's network settings, as the first backups took a long time and a lot of network resources (depending on the type of backup). It uses a client-server backup model, whereas BackupPC doesn't need anything installed on the client, if I'm not wrong.

I think @Hunv, @flatspin and @stephdl can give more reliable opinions. Also take a look at:

1 Like

urBackup is client-software based and really focused on the Windows world, even though you can back up Linux clients too.
BackupPC is cool: no additional software is needed on the clients to back them up, except if you want to use rsync on Windows.

I prefer BackupPC; I'm waiting for the v' on EPEL to be released, but I actually already have it running on my server.

Thanks, but I'm not sure what you mean there. After reading some of the prior info about both options, I think I am leaning towards your recommendation of BackupPC.

I am currently prepping to install NethServer this weekend by backing up all my data. I'm going to wipe the server and install NethServer, then wipe one of my Windows 10 machines, fresh install Windows 10, configure it all and get it working. Once all is good I'll wipe/clean install Win 10 on my other laptop and then approach setting up my Manjaro laptop as well.

I'll likely spend Saturday morning into the afternoon reading NethServer documentation/forums.

I could still use some clarification on exactly how I make NethServer act as the DNS server for the other machines on the network. Does being the DNS server require being the DHCP server as well?

Is there a place where I set NethServer to be the DNS server, or do I set it to forward to something like 8.8.8.8 and then point the clients at my NethServer IP for DNS (and add the entries in NethServer's DNS page under Hosts)?

Thanks

I am also realizing I am not quite sure how I should approach my storage either.

The 120GB SSD as the OS drive is easy enough, but I am unsure how I should handle my 4x 4TB data drives, i.e. whether I should do RAID 5 via the SATA controller, some sort of software RAID from within NethServer (I have heard of mdadm but never used it), or an alternative program to pool the drives that offers some form of parity.

Any help would be great, thanks. Googling around I have found mention of SSM, which seems involved, but if it's the way to go I'll do it. I haven't found much else on the matter.

As I was mentioned by @dnutan, here are my few words:

I've been using urBackup for about 1.5 years now for about 12 Windows clients in a small network. It works stably and is easy to handle. The installation is still on NS 6.9, so it's urBackup 1.4.14. I tried BackupPC with the Windows clients and rsync, but was not so happy. urBackup really is Windows-focused, though: you have to install the client software on every machine you want to back up. For 15 clients that's not so hard; for many more it would be a lot of work. But with NS 7 and AD it should be possible to deploy it by GPO. Client configuration is done in the GUI on the server. Restoring folders or single files is easy. So all in all I'm happy with it. But I'm a lazy guy, so I didn't try too many different solutions… :wink:

mdadm is software RAID, as you mentioned, and offers RAID 0, 1, 5, 6 and 10, so it gives you parity and failure tolerance. I use it on this installation with 4 smaller disks, all active, no spare. It's stable, but won't give you the performance of hardware RAID. mdadm is easy to handle IMO and well documented; I found a lot of howtos and have been able to solve all my problems so far with howtos from the internet.
Here is a little tool to calculate RAID levels for storage and failure tolerance:
http://www.raid-calculator.com/default.aspx
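Just to give you an idea, creating a 4-disk RAID 5 with mdadm could look roughly like this; the device names and mount point are assumptions, check yours with lsblk first:

```bash
# Identify the data disks first (the names below are examples only!)
lsblk

# Create a RAID 5 array from the four 4TB disks -- this wipes their contents.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it (XFS is the CentOS 7 default).
mkfs.xfs /dev/md0
mkdir -p /mnt/data
mount /dev/md0 /mnt/data

# Save the array definition and watch the initial sync finish.
mdadm --detail --scan >> /etc/mdadm.conf
cat /proc/mdstat
```

Don't forget an /etc/fstab entry if you want it mounted at boot.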

2 Likes

You can install the DNS role from the Software Center; your clients should then have NethServer as their DNS server. On the NethServer configuration page go to Network, then open the DNS servers tab and set the entries for the external primary and secondary DNS.
You don't need to use DHCP for this.
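On a Linux client that just means pointing the resolver at the server. For example with NetworkManager (192.168.1.2 is only a placeholder for your NethServer IP, and the connection name will differ):

```bash
# "Wired connection 1" and 192.168.1.2 are examples -- use your own values.
nmcli connection modify "Wired connection 1" ipv4.dns "192.168.1.2" ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"

# Verify that the NethServer is the one answering.
nslookup somehost.yourdomain.lan
```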

2 Likes

Great! Looking forward to reading next week about how your weekend went. Experience is a great teacher.

DNS and DHCP are two different things (as you may already know), but they can be, and usually are, easily integrated, especially in the MS world. As the DNS server, it holds all records for the internal network, forwards external queries (https://www.petri.com/best-practices-for-dns-forwarding) and caches answers for a certain amount of time.
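You can see the forwarding and caching behavior from any client with dig, for example (192.168.1.2 stands in for your NethServer IP):

```bash
# First query for an external name is forwarded upstream...
dig @192.168.1.2 forum.nethserver.org
# ...repeating it is answered from the server's cache (look at "Query time").
dig @192.168.1.2 forum.nethserver.org
# Internal records are answered directly from the server's own host entries.
dig @192.168.1.2 someclient.yourdomain.lan
```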

Be careful about which RAID you are going to use. If you mean hardware RAID, that usually is an additional card or module; it is, as they say, the most reliable option. mdadm is the most popular software RAID (actually, I don't know any other…well, aside from Windows Storage Spaces, if you consider that software RAID). From what I've read and in my experience, it is reliable.

Also be careful with the RAID that motherboards claim to include. Some call it fakeraid or chipset RAID. It may look like hardware RAID since it is integrated directly into the motherboard, but in reality it is software RAID handled through the BIOS (the BIOS stores the config and a software RAID driver reads it when it loads with the OS…or something to that extent). You can read more at https://mangolassi.it/topic/6068/what-is-fakeraid.

Also, what are your 4x 4TB drives? Hopefully they are enterprise-grade if you are going to use them with production data. It might also help if they are 10k RPM or faster. I read somewhere that 7200 RPM drives are not great for RAID 5; better to use RAID 10 if that will be the case.

@alefattorini, it's a pleasure to extend my help or share my knowledge; it's a kind of give and take. I always like the ambiance here. I've just lacked the time to check and respond, but I'm still reading once in a while, especially your monthly email updates.

2 Likes

New one is coming next week :slight_smile:

1 Like

I’m going to have to put in a plug for ZFS here. It’s stable on Linux, it checksums your data to avoid silent data corruption, it rebuilds much faster than traditional RAID (since it knows which blocks are in use), it’s easy to expand volumes (though there is a right and a wrong way to do it), and it works on pretty much every major OS except Windows. Running Neth from a ZFS root is doable, but takes a bit of hacking (see Neth on ZFS root filesystem?). But using ZFS for a storage volume should be very straightforward.
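As a rough sketch, a RAID 5-like pool on your four 4TB disks could look like this (the device names and pool name are examples; on real hardware you'd normally use /dev/disk/by-id paths):

```bash
# Create a raidz1 pool (single parity, RAID 5-like) from four disks.
# /dev/sd[b-e] and "tank" are placeholders -- identify your disks with lsblk first.
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve out a dataset for the shared data and enable cheap compression.
zfs create -o compression=lz4 tank/data

# Check pool health and usable capacity (~12TB of the raw 16TB).
zpool status tank
zfs list tank
```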

3 Likes

Sorry if I am a bit slow :thinking:, but would I be setting them to an external DNS like 8.8.8.8?

@vhinzsanchez Thanks for the explanations. The 4TB drives are WD Reds, so NAS grade. Speed ultimately isn't a concern of mine as it's a home server. They have proven reliable for me over the years, which is more important to me.

If I write about my setup, should I start a new post or include it here?

I've never had to use, as you put it, chipset/fakeraid, but I am aware it's not really hardware RAID. I wasn't sure whether I would set it up at the chipset/BIOS level AND in mdadm, or only one or the other. I also wasn't sure if there may be some sort of alternative that's more like the Windows software I currently use, called DrivePool. The basic gist of it is: A) pool any number of drives of various sizes/types into a single drive, B) duplication can be set on a per-folder basis, C) drives stay plain NTFS, so they can be pulled out of and put into the pool without destroying data.

Point A is my primary objective. B would be nice, but I can “settle” for losing a quarter of my storage space to redundancy. Point C, specifically drives going in and out without destroying data, would be the cherry on top but isn't really required.

SSM appears to be able to handle point A; I'm unsure about point B. Would it be something like SSM using Btrfs?
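From what I've read so far (so this may well be wrong), pooling the drives directly with Btrfs would look something like the sketch below; I've also seen warnings that Btrfs's own RAID 5/6 mode has long-standing stability caveats, which is part of why I'm asking:

```bash
# My (possibly incorrect) understanding of Btrfs pooling -- device names are examples.
# Single-parity data with mirrored metadata across the four disks:
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkdir -p /mnt/pool
mount /dev/sdb /mnt/pool     # mounting any member device brings up the whole pool

# Disks can later be added to (or removed from) the pool:
btrfs device add /dev/sdf /mnt/pool
btrfs balance start /mnt/pool
```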

Do you think it would be viable to use both BackupPC and urBackup? I.e. use urBackup for the Windows clients and then BackupPC for my Linux client?

Either way my implementation will only be a handful of computers, likely 5 clients tops (if I build a desktop/something else in the future).

I have considered this as a possibility, for my data pool anyway. I would keep the OS on my 120GB SSD with a more standard filesystem.

My problem, and my understanding, however, is that ZFS requires a lot of RAM and that it's ideal to use ECC RAM. Is that not the case?

I currently have “only” 8GB of RAM (non-ECC) and am not keen to put any more money into this box. I was under the impression that the guidance with ZFS was roughly 1GB of RAM per 1TB of storage, so if true I would be about 50% short in that regard.

If I am completely incorrect that may be the answer then.

As urBackup works separately, only between the urBackup server and the urBackup clients, I think there should be no problem using both on the same machine.
It's your choice if you want to maintain 2 different systems.

2 Likes

ECC is recommended for any server, or any other application where data integrity is important, but there’s nothing about ZFS that makes it uniquely more important. Similarly, more RAM is always good, but you’d likely be fine with 8 GB.

1 Like

https://en.wikipedia.org/wiki/ZFS

Under “Deduplication”

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage.

Not being entirely familiar with ZFS, maybe this only applies under circumstances that are not typical?

I'd just hate to invest the time setting everything up only to find out that my choice of storage tech won't be feasible for me. I am, however, and have been, intrigued by the prospect of ZFS.
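From what I've been reading since (so take this with a grain of salt), deduplication is off by default and the large per-TB RAM figures mostly apply when dedup is enabled, and the cache can be capped if RAM gets tight. For example, assuming a pool named tank:

```bash
# Dedup is off unless explicitly enabled, so the 1-5 GB/TB figure shouldn't apply:
zfs get dedup tank

# If 8GB turns out to be tight, the ARC cache can be capped, e.g. at 4GiB:
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
```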