ZFS and other FS's as Filesystem


(Dan) #1

I don’t see how anyone who knows about ZFS would prefer a different filesystem, but maybe that’s just me.


Recommend a NAS
#2

ZFS isn’t the only solution, there’s BTRFS too…

Look at the Rockstar Linux project for a better explanation :wink:


(Dan) #3

btrfs, by their own account (see the btrfs wiki), isn’t safe with degraded mirrors (which kind of defeats the purpose of mirrors) or parity RAID in any configuration. They’ve written their parity RAID code with a write hole. Inexplicably, parity isn’t checksummed on disk. It’s so bad that Red Hat has even dropped it, after being one of its big champions for years. Its big benefit over ZFS is that, if the stars align and you recite the proper incantation (and don’t forget the correct tongue angle), you can add individual disks to expand the capacity of a parity RAID volume, which ZFS doesn’t currently allow. But that’s changing: there’s a new project within OpenZFS to do just that, largely sponsored by iXsystems.
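For the curious, that contrast shows up directly in the `zpool` commands. A sketch, with placeholder pool and device names; the second command's expansion behaviour is my reading of how the in-progress OpenZFS feature is expected to surface, not something current releases support:

```shell
# Growing a ZFS pool today means adding a whole new vdev at once...
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

# ...while attaching a single disk to an existing raidz vdev is rejected:
zpool attach tank raidz1-0 /dev/sdg

# The OpenZFS raidz-expansion work aims to make that second command
# legal, widening the raidz vdev one disk at a time.
```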


(Rob Bosch) #4

ZFS is quite RAM-hungry. On systems that don’t have much RAM (like most consumer-grade NAS appliances), ZFS is almost impossible to run, let alone run smoothly.
These consumer NAS appliances are the first thing home users and small businesses look at.
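That said, on Linux the ARC can at least be capped so ZFS doesn’t grab most of the RAM. A minimal sketch, assuming OpenZFS on Linux; the 4 GiB value is just an example to tune for your box:

```shell
# Persist a 4 GiB ARC cap (value in bytes) across reboots
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf

# Apply it immediately on a running system via the module parameter
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```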


(Dan) #5

Granted. It’s therefore not a good choice to retrofit onto a resource-starved Synology or QNAP, even if in many cases it would be theoretically possible. But if you know enough to be considering ZFS, you probably know better than to use a Synology.


(Rob Bosch) #6

Agreed on that. I am running my Proxmox server on ZFS and love it. But then again, the server has 16GB of RAM and enough free slots to add another 16GB. Looking at the logs, it hasn’t hit max RAM yet. I am running several NS instances (2 ‘production’ and 2-4 test instances) and a Debian CT with a Unify controller.


(Jeroen Visser) #7

Is the Proxmox server actually hosting the ZFS as well?
Doesn’t that make you a bit nervous in case of a server failure of any kind? (Just asking; I didn’t dare, and opted for a NAS to handle all that.)


(Rob Bosch) #8

The ZFS pool is created by Proxmox, if that’s what you mean. So yes, the Proxmox server is hosting ZFS.


(Jeroen Visser) #9

Interesting … so you have 1 server and a bunch of disks in it that runs the whole environment?


(Rob Bosch) #10

It’s set up like this:
I have a 250GB disk where Proxmox is installed. I have a 60GB SSD that is not yet active, but as soon as I run into memory problems it will be used as a caching disk.
The VMs are on a ZFS pool containing two 2TB disks. I created this pool in Proxmox.
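For anyone wanting to reproduce that pool by hand, a two-disk ZFS mirror is roughly the following. The pool name and disk paths are just examples; in practice `/dev/disk/by-id/` paths are safer than `/dev/sdX`:

```shell
# Create a mirrored pool named "tank" from two whole disks
# (-o ashift=12 aligns writes to 4K sectors, common on modern drives)
zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd

# Verify both disks show up under the mirror vdev
zpool status tank
```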


(Michael Kicks) #11

So, 4 SATA disks?
Port 1 250GB Proxmox (EXT4, I suppose)
Port 2 60GB SSD (future cache for ZFS)
Port 3 2TB (pool member, ZFS)
Port 4 2TB (pool member, ZFS)

(The port numbers are just examples, of course.)
Am I right?


(Dan) #12

Why assume this? Proxmox is perfectly capable of installing on a ZFS root.


(Michael Kicks) #13

Just a question :slight_smile: I don’t know, so I’m asking.


(Rob Bosch) #14

pike guessed right. Proxmox is on the default FS.


(Stéphane de Labrusse) #15

Proxmox creates its RAID filesystem on ZFS by default if you don’t have a dedicated hardware RAID card. If you want mdadm Linux software RAID, you have to install Debian first, then switch to Proxmox afterwards.

My first installation was on ZFS… when I crashed the RAID, I reinstalled the server with mdadm.

ZFS is over for me… probably because I don’t know how to fix the RAID… I’m honest.
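For the record, fixing a degraded ZFS mirror is usually a one-command affair. A sketch, assuming a pool called `rpool` and placeholder device names:

```shell
# Find which device is FAULTED or UNAVAIL
zpool status rpool

# Replace the dead disk with the new one; ZFS resilvers automatically
zpool replace rpool /dev/sdb /dev/sdc

# Watch resilver progress until the pool reports ONLINE again
zpool status -v rpool
```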


(micah roth) #16

fwiw, i’m rapidly falling in love with nethserver because of conversations like this. i love zfs under proxmox: i use zfs as my proxmox test host’s root fs, with a small ssd partitioned into slog and l2arc, and two 1tb drives in a mirror. running nethserver test instances on proxmox. go nethserver!
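The SSD split micah describes would look something like this; pool name and partition paths are placeholders:

```shell
# Add the first SSD partition as a SLOG (separate intent log);
# a few GB is plenty, it only absorbs synchronous writes
zpool add tank log /dev/sda1

# Add the second partition as L2ARC (second-level read cache)
zpool add tank cache /dev/sda2
```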


(Jonathan Dumont) #17

I like this article about ZFS.
I was a fan of ReiserFS and moved my allegiance to XFS;
now I’m testing Btrfs, even though people say it’s easy to corrupt.


(Dan) #18

It isn’t just random people saying it, it’s their own documentation.


(bob) #19

I use ZFS on my home NAS machines. They are Xeon E3s with 32GB of RAM and run OmniOS (an OpenSolaris derivative). ZFS uses all the RAM a machine has spare to provide the ARC (adaptive replacement cache). You can’t expand a striped ZFS array by adding additional disks and rebuilding the striped set like you can with mdadm, but you can add additional disks and ZFS will automatically write to them. That does mean you’ll lose data redundancy, though. You can expand ZFS pools by swapping all the disks for bigger ones and then expanding the array. Please have a look at napp-it as a web-based ZFS front end; it supports Linux (Ubuntu) and OmniOS.
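The disk-swap expansion described above can be sketched like this; names are placeholders, and the replace step is repeated for every disk in the vdev, waiting for each resilver to finish:

```shell
# Let the pool grow automatically once every disk has been upsized
zpool set autoexpand=on tank

# Swap one disk for a larger replacement and let it resilver
zpool replace tank /dev/sda /dev/sdc

# If autoexpand was off, claim the extra space explicitly
zpool online -e tank /dev/sdc
```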
https://napp-it.org/
Bob

edit: some additional reading
https://www.freebsd.org/doc/handbook/zfs-zpool.html