I don’t see how anyone who knows about ZFS would prefer a different filesystem, but maybe that’s just me.
ZFS isn’t the only solution; there’s BTRFS too…
Look at the Rockstar Linux project for a better explanation.
btrfs, by their own account (see the btrfs wiki), isn’t safe with degraded mirrors (which rather defeats the purpose of mirrors) or parity RAID in any configuration. They’ve written their parity RAID code with a write hole, and, inexplicably, parity isn’t checksummed on disk. It’s so bad that Red Hat has dropped it, after being one of its big champions for years. Its big benefit over ZFS is that, if the stars align and you recite the proper incantation (and don’t forget the correct tongue angle), you can add individual disks to expand the capacity of a parity RAID volume, which ZFS doesn’t currently allow. But that’s changing: there’s a new project within OpenZFS to do just that, largely sponsored by iXsystems.
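For reference, that OpenZFS work (RAIDZ expansion) is exposed through `zpool attach` on a raidz vdev. A rough sketch, assuming a build of OpenZFS that ships the feature; the pool name `tank`, the vdev name, and the device path are made-up examples:

```shell
# Grow an existing raidz1 vdev by one disk (requires the RAIDZ expansion feature).
# 'tank' and 'raidz1-0' are example names; check 'zpool status' for the real vdev name.
zpool attach tank raidz1-0 /dev/sde
zpool status tank   # shows the expansion progress while it reflows existing data
```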
ZFS is quite RAM hungry. On systems without much RAM (like most consumer-grade NAS appliances), ZFS is almost impossible to run at all, let alone smoothly.
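That said, on ZFS on Linux the ARC can be capped so it doesn’t eat a small box alive. A minimal sketch, assuming you want to limit the ARC to 2GB (the value is in bytes):

```
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (2 * 1024^3 bytes)
options zfs zfs_arc_max=2147483648
```

After editing, regenerate the initramfs (e.g. `update-initramfs -u` on Debian-based systems) and reboot for it to take effect.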
Those consumer NAS appliances are the first thing home users and small businesses look at.
Granted. It’s therefore not a good choice to retrofit onto a resource-starved Synology or QNAP, even if in many cases it would be theoretically possible. But if you know enough to be considering ZFS, you probably know better than to use a Synology.
Agreed on that. I am running my Proxmox server with ZFS and love it. But then again, the server has 16GB of RAM and enough free slots to add another 16GB. Looking at the logs, it hasn’t hit max RAM yet. I am running several NethServer instances (2 ‘production’ and 2-4 test instances) and a Debian CT with a UniFi controller.
Is the proxmox server actually hosting the ZFS as well?
Doesn’t that make you a bit nervous in case of a server failure of any kind? (Asking because I didn’t dare, and opted for a NAS to handle all that.)
The ZFS pool is created by Proxmox, if that is what you mean? So yes, the Proxmox server is hosting ZFS.
Interesting … so you have 1 server and a bunch of disks in it that runs the whole environment?
It’s installed like this:
I have a 250GB disk where Proxmox is installed, and a 60GB SSD that is not yet active but will be used as a caching disk as soon as I run into memory problems. The VMs are on a ZFS pool of two 2TB disks, which I created in Proxmox.
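For anyone wanting to reproduce a setup like this by hand, a minimal sketch; the device paths, the pool name `tank`, and the storage ID are assumptions (the Proxmox GUI can do the same):

```shell
# Create a mirrored pool from the two 2TB disks (device names are examples).
zpool create tank mirror /dev/sdc /dev/sdd
# Register the pool as Proxmox storage for VM disks and containers
# ('zfs-vms' is an arbitrary storage ID).
pvesm add zfspool zfs-vms --pool tank --content images,rootdir
```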
So that’s 4 SATA disks?
Port 1: 250GB, Proxmox (EXT4, I suppose)
Port 2: 60GB SSD (future cache for ZFS)
Port 3: 2TB (pool member, ZFS)
Port 4: 2TB (pool member, ZFS)
(port numbers are just examples, of course)
Am I right?
Why assume this? Proxmox is perfectly capable of installing on a ZFS root.
Just a question; I don’t know, so I’m asking.
pike guessed right. Proxmox is on the default FS.
By default, Proxmox creates its RAID on ZFS if you don’t have a dedicated hardware RAID card. If you want mdadm Linux software RAID, you have to install Debian first and then convert it to Proxmox afterwards.
My first installation was on ZFS… when I crashed the RAID, I reinstalled the server with mdadm.
ZFS is over for me… probably because I don’t know how to fix the RAID… I’m honest.
fwiw, i’m rapidly falling in love with nethserver because of conversations like this. i love zfs under proxmox (i use zfs as my proxmox test host’s root fs, with a small ssd partitioned into slog and l2arc, and two 1tb drives in a mirror), running nethserver test instances on it. go nethserver!
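To make the SLOG/L2ARC split concrete: assuming an SSD partitioned into a small `/dev/sdb1` for the log and the rest as `/dev/sdb2` for cache (pool and partition names are made-up examples), adding them to an existing pool looks roughly like:

```shell
zpool add tank log /dev/sdb1     # separate intent log (SLOG) for sync writes
zpool add tank cache /dev/sdb2   # L2ARC read cache
zpool status tank                # 'logs' and 'cache' sections should now be listed
```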
I like this article about ZFS
I was a fan of ReiserFS and moved my allegiance to XFS; now I’m testing Btrfs, even though people say it’s easy to corrupt.
It isn’t just random people, it’s their own documentation.
I use ZFS on my home NAS machines. They are Xeon E3s with 32GB of RAM and run OmniOS (an OpenSolaris derivative). ZFS uses all the RAM a machine has to provide the ARC (read-ahead cache). You can’t expand a striped ZFS array by adding additional disks and rebuilding the striped set like you can with mdadm, but you can add additional disks and ZFS will automatically write to them. That does mean you’ll lose data redundancy, though. You can expand ZFS pools by swapping all the disks for bigger ones and then expanding the array. Please have a look at napp-it as a web-based ZFS front end; it supports Linux (Ubuntu) and OmniOS.
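The swap-all-disks-for-bigger-ones route sketches out like this; pool and device names are assumptions, and each resilver must finish before the next replace starts:

```shell
zpool set autoexpand=on tank          # let the pool grow once all disks are bigger
zpool replace tank /dev/sdc /dev/sde  # swap in a larger disk, then wait for resilver
zpool replace tank /dev/sdd /dev/sdf  # repeat for each remaining disk in the vdev
# If autoexpand was off during the swaps, expand manually:
# zpool online -e tank /dev/sde /dev/sdf
```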
edit: some additional reading
I would like to withdraw what I said above. Indeed, my Proxmox is now running on Linux RAID with LVM, but now I miss something important in Proxmox: replication.
With two Proxmox instances (two nodes) you can replicate a virtual machine to the remote node, by default every 15 minutes, but this only works with ZFS storage.
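A replication job like that can also be set up from the CLI with `pvesr`; a sketch, assuming VM 100 replicating to a node named `pve2` (the job ID and node name are made up):

```shell
pvesr create-local-job 100-0 pve2 --schedule '*/15'   # replicate VM 100 every 15 minutes
pvesr status                                          # check replication state
```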