
+++
title = "A Quick ZFS Overview on Linux"
author = ["Elia el Lazkani"]
date = 2020-01-27
lastmod = 2020-01-27
tags = ["zfs", "file-system"]
categories = ["misc"]
draft = false
+++

I have, for years, been interested in file systems, specifically in a file system to run my personal systems on. For most people, Ext4 is good enough, and that is totally fine. But, as a power user, I like to have more control, more features, and more options out of my file system.

I have played with most of the file systems available on Linux and have been using Btrfs for a few years now. I have worked with NAS systems running on ZFS and have been very impressed by it. The only problem was that ZFS wasn't well supported on Linux at the time. Btrfs promised to be the native ZFS replacement for Linux, especially since it was backed by giants like Oracle and RedHat. My decision at that point was made, and yes, that was before RedHat's support for XFS, which is impressive on its own. Recently, though, a new project gave everyone hope: OpenZFS came to life, and so did ZFS on Linux.

Linux has had ZFS support for a while now, but mostly to manage an existing ZFS file system, so I kept watching until I saw a blog post by Ubuntu entitled _Enhancing our ZFS support on Ubuntu 19.10 -- an introduction_.

In the blog post above, I read the following:

> We want to support ZFS on root as an experimental installer option, initially for desktop, but keeping the layout extensible for server later on. The desktop will be the first beneficiary in Ubuntu 19.10. Note the use of the term 'experimental' though!

My eyes widened at this point. I know that Ubuntu has had native ZFS support since 2016, but now I could install it with one click. At that point, I was all in, and I went back to Ubuntu.

## Ubuntu on root ZFS

You heard me right: the Ubuntu installer offers an 'experimental' install on ZFS. I made the decision based on the well-tested stability of ZFS in production environments and the flexibility it offers, including the ability to back up and recover my data easily. In other words, if Ubuntu doesn't work out, ZFS is still there and I can install whatever I like on top of it. If you are familiar with ZFS, you know exactly what I mean, and I have barely scratched the surface of its capabilities.

So here I was, with Ubuntu installed on my laptop on root ZFS, and I had to take a look.

    # zpool status -v
      pool: bpool
     state: ONLINE
    status: The pool is formatted using a legacy on-disk format.  The pool can
            still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
            pool will no longer be accessible on software that does not support
            feature flags.
      scan: none requested
    config:

            NAME         STATE     READ WRITE CKSUM
            bpool        ONLINE       0     0     0
              nvme0n1p4  ONLINE       0     0     0

    errors: No known data errors

      pool: rpool
     state: ONLINE
      scan: none requested
    config:

            NAME         STATE     READ WRITE CKSUM
            rpool        ONLINE       0     0     0
              nvme0n1p5  ONLINE       0     0     0

    errors: No known data errors

> **Note:** I have read somewhere in a blog about Ubuntu that I should not run an upgrade on the boot pool.
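
Out of curiosity, here is a quick sketch of my own (not something the installer or that blog suggests) for checking on this without changing anything. Run with no arguments, `zpool upgrade` only reports pools that are formatted with a legacy version or are missing feature flags, and `zpool get` shows the feature flags themselves.

    # Report pools on a legacy format or missing feature flags; with no
    # arguments this only displays information and changes nothing.
    zpool upgrade

    # Show the individual feature flags (feature@...) on the boot pool.
    zpool get all bpool | grep feature@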

and it's running on...

    # uname -s -v -i -o
    Linux #28-Ubuntu SMP Wed Dec 18 05:37:46 UTC 2019 x86_64 GNU/Linux

Well, that was pretty easy.

## ZFS Pools

Let's take a look at how the installer has configured the pools.

    # zpool list
    NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    bpool  1,88G   158M  1,72G        -         -      -     8%  1.00x    ONLINE  -
    rpool   472G  7,91G   464G        -         -     0%     1%  1.00x    ONLINE  -

So it creates a boot pool and a root pool. Maybe looking at the datasets would give us a better idea.

## ZFS Datasets

Let's look at the sanitized version of the datasets.

    # zfs list
    NAME                                               USED  AVAIL     REFER  MOUNTPOINT
    bpool                                              158M  1,60G      176K  /boot
    bpool/BOOT                                         157M  1,60G      176K  none
    bpool/BOOT/ubuntu_xxxxxx                           157M  1,60G      157M  /boot
    rpool                                             7,92G   449G       96K  /
    rpool/ROOT                                        4,53G   449G       96K  none
    rpool/ROOT/ubuntu_xxxxxx                          4,53G   449G     3,37G  /
    rpool/ROOT/ubuntu_xxxxxx/srv                        96K   449G       96K  /srv
    rpool/ROOT/ubuntu_xxxxxx/usr                       208K   449G       96K  /usr
    rpool/ROOT/ubuntu_xxxxxx/usr/local                 112K   449G      112K  /usr/local
    rpool/ROOT/ubuntu_xxxxxx/var                      1,16G   449G       96K  /var
    rpool/ROOT/ubuntu_xxxxxx/var/games                  96K   449G       96K  /var/games
    rpool/ROOT/ubuntu_xxxxxx/var/lib                  1,15G   449G     1,04G  /var/lib
    rpool/ROOT/ubuntu_xxxxxx/var/lib/AccountServices    96K   449G       96K  /var/lib/AccountServices
    rpool/ROOT/ubuntu_xxxxxx/var/lib/NetworkManager    152K   449G      152K  /var/lib/NetworkManager
    rpool/ROOT/ubuntu_xxxxxx/var/lib/apt              75,2M   449G     75,2M  /var/lib/apt
    rpool/ROOT/ubuntu_xxxxxx/var/lib/dpkg             36,5M   449G     36,5M  /var/lib/dpkg
    rpool/ROOT/ubuntu_xxxxxx/var/log                  11,0M   449G     11,0M  /var/log
    rpool/ROOT/ubuntu_xxxxxx/var/mail                   96K   449G       96K  /var/mail
    rpool/ROOT/ubuntu_xxxxxx/var/snap                  128K   449G      128K  /var/snap
    rpool/ROOT/ubuntu_xxxxxx/var/spool                 112K   449G      112K  /var/spool
    rpool/ROOT/ubuntu_xxxxxx/var/www                    96K   449G       96K  /var/www
    rpool/USERDATA                                    3,38G   449G       96K  /
    rpool/USERDATA/user_yyyyyy                        3,37G   449G     3,37G  /home/user
    rpool/USERDATA/root_yyyyyy                        7,52M   449G     7,52M  /root

> **Note:** The installer created some IDs and I have not figured out whether they are totally random or mapped to something, so I have sanitized them. I also sanitized the username, of course. ;)

It looks like the installer created a bunch of datasets with their respective mountpoints.
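
Because `/home`, `/var/log`, and friends each live on their own dataset, they can be snapshotted and rolled back independently. As a quick sketch of the easy backup and recovery I mentioned earlier (the snapshot name is just an example of mine, and `yyyyyy` stands for the sanitized ID from the listing above):

    # Take a snapshot of the home dataset; the part after '@' is an
    # arbitrary snapshot name.
    zfs snapshot rpool/USERDATA/user_yyyyyy@before-experiment

    # List all snapshots to confirm it is there.
    zfs list -t snapshot

    # Roll the dataset back to that snapshot if things go sideways.
    zfs rollback rpool/USERDATA/user_yyyyyy@before-experiment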

## ZFS Properties

ZFS has a long list of features, and they are tunable in different ways; one of those ways is through properties. Let's have a look.

    # zfs get all rpool
    NAME   PROPERTY              VALUE                 SOURCE
    rpool  type                  filesystem            -
    rpool  creation              vr jan 24 23:04 2020  -
    rpool  used                  7,91G                 -
    rpool  available             449G                  -
    rpool  referenced            96K                   -
    rpool  compressratio         1.43x                 -
    rpool  mounted               no                    -
    rpool  quota                 none                  default
    rpool  reservation           none                  default
    rpool  recordsize            128K                  default
    rpool  mountpoint            /                     local
    ...

This gives us an idea of the properties set on the specified dataset, in this case the rpool root dataset.
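
Individual properties can also be queried and set per dataset, and child datasets inherit the value unless they override it. Here is a tiny sketch with example values only, since I am not actually changing anything on this install:

    # Query a single property instead of dumping all of them.
    zfs get compression rpool

    # Set a property on a dataset; descendants inherit it unless they
    # override it locally. (Example value only.)
    zfs set compression=lz4 rpool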

## Conclusion

I read in a blog post that the Ubuntu team responsible for the ZFS support followed all the ZFS best practices in the installer. I have no way of verifying that, as I am not a ZFS expert, but I'll be happy to take their word for it until I learn more. What is certain for now is that I am running on ZFS, and I will be enjoying its features to the fullest.