Installing FreeBSD 9.0 on ZFS

There are quite a few guides around the Internet on how to install FreeBSD onto a pure ZFS system. I don't claim that this recipe is anything revolutionary, but it does cover some new material specifically to do with the changes necessary for installing using the FreeBSD 9.0 (or higher) Release media. With version 9.0 FreeBSD made the long-delayed jump away from the venerable sysinstall to the new bsdinstall. Neither of those two installers is capable of setting up a ZFS system from scratch, so you will have to go through a manual setup procedure. This article describes how to do the job using the 9.0 install media.

References

This recipe is heavily based on the following articles, with some original variations:

Preparatory Bits

This procedure was developed using a 64-bit FreeBSD VM under VirtualBox 4.1.8 on a Mac OS X laptop. However, there is nothing system-specific here, and the same routine should work on just about any system — either virtual or physical.

Boot from the 9.0-RELEASE install media. When prompted, select "Live CD" rather than the system installer or shell. Log in as 'root' — there's no root password on the LiveCD image.

One great advantage of the new-style installer is that the LiveCD just uses the standard layout for a FreeBSD system. One slight disadvantage is that (depending on what sort of install media you are using) this will be read-only. However, there are read-write memory filesystems (MFS) mounted as /tmp and /var.

Disk Partitioning

I'm only using a single drive, which appears in my VM as da0. Whatever configuration you intend to build, you should apply the same partitioning layout to all the drives in your zpool. The other configuration choice to make here is how much swap space to allocate. I've chosen a raw partition for swap for two reasons: you need a raw partition in order for crash dumps to work, and ZFS can deadlock under memory pressure if you create swap areas under ZFS.

# gpart create -s gpt da0
# gpart add -s 64k -t freebsd-boot da0
# gpart add -s 2G -t freebsd-swap -l swap0 da0
# gpart add -t freebsd-zfs -l disk0 da0

Mark all the disks as bootable:

# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

Create the Zpool

As this is a livecd environment, the root directory is read-only, so we can't mount ZFS partitions there. Neither can we create /boot/zfs/zpool.cache. Therefore, use the altroot capability of ZFS (via the -R flag to zpool) to rebase everything to beneath /tmp/zroot. zroot is the arbitrarily chosen name of the zpool we're going to create: you can substitute anything you fancy. We will just do without a zpool.cache initially — it only becomes important once we want to reboot.

# zpool create -R /tmp/zroot zroot /dev/gpt/disk0

Modify this command as appropriate to specify whatever vdev layout you prefer.
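For instance, a two-way mirror would look like the following sketch. This assumes a second drive (da1 here, which is illustrative) has already been partitioned identically to da0, given the GPT label disk1, and had the bootcode written to it:

```shell
# Sketch only: da1/disk1 are assumed names, not part of the original recipe.
# gpart create -s gpt da1
# gpart add -s 64k -t freebsd-boot da1
# gpart add -s 2G -t freebsd-swap -l swap1 da1
# gpart add -t freebsd-zfs -l disk1 da1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
# zpool create -R /tmp/zroot zroot mirror /dev/gpt/disk0 /dev/gpt/disk1
```

The same pattern extends to raidz: replace the mirror keyword with raidz and list three or more partitions.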

Create the Top-Level ZFS and Set Default Options

This is where we introduce the special layout required to use boot environments. The idea is that we can clone the root filesystem, upgrade the clone off-line and then swap it for the previous one.

# zfs set checksum=fletcher4 zroot
# zfs set canmount=off zroot
# zfs create -o canmount=off zroot/ROOT
# zfs create -o mountpoint=/ zroot/ROOT/9.0-RELEASE
# zfs set freebsd:boot-environment=1 zroot/ROOT/9.0-RELEASE
# zpool set bootfs=zroot/ROOT/9.0-RELEASE zroot

Now create /boot/zfs and change the properties of the zpool to use that (i.e. the default location) for zpool.cache. This doesn't automatically create zpool.cache yet: we need to export and re-import the pool to make that happen, but that can wait for the time being.

# cd /tmp/zroot
# mkdir -p boot/zfs
# zpool set cachefile='' zroot

Create Standard Filesystem Layout

/tmp, /var and /home are not part of the boot environment, and will overlay the bootfs mounted at / while inheriting properties etc. from the top-level zroot. Create a dummy zroot/usr ZFS using the canmount=off property to be a parent of /usr/ports and /usr/local.

# cd /
# zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
# chmod 1777 /tmp/zroot/tmp
# zfs create zroot/var
# zfs create zroot/home
# zfs create -o canmount=off zroot/usr

Subdirectories of /usr

Much of what lies under /usr is part of the boot environment, but not all.

# zfs create -o compression=lzjb             -o setuid=off zroot/usr/ports
# zfs create -o compression=off  -o exec=off -o setuid=off zroot/usr/ports/distfiles
# zfs create -o compression=off  -o exec=off -o setuid=off zroot/usr/ports/packages
# zfs create                                               zroot/usr/local

For my purposes, I'm going to treat /usr/src and /usr/obj as belonging to the boot environment. Keep these as separate ZFSes so they can be independently cloned and snapshotted. Note the use of canmount=off to allow two different ZFSes to occupy conceptually the same place in the filesystem hierarchy.

# zfs create -o canmount=off zroot/ROOT/9.0-RELEASE/usr 
# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/ROOT/9.0-RELEASE/usr/src
# zfs create zroot/ROOT/9.0-RELEASE/usr/obj

Create Link to /home

The policy of creating home directories under /usr/home dates back to the days when disks were much smaller, and doesn't really make sense nowadays. I prefer to have an actual /home filesystem, and make a compatibility link from /usr, which is the reverse of the default layout produced by the installer.

# cd /tmp/zroot/usr
# ln -s ../home home
# cd /

Subdirectories of /var

# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
# zfs create                     -o exec=off -o setuid=off zroot/var/db
# zfs create -o compression=lzjb -o exec=on  -o setuid=off zroot/var/db/pkg
# zfs create                     -o exec=off -o setuid=off zroot/var/empty
# zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
# zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
# zfs create                     -o exec=off -o setuid=off zroot/var/run
# zfs create -o compression=lzjb -o exec=on  -o setuid=off zroot/var/tmp
# chmod 1777 /tmp/zroot/var/tmp

Unpack FreeBSD Distribution Sets into zroot

This is much simpler than with the old sysinstall:

# cd /tmp/zroot
# tar -xvpf /usr/freebsd-dist/base.txz
# tar -xvpf /usr/freebsd-dist/lib32.txz
# tar -xvpf /usr/freebsd-dist/kernel.txz

You can unpack any of the other install sets in the same way if you wish. The list as given is the minimum required for a usable 64-bit system with 32-bit compatibility. If you're installing a pure 32-bit system, omit lib32.txz. (It won't be available on the install media.)

Make /var/empty readonly

# zfs set readonly=on zroot/var/empty

Post-install Configuration

These are just the standard actions required so that the system will come up cleanly. You can do any of the other usual things needed on a brand-new installation at this point, but I'd recommend waiting until the system has been rebooted and is running from the new ZFS.

Edit vital configuration files

These edits are necessary so that the system will come up cleanly.

# cd /tmp/zroot/boot
# vi loader.conf
# cd ../etc
# vi rc.conf
# vi fstab

Make the contents of those files as follows:

/boot/loader.conf:

vfs.root.mountfrom="zfs:zroot/ROOT/9.0-RELEASE"
zfs_load="YES"
opensolaris_load="YES"

/etc/rc.conf:

hostname="beastie.example.com"
zfs_enable="YES"
ifconfig_em0="DHCP"

Substitute whatever hostname you want, and whatever the appropriate network interface configuration should be.

/etc/fstab:

# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/gpt/swap0  none            swap    sw      0       0

This fstab assumes you're installing onto a single disk, as I am in this virtual system. For a system with resilience to disk failure, swap areas should be mirrored. No matter what sort of vdev layout you're using for your zpool, I recommend simply taking pairs of swap partitions and mirroring them with gmirror. If you have more than one pair of disks, you can create several gmirrors and have multiple swap areas on-line at once, which achieves an effect equivalent to striping them together. If you have an odd number of drives, simply don't put swap on one of them.

To create a mirrored swap area:
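A plausible sketch, assuming a second disk carrying a swap partition labelled swap1 (that label is an assumption, not part of this single-disk install), would be:

```shell
# Load the GEOM mirror module, then mirror the two swap partitions.
# gmirror load
# gmirror label swap gpt/swap0 gpt/swap1
```

You would then add geom_mirror_load="YES" to /boot/loader.conf and point the fstab swap entry at /dev/mirror/swap instead of /dev/gpt/swap0.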

Final touches

Chroot into the new system for some final setup actions:

# chroot /tmp/zroot
# passwd root
# tzsetup
# cd /etc/mail
# make aliases
# exit

Create zpool.cache and Setup as a Boot Environment

This is where we do the export and import mentioned above, in order to initialise zpool.cache, then reboot, removing the installation media. Note that the standard altroot option (-R /tmp/zroot) implicitly turns off creating a cache file on import. So we explicitly override that, and create a zpool.cache in /tmp — it doesn't matter where the cache file is created, since we can copy it to the right location afterwards.

# cd /
# zpool export zroot
# zpool import -o altroot=/tmp/zroot -o cachefile=/tmp/zpool.cache zroot
# cd /tmp/zroot/boot/zfs
# cp /tmp/zpool.cache .
# cd /

Now we do something a little odd: we change the mountpoint for the root ZFS. The reason for doing this is to make switching between different boot environments easy. The kernel will mount the ZFS marked as the active root filesystem at / irrespective of what the ZFS mountpoint property says. All other boot environments are mounted according to their inherited mountpoint properties, meaning they are tidied away as separate subdirectories of /ROOT, rather than all attempting to overlay each other on the root. When it comes time to switch to a different boot environment, for instance during a system upgrade, it suffices simply to set the new boot environment as the active one, and the mount points for all the other boot environments will sort themselves out automatically when the system is rebooted.
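As a sketch of what a later switch might look like (the 9.1-RELEASE name here is hypothetical, and the upgrade of the clone itself is omitted):

```shell
# Snapshot the current boot environment and clone it as a new one.
# zfs snapshot zroot/ROOT/9.0-RELEASE@pre-upgrade
# zfs clone zroot/ROOT/9.0-RELEASE@pre-upgrade zroot/ROOT/9.1-RELEASE
# ...upgrade the clone off-line, e.g. in a chroot, then activate it:
# zpool set bootfs=zroot/ROOT/9.1-RELEASE zroot
# shutdown -r now
```

On reboot the kernel mounts the new bootfs at /, and the old environment is tidied away under its own mountpoint, ready to fall back to if needed.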

# zfs umount -a
# zfs inherit mountpoint zroot/ROOT/9.0-RELEASE
# zfs set mountpoint=/usr zroot/ROOT/9.0-RELEASE/usr

Finally, reboot into the newly installed system. Remember to remove the installation media.

# shutdown -r now

...and we're done

Et voilà: one freshly installed FreeBSD 9.0 on ZFS. Enjoy!