19 October 2010

ZFS Pool and FS Creation

Volume and file system management are traditionally separate affairs,
each created and maintained independently of the other.  With Sun's
ZFS, the line between the two becomes blurred.  As a brief overview,
a 'zpool' can generically be considered a volume, comprised of multiple
disks and configured in a variety of RAID layouts.  A zfs file system
is then created on top of the volume, in the form of a directory
structure.  Unlike traditional setups such as VxVM and VxFS, creating
a zfs zpool automatically creates a file system at the pool root,
which, unless otherwise specified, is mounted for you at /POOL_NAME,
where POOL_NAME is the name of the pool created.  Further file systems
created are, realistically, nothing more than directory structures
within the underlying zfs pool, and thus share the same storage volume.
Of note, quotas can be configured on the file systems, though no quotas
are set by default.  Also, while zfs file systems are simply directory
structures, they can be mounted and unmounted as necessary, as can the
zfs pool itself.

Due to a change of mindset in the development of zfs, new tools were
also created to manage it.  Without setting certain zfs configuration
options, traditional volume / FS utilities will not work against a zfs
volume / FS.  The following illustrates setting up a zfs pool and file
system while treating the zpool as a normal volume and the file systems
as normal FS.  This allows legacy handling of the FS: listing it in
/etc/vfstab, mounting and unmounting it with (u)mount, and prohibiting
the mounting of the zpool itself.  The reasons for doing this include
backwards compatibility for human beings with an expectation of
traditional volume / FS management.
The notes below use the following for point of example:

        disk file:      /tmp/disk.lst   # file containing 'volume disks'
        volume disks:   c4t3d0
                        c4t4d0
                        c4t5d0
                        c4t6d0
        RAID:           simple (concat)
        zpool:          ourpool01
        file system:    opt_local
        mount point:    /opt/local
        shell prompt:   adler [0]
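
For reference, the disk file is nothing special: one 'volume disk'
name per line.  One way to build it, using the example disks above:

```shell
# Build /tmp/disk.lst with one disk name per line; the names here are
# the example ones from these notes, not real devices on your system.
printf '%s\n' c4t3d0 c4t4d0 c4t5d0 c4t6d0 > /tmp/disk.lst
cat /tmp/disk.lst
```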

To create the zpool:

        adler [0] /usr/sbin/zpool create ourpool01 `/usr/bin/cat /tmp/disk.lst`
        adler [0] /usr/sbin/zpool list
        NAME              SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
        ourpool01        2.04T     91K   2.04T     0%  ONLINE     -

* Alternately, each volume disk could be listed out in the 'zpool create':

        adler [0] /usr/sbin/zpool create ourpool01 c4t3d0 c4t4d0 c4t5d0 c4t6d0

** the disks used will be automatically labeled and formatted by zfs
        - using individual disk slices, as opposed to whole disks, is
          not advisable with zfs; given whole disks, zfs manages the
          label itself and can safely enable the disk write cache
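
As an aside, the simple concat used in these notes provides no
redundancy; losing any one disk loses the pool.  A sketch of the same
create using single-parity raidz instead (not what these notes build;
requires root and real devices, shown for illustration only):

```shell
# Hypothetical alternative: single-parity raidz across the same four
# disks, trading one disk's worth of capacity for parity.
/usr/sbin/zpool create ourpool01 raidz c4t3d0 c4t4d0 c4t5d0 c4t6d0
```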

Create the file system, make the zpool unmountable, and set up legacy tool use:

        adler [0] /usr/sbin/zfs create -o mountpoint=/opt/local ourpool01/opt_local
        adler [0] /usr/sbin/zfs set mountpoint=none ourpool01
        adler [0] /usr/sbin/zfs set mountpoint=legacy ourpool01/opt_local
        adler [0] /usr/sbin/zfs set canmount=off ourpool01

The above removes the pool and file system from zfs mount control,
allowing the traditional commands and files to be used instead.  It
also automatically creates the mount point /opt/local and unmounts
ourpool01, which had previously been mounted at /ourpool01 during
pool creation.
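
The property changes can be confirmed directly; given the settings
above, ourpool01 should report canmount off and mountpoint none, and
ourpool01/opt_local a mountpoint of legacy:

```shell
# Show the mount-related properties for the pool root and the new FS
/usr/sbin/zfs get -o name,property,value canmount,mountpoint \
    ourpool01 ourpool01/opt_local
```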
Now, add the following to /etc/vfstab to handle the new FS:

        ourpool01/opt_local    -   /opt/local    zfs    -    yes     -

Remember to add the above line after any logically preceding mounts
(for instance, /opt must be mounted before /opt/local).
Subsequently, mount and verify the new FS:

        adler [0] /usr/sbin/mount /opt/local
        adler [0] /usr/sbin/df -h /opt/local
        Filesystem             size   used  avail capacity  Mounted on
        ourpool01/opt_local    2.0T    24K   2.0T     1%    /opt/local

Should you ever want to see what actions were taken in creating a zfs
zpool and its FS (for recreating it elsewhere, etc.), use the history
subcommand:

        adler [0] /usr/sbin/zpool history ourpool01
        History for 'ourpool01':
        2010-07-19.15:59:20 zpool create ourpool01 c4t3d0 c4t4d0 c4t5d0 c4t6d0
        2010-07-19.16:03:26 zfs create -o mountpoint=/opt/local ourpool01/opt_local
        2010-07-19.16:04:03 zfs set mountpoint=none ourpool01
        2010-07-19.16:05:36 zfs set mountpoint=legacy ourpool01/opt_local
        2010-07-19.16:06:56 zfs set canmount=off ourpool01
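
Depending on the Solaris release, the history log can carry more
detail; -l adds the invoking user and host to each record, and -i
includes internally logged events (both flags appeared in later
Solaris 10 updates, so availability on your build is an assumption):

```shell
# Long-format history, with user/host per record
/usr/sbin/zpool history -l ourpool01
# Include zfs-internal events as well
/usr/sbin/zpool history -il ourpool01
```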

If for some reason you need to get rid of your zpool and the FS within it:

        adler [0] /usr/sbin/umount /opt/local
        adler [0] /usr/sbin/zpool destroy ourpool01

To destroy only the FS, after umount (note this is 'zfs destroy', not
'zpool destroy'):

        adler [0] /usr/sbin/zfs destroy ourpool01/opt_local
