Applies to:
Solaris SPARC Operating System - Version 10 3/05 and later
Solaris x64/x86 Operating System - Version 10 3/05 and later
All Platforms
Goal
How to back up and restore a Solaris ZFS root pool
The following procedure can be used to back up and restore a ZFS root pool (rpool) using the tools provided in Solaris 10 and later. Readers are advised to become comfortable with this procedure and to attempt a restore before deploying it in a production environment. The official procedures can be found in the ZFS Administration Guide for Solaris 10, Solaris 11 and Solaris 11.1 (see the note below for the correct links).
Solution
Pre-Requisites
- Advisory patches
At the time of writing, the reader is advised to install the Kernel Feature Patch (SPARC) or (x86) to address the following issue:
CR #6794452
Synopsis: zfs receive cannot restore rpool
Without this patch it may not be possible to restore a recursively sent root pool, i.e. streams created with the -R switch to zfs send.
- Recovery Media
You should use at least the same release of Solaris 10 as the one you are trying to restore, i.e. if the root pool backup streams were taken from Solaris 10 10/08, then you must boot from Solaris 10 10/08 media or later in order to perform the restore. This is because ZFS pools and filesystems have version numbers that can be upgraded by updating to a more recent release of Solaris 10 or by installing the kernel feature patch associated with that release, e.g.:
Solaris 10 10/08 (SPARC) ships with a kernel feature patch which enables zpool version 10 functionality.
In order to understand a ZFS pool at version 10, the system must boot from a kernel at 137137-09 or later.
As such, booting from a Solaris 10 5/08 DVD would not provide sufficient knowledge to restore a version 10 pool.
Definitions
In this procedure, it is assumed the root pool will be called 'rpool' as is the given standard during installation.
It will also refer to a simple number of filesystems:
rpool
rpool/ROOT
rpool/ROOT/s10u7
rpool/export
rpool/export/home
This may need to be adjusted depending upon the filesystems that were created as part of the Solaris installation. Furthermore, it does not take into account any Live Upgrade boot environments or cloned filesystems. The decision has been made to send each filesystem individually, where possible, in case just one filesystem needs to be restored from a stream file.
Backing up a Solaris ZFS root pool
Take a copy of the properties that are set in the rpool and all filesystems, plus volumes that are associated with it:
# zfs get all rpool
# zfs get all rpool/ROOT
# zfs get all rpool/ROOT/s10u7
# zfs get all rpool/export
# zfs get all rpool/export/home
# zfs get all rpool/dump
# zfs get all rpool/swap
(repeat for all ZFS filesystems)
Save this data for reference in case it is required later on.
Now snapshot the rpool with a suitable snapshot name:
# zfs snapshot -r rpool@backup
This will create a recursive snapshot of all descendants, including rpool/export and rpool/export/home, as well as the rpool/dump and rpool/swap volumes.
Note: if the system is actively using swap when the snapshot is taken, it is possible that rpool might run out of space before the user is able to delete rpool/swap@backup.
In some situations, one may see:
# zfs snapshot -r rpool@today
cannot create snapshot 'rpool/swap@today': out of space
no snapshots were created
The swap and dump snapshots are not required in the backup, so they should be destroyed thus:
# zfs destroy rpool/dump@backup
# zfs destroy rpool/swap@backup
Then for each filesystem, send the data to a backup file/location. Make sure that there is sufficient capacity in your backup location as "zfs send" does not understand when the destination becomes full (eg: a multi-volume tape). In this example /backup is an NFS mounted filesystem from a suitably capacious server:
# zfs send -v rpool/ROOT@backup > /backup/rpool.ROOT.dump
# zfs send -vR rpool/ROOT/s10u7@backup > /backup/rpool.ROOT.s10u7.dump
# zfs send -v rpool/export@backup > /backup/rpool.export.dump
# zfs send -v rpool/export/home@backup > /backup/rpool.export.home.dump
These dump files can then be archived onto non-volatile storage for safe keeping, eg: magnetic tape.
Restoring a Solaris ZFS root pool
If it is necessary to rebuild/restore a root pool, locate the known good copies of the zfs streams that were created in the Backing Up section of this document and make sure these are readily available. In order to restore the root pool, first boot from a Solaris 10 DVD or network (jumpstart) into single user mode. Depending upon whether the booted root filesystem is writable, it may be necessary to tell ZFS to use a temporary location for the mountpoint.
Pre-Requisites
A root pool, at the time of writing, must:
- Live on a disk with an SMI disk label
- Be composed of a single slice, not an entire disk (use the cXtYdZs0 syntax and NOT cXtYdZ, which would use the entire disk and an EFI label)
- Be created with the same version as the original rpool
You can find the pool version from the output of the zpool upgrade command.
There is also a matrix for ZFS pool and filesystem versions and the Oracle Solaris releases that support them:
ZFS Filesystem and Zpool Version Matrix [ID 1359758.1].
Use '-o version=<version>' in the zpool create command.
In this example, disk c3t1d0s0 contains an SMI label, where slice 0 is using the entire capacity of the disk. Change the controller and target numbers accordingly.
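As an illustrative sketch only (the version number 10 is an example, and the -R /a altroot and legacy mountpoint assume the system was booted from installation media; substitute the version of the original pool and adjust for your environment), the pool might be recreated as:
# zpool create -f -o version=10 -R /a -m legacy rpool c3t1d0s0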
Ensure the dump files are available for reading. If these exist on tape, then a possible location would be /dev/rmt/0n, however in this example the dump files are made available by mounting up the backup filesystem from an NFS server.
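For example, assuming a hypothetical NFS server named backupserver exporting /backup, the dump files could be made available as:
# mount -F nfs backupserver:/backup /backup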
Once the dump files are available, restore the filesystems that make up the root pool. If Kernel Feature Patch 139555/139556-08 is installed you can use the flags '-Fdu' to ensure that the filesystems are not mounted. It is important to restore these in the correct hierarchical order:
# zfs receive -Fd rpool < /backup/rpool.ROOT.dump
# zfs receive -Fd rpool < /backup/rpool.ROOT.s10u7.dump
# zfs receive -Fd rpool < /backup/rpool.export.dump
# zfs receive -Fd rpool < /backup/rpool.export.home.dump
This will restore the filesystems, but remember that an rpool will also have dump and swap devices, made up from zvols. See the manual for how to create the swap and dump volumes:
Solaris 10:
Solaris 11:
Solaris 11.1:
(adjust the size of the dump and swap volumes according to the configuration of the system being restored).
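As a sketch (the 2 GB swap, 1 GB dump and 8k block size here are assumptions; adjust the sizes to match the system being restored, and use a block size matching the system's page size):
# zfs create -V 2G -b 8k rpool/swap
# zfs create -V 1G rpool/dump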
Now, to make the disk ZFS-bootable, a boot block must be installed on SPARC, and the correct GRUB on x86, so pick one of the following according to the platform being restored:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0 (x86)
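For SPARC, the equivalent boot block installation (using the same example disk) would be:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t1d0s0 (SPARC)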
Once the boot block has been installed, it is then necessary to set the bootable dataset/filesystem within this rpool.
To do this run the following zpool commands (Makes the rpool/ROOT/s10u7 the bootable dataset):
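The bootfs pool property selects the bootable dataset, for example:
# zpool set bootfs=rpool/ROOT/s10u7 rpool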
Set the failmode property of the rpool to continue. This differs from "data" zfs pools, which use wait by default, so it is important to set this correctly:
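For example:
# zpool set failmode=continue rpool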
Check whether canmount is set to noauto on the boot environment and, if not, set it:
# zfs get canmount rpool/ROOT/s10u7
# zfs set canmount=noauto rpool/ROOT/s10u7
Temporarily disable the canmount property for the following filesystems, to prevent these from mounting when it comes to setting the mountpoint property:
# zfs set canmount=noauto rpool/export
Set the mountpoint properties for the various filesystems:
# zfs set mountpoint=/rpool rpool
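The remaining mountpoints should match the original layout; the following is a sketch assuming the default installation layout (consult the 'zfs get all' output saved during the backup and adjust accordingly):
# zfs set mountpoint=legacy rpool/ROOT
# zfs set mountpoint=/ rpool/ROOT/s10u7
# zfs set mountpoint=/export rpool/export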
Set the dump device correctly
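For example:
# dumpadm -d /dev/zvol/dsk/rpool/dump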
Please see the official documentation on how to back up or restore a ZFS rpool:
Solaris 10:
Solaris 11:
Solaris 11.1:
Note: Solaris 11.1 introduces GRUB 2, so the boot loader installation procedure differs; the bootadm command must be used to install GRUB 2. See the Solaris 11.1 manual for details.