Since the storage disks are separate from the FreeNAS® operating system, you do not actually have a NAS (network-attached storage) system until you configure your disks into at least one volume. The FreeNAS® graphical interface supports the creation of both UFS and ZFS volumes. ZFS volumes are recommended to get the most out of your FreeNAS® system.
NOTE: in ZFS terminology, the storage that is managed by ZFS is referred to as a pool. When configuring the ZFS pool using the FreeNAS® graphical interface, the term volume is used to refer to either a UFS volume or a ZFS pool.
Proper storage design is important for any NAS. It is recommended that you read through this entire chapter first, before configuring your storage disks, so that you are aware of all of the possible features, know which ones will benefit your setup most, and are aware of any caveats or hardware restrictions. If you are new to RAID concepts or would like an overview of the differences between hardware RAID and ZFS RAIDZ*, skim through the section on Hardware Recommendations as well.
Auto Importing Volumes
If you click Storage → Volumes → Auto Import Volume, you can configure FreeNAS® to use an existing software UFS or ZFS RAID volume. This action is typically performed when an existing FreeNAS® system is re-installed (rather than upgraded). Since the operating system is separate from the disks, a new installation does not affect the data on the disks; however, the new operating system needs to be configured to use the existing volume.
Supported volumes are UFS stripes (RAID0), UFS mirrors (RAID1), UFS RAID3, and existing ZFS pools. UFS RAID5 is not supported as it is an unmaintained Summer of Code project that was never integrated into FreeBSD.
Beginning with version 8.3.1, the import of existing GELI-encrypted ZFS pools is also supported. However, the pool must be decrypted before it can be imported.
Figure 6.3a shows the initial pop-up window that appears when you select to auto import a volume.
Figure 6.3a: Initial Auto Import Volume Screen
If you are importing a UFS RAID or an existing, unencrypted ZFS pool, select "No: Skip to import" to access the screen shown in Figure 6.3b.
Figure 6.3b: Auto Importing a Non-Encrypted Volume
Existing software RAID volumes should be available for selection from the drop-down menu. In the example shown in Figure 6.3b, the FreeNAS® system has detected an existing, unencrypted ZFS pool. Once the volume is selected, click the OK button to import the volume.
NOTE: FreeNAS® will not import a dirty volume. If an existing UFS RAID does not show in the drop-down menu, you will need to fsck the volume. If an existing ZFS pool does not show in the drop-down menu, run zpool import from Shell to import the pool. If you suspect that your hardware is not being detected, run camcontrol devlist from Shell. If the disk does not appear in the output, check to see if the controller driver is supported or if it needs to be loaded by creating a tunable.
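As an example, a hedged troubleshooting sequence from Shell might look like the following; the pool name tank is an assumption, and running zpool import with no arguments simply lists the pools that are available for import:

zpool import
zpool import tank
camcontrol devlist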
Auto Importing a GELI-Encrypted ZFS Pool
If you are importing an existing GELI-encrypted ZFS pool, you must decrypt the disks before importing the pool. In Figure 6.3a, select “Yes: Decrypt disks” to access the screen shown in Figure 6.3c.
Figure 6.3c: Decrypting the Disks Before Importing the ZFS Pool
Select the disks in the pool, browse to the location of the saved encryption key, input the passphrase associated with the key, then click OK to decrypt the disks.
NOTE: the encryption key is required to decrypt the pool. If the pool can not be decrypted, it can not be re-imported after a failed upgrade or lost configuration. This means that it is very important to save a copy of the key and to remember the passphrase that was configured for the key. The View Volumes screen is used to manage the keys for encrypted volumes.
Once the pool is decrypted, it should appear in the drop-down menu of Figure 6.3b. Click the OK button to finish the volume import.
Importing Volumes

The Volume → Import Volume screen, shown in Figure 6.3d, is used to import a single disk or partition that has been formatted with a supported filesystem. FreeNAS® supports the import of disks that have been formatted with UFS, NTFS, MSDOS, or EXT2.
Figure 6.3d: Importing a Volume
Input a name for the volume, use the drop-down menu to select the disk or partition that you wish to import, and select the type of filesystem on the disk.
Before importing a disk, be aware of the following caveats:
- FreeNAS® will not import a dirty filesystem. If a supported filesystem does not show in the drop-down menu, you will need to fsck or run a disk check on the filesystem.
- earlier versions of FreeNAS® 8 had a bug that prevented the successful import of NTFS drives. Don't try to import NTFS if you are running a version earlier than FreeNAS® 8.0.1-RC1.
- FreeNAS® can not import dynamic NTFS volumes at this time. A future version of FreeBSD may address this issue.
- if an NTFS volume will not import, try ejecting the volume safely from a Windows system first. This cleanly closes the journal files that are required to mount the drive.
Volume Manager

If you have unformatted disks or wish to overwrite the filesystem (and data) on your disks, use the Volume Manager to format the desired disks as a UFS volume or a ZFS pool.
If you click on Storage → Volumes → Volume Manager, you will see a screen similar to the example shown in Figure 6.3e. The options which are displayed will vary depending upon the amount of available disks and which filesystem is selected.
Figure 6.3e: Creating a ZFS Pool Using Volume Manager
Table 6.3a summarizes the configuration options of this screen. The rest of this section describes these features in more detail. It is recommended that you read this entire section first to understand the options which are available before configuring the disks that will be made available to FreeNAS®.
Table 6.3a: Options When Creating a Volume

|Setting||Value||Description|
|Volume name||string||ZFS volumes must conform to the ZFS naming conventions; it is recommended to choose a name that will stick out in the logs (e.g. not data or freenas)|
|Member disks||list||highlight desired number of disks from list of available disks|
|Filesystem type||bullet||select either UFS or ZFS|
|Specify custom path||checkbox||only available when UFS is selected; useful for creating a /var for persistent log storage|
|Path||string||only available when “Specify custom path” is checked; must be the full path of the volume (e.g. /mnt/var); if no path is provided, the Volume name is appended to /mnt|
|Force 4096 bytes sector size||checkbox||FreeNAS® always uses 4K sectors for UFS and uses 4K sectors for ZFS if the underlying hard drive is detected as advanced format; since the auto-detection is not bulletproof, refer to the disk manual to determine if the disk uses 4K sectors; checking this option forces 4K sectors, which is useful in a RAIDZ that contains a mix of older and advanced format drives; this setting can not be changed once the volume/pool is created without destroying and recreating the volume/pool (which deletes the data on the volume/pool)|
|Enable full disk encryption||checkbox||requires Force 4096 bytes sector size which will be forcibly auto-selected if this box is checked; read the section on Encryption before choosing to use encryption|
|Initialize with random data||checkbox||only appears if Enable full disk encryption is checked; recommended as it writes the disks with random data before enabling encryption, however it will take a longer time to create the volume|
|Deduplication||drop-down menu||choices are Off, Verify, and On; carefully consider the section on Deduplication before changing this setting|
|Group type||bullet||options vary by filesystem type and number of selected disks; may include mirror, stripe, RAID3, RAIDZ1, RAIDZ2, RAIDZ3|
|ZFS extra||bullet||only available when ZFS is selected; choices are None, Log, Cache, and Spare; see ZFS Extra for descriptions of each option|
To configure which disks will be available as storage, use the mouse to select the disk(s) to be used. To select multiple disks, highlight the first disk, then hold the shift key as you highlight the last disk.
NOTE: it is not recommended to create a UFS volume larger than 5TB as it will be inefficient to fsck.
The Add Volume button warns that creating a volume destroys all existing data on the selected disk(s). In other words, creating storage using Volume Manager is a destructive action that reformats the selected disks. If you do not intend to overwrite the data on an existing volume, check whether the volume format is supported by the auto-import or import actions. If so, perform the supported action instead. If the current storage format is not supported, you will need to back up the data to external media, format the disks, then restore the data to the new volume.
How the volume is formatted is determined by your selection in the "Group type" section. The available options differ depending upon the selected "Filesystem type" and the number of highlighted disks:
- if you select one disk, you can choose to format with UFS or ZFS
- if you select two disks, you can create a UFS or ZFS mirror or stripe
- if you select three disks, you can create a UFS or ZFS stripe, a UFS RAID3, or a ZFS mirror or RAIDZ1
- if you select four disks, you can create a UFS or ZFS mirror or stripe, or a ZFS RAIDZ1 or RAIDZ2
- if you select five disks, you can create a UFS or ZFS stripe, a UFS RAID3, or a ZFS mirror, RAIDZ1, RAIDZ2, or RAIDZ3
If you have more than five disks and are using ZFS, consider the size of your disk groups for best performance and scalability. An overview of the various RAID levels and recommended disk group sizes can be found in the RAID Overview section.
Depending upon the size and number of disks, the type of controller, the group type, and the type of filesystem, creating the volume may take a few minutes. Since UFS volume creation will format the disks, it may take as long as 10 or 15 minutes for a number of large disks. Once the volume is created, the screen will refresh and the new volume will be listed under Storage → Volumes.
ZFS Extra

The ZFS extra options can be used to create a log, cache, or spare device. When creating a ZFS volume, any disks or SSDs that are left unchecked will appear as available within the ZFS extra section. The following options are available:
None: disk(s) are still available to be selected for formatting.
Log: the selected disk will be dedicated to storing the ZIL (ZFS Intent Log). See the Separate Log Devices section of the ZFS Best Practices Guide for size recommendations. When two or more log devices are specified, FreeNAS® will mirror them. Mirroring is a precaution because losing the ZIL device on a ZFSv15 pool could have disastrous results, such as making the entire pool inaccessible; on a ZFSv28 pool, losing the ZIL device can still cause the loss of in-flight writes.
Putting the ZIL on high speed devices can improve performance for certain workloads, especially those requiring synchronous writes, such as NFS clients connecting to FreeNAS® running on VMware ESXi. In such cases, a dedicated ZIL device will make a big difference in performance. Applications that do not perform many synchronous writes are less likely to benefit from dedicated ZIL devices. For VMware, if a high speed ZIL device is not an option, using iSCSI instead of NFS is a workaround to achieve better performance.
Cache: selected device, typically an SSD, will be dedicated to L2ARC on-disk cache. See the Separate Cache Devices section of the ZFS Best Practices Guide for size recommendations. Losing an L2ARC device will not affect the integrity of the storage pool, but may have an impact on read performance, depending upon the workload and the ratio of dataset size to cache size.
Spare: will create a hot spare that is only used when another disk fails. Hot spares speed up healing in the face of hardware failures and are critical in environments that require a high mean time to data loss (MTTDL). One or two spares for a 40-disk pool is a commonly used configuration. Use this option with caution as there is a known bug in the current FreeBSD implementation; this will be addressed by zfsd once it is committed to FreeBSD.
Deduplication

The deduplication option warns that enabling dedup may have drastic performance implications and that compression should be used instead. Before checking the deduplication box, read the section on deduplication in the ZFS Overview first, as it describes the value versus cost considerations for deduplication.
Unless you have a lot of RAM and a lot of duplicate data, do not change the default deduplication setting of "Off". The dedup tables used during deduplication need about 8 GB of RAM per 1 TB of data to be deduplicated; for example, deduplicating 4 TB of data would consume roughly 32 GB of RAM for the dedup tables alone. For performance reasons, consider using dataset compression rather than turning this option on. If you really do have a lot of RAM and a lot of duplicate data, consider creating a dataset for the duplicate data and enabling deduplication only on that dataset.
If deduplication is changed to On, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify, ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical. Since hash collisions are extremely rare, verify is usually not worth the performance hit.
Encryption

Beginning with 8.3.1, FreeNAS® supports GELI full disk encryption when creating ZFS volumes. It is important to understand the following when considering whether or not encryption is right for your FreeNAS® system:
- This is not the encryption method used by Oracle ZFSv30. That version of ZFS has not been open sourced and is the property of Oracle.
- This is full disk encryption and not per-filesystem encryption. The underlying drives are first encrypted, then the pool is created on top of the encrypted devices.
- This type of encryption is primarily targeted at users who store sensitive data and want to retain the ability to remove disks from the pool without having to first wipe the disk's contents.
- This design is only suitable for safe disposal of disks independent of the encryption key. As long as the key and the disks are intact, the system is vulnerable to being decrypted. The key should be protected by a strong passphrase and any backups of the key should be securely stored.
- On the other hand, if the key is lost, the data on the disks is inaccessible. Always backup the key!
- The encryption key is per ZFS volume (pool). If you create multiple pools, each pool has its own encryption key.
- If the system has a lot of disks, there will be a performance hit if the CPU does not support AES-NI. Without hardware acceleration, there will be about a 20% performance hit for a single disk, and performance degradation will continue to increase with more disks. As data is written, it is automatically encrypted; as data is read, it is decrypted on the fly. If the processor does support the AES-NI instruction set, there should be very little, if any, degradation in performance when using encryption. A quick way to check for AES-NI support from Shell is shown after this list.
- Data in the ARC cache and the contents of RAM are unencrypted.
- Swap is always encrypted, even on unencrypted volumes.
- There is no way to convert an existing, unencrypted volume. Instead, the data must be backed up, the existing pool must be destroyed, a new encrypted volume must be created, and the backup restored to the new volume.
- Hybrid pools are not supported. In other words, newly created vdevs must match the existing encryption scheme. When extending a volume, Volume Manager will automatically encrypt the new vdev being added to the existing encrypted pool.
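One hedged way to check whether the CPU supports AES-NI is to search the boot messages from Shell; AES-NI capable processors normally list AESNI among their CPU feature flags, though the exact output varies by CPU and FreeBSD release:

grep AESNI /var/run/dmesg.boot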
Creating an Encrypted Volume
To create an encrypted volume, check the "Enable full disk encryption" box shown in Figure 6.3e. This will automatically check and grey out the "Force 4096 bytes sector size" box, as this is needed for encryption to work. It will also display an "Initialize with random data" checkbox. Checking this box will write random data to the disks before encrypting them, which can increase the cryptographic strength of the volume. However, doing so significantly adds to the time it takes to create the volume, especially if it contains several disks. After making your encryption selections, input the volume name, select the disks to add to the volume, select the Group type, and click the Add Volume button to create the encrypted volume.
Once the volume is created, it is extremely important to set a passphrase on the key, make a backup of the key, and create a recovery key. Without these, it is impossible to re-import the disks at a later time.
To perform these tasks, go to Storage → Volumes → View Volumes. This screen is shown in Figure 6.3m.
To set a passphrase on the key, click the "Create Passphrase" button (the key shaped icon in the far right of Figure 6.3m) which will prompt to input and repeat the passphrase. Unlike a password, a passphrase can contain spaces and is typically a series of words. A good passphrase is easy to remember (like the line to a song or piece of literature) but hard to guess (people who know you should not be able to guess the passphrase).
When you set the passphrase, a warning message will remind you to create a new recovery key, as a new passphrase requires a new recovery key. This way, if the passphrase is forgotten, the associated recovery key can be used instead. To create the recovery key, click the "Add recovery key" button (the second last key icon in Figure 6.3m). This screen will prompt you for the location to save the key.
Finally, download a copy of the encryption key, using the "Download key" button (the first key icon in the second row in Figure 6.3m).
The passphrase, recovery key, and encryption key need to be protected. Do not reveal the passphrase to others. Ensure that the system containing the downloaded keys, as well as its backups, is protected. Anyone who has the keys can re-import the disks should they be discarded or stolen.
Using Volume Manager After a Volume Has Been Created
Once a volume exists, an extra “Volume to extend” field will be added to Storage → Volumes → Volume Manager, as seen in Figure 6.3f. This field is mutually exclusive with the "Volume name" field in that you can only use one or the other.
Figure 6.3f: Volume to Extend Field
This screen can be used to perform the following tasks:
1. Create another UFS or ZFS volume. Input a "Volume name" and create a volume as usual.
2. Add a ZFS log or cache device to an existing ZFS volume. Select the volume name using the drop-down menu, select the device to add, then select Log or Cache from the ZFS Extra field.
3. Extend an existing ZFS volume as described below.
NOTE: you can not extend an existing UFS volume.
When extending a volume, ZFS supports the addition of virtual devices (vdevs) to an existing volume (ZFS pool). A vdev can be a single disk, a stripe, a mirror, a RAIDZ1, a RAIDZ2, or a RAIDZ3. Once a vdev is created, you can not add more drives to that vdev; however, you can stripe a new vdev (and its disks) with the same type of existing vdev in order to increase the overall size of the ZFS pool. In other words, when you extend a ZFS volume, you are really striping similar vdevs. Here are some examples (a zpool status sketch of an extended pool follows this list):
- to extend a ZFS stripe, add one or more disks. Since there is no redundancy, you do not have to add the same amount of disks as the existing stripe.
- to extend a ZFS mirror, add the same number of drives. The resulting striped mirror is a RAID 10.
- to extend a three drive RAIDZ1, add three additional drives. The result is a RAIDZ+0, similar to RAID 50 on a hardware controller.
- to extend a RAIDZ2 requires a minimum of four additional drives. The result is a RAIDZ2+0, similar to RAID 60 on a hardware controller.
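As a hypothetical sketch of the result, a three drive RAIDZ1 that has been extended with a second three drive RAIDZ1 vdev would show two striped raidz1 vdevs in zpool status from Shell (the pool name tank and the device names are assumptions):

zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME        STATE   READ WRITE CKSUM
        tank        ONLINE     0     0     0
          raidz1-0  ONLINE     0     0     0
            ada0    ONLINE     0     0     0
            ada1    ONLINE     0     0     0
            ada2    ONLINE     0     0     0
          raidz1-1  ONLINE     0     0     0
            ada3    ONLINE     0     0     0
            ada4    ONLINE     0     0     0
            ada5    ONLINE     0     0     0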
In the “Volume to extend” section, select the existing volume that you wish to stripe with. This will grey out the Volume name field and select ZFS as the filesystem type. Highlight the required number of additional disk(s), select the same type of RAID used on the existing volume, and click the Extend Volume button.
If you try to add a single disk to a vdev or a different amount of disks, a red warning message will be displayed to alert you that this is not recommended. If you are willing to create a non-optimized storage pool (not recommended!), you can override this warning by checking the "Force Volume Add" box that appears below the warning.
Creating ZFS Datasets
An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per dataset basis, allowing more granular control over access to storage data. A dataset is similar to a folder in that you can set permissions; it is also similar to a filesystem in that you can set properties such as quotas and compression as well as create snapshots.
NOTE: ZFS provides thick provisioning using reserved space and thin provisioning using quotas.
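For reference, these properties can also be viewed or set from Shell. This is a minimal sketch, assuming a pool named tank with a dataset named data; the GUI described below remains the recommended way to manage datasets:

zfs set quota=20G tank/data
zfs set reservation=10G tank/data
zfs get quota,reservation tank/data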
If you select an existing ZFS volume → Create ZFS Dataset, you will see the screen shown in Figure 6.3g. Table 6.3b summarizes the options available when creating a ZFS dataset.
Once a dataset is created, you can click on that dataset and select Create ZFS Dataset, thus creating a nested dataset, or a dataset within a dataset. When creating datasets, double-check that you are using the Create ZFS Dataset option for the intended volume or dataset. If you get confused when creating a dataset on a volume, click all existing datasets to close them; the remaining Create ZFS Dataset option applies to the volume itself.
Figure 6.3g: Creating a ZFS Dataset
Table 6.3b: ZFS Dataset Options

|Setting||Value||Description|
|Compression Level||drop-down menu||choose from: Inherit, Off, lzjb, gzip level 6, gzip fastest, gzip maximum, and zle; see NOTE below|
|Enable atime||inherit, on, or off||controls whether the access time for files is updated when they are read; setting this property Off avoids producing log traffic when reading files and can result in significant performance gains|
|Quota for this dataset||integer||default of 0 is off; can specify M (megabyte), G (gigabyte), or T (terabyte) as in 20G for 20 GB, can also include a decimal point (e.g. 2.8G)|
|Quota for this dataset and children||integer||default of 0 is off; can specify M (megabyte), G (gigabyte), or T (terabyte) as in 20G for 20 GB|
|Reserved space for this dataset||integer||default of 0 is unlimited (besides hardware); can specify M (megabyte), G (gigabyte), or T (terabyte) as in 20G for 20 GB|
|Reserved space for this dataset and children||integer||default of 0 is unlimited (besides hardware); can specify M (megabyte), G (gigabyte), or T (terabyte) as in 20G for 20 GB|
|ZFS Deduplication||drop-down menu||read the section on Deduplication before making a change to this setting|
NOTE on compression: most media (e.g. .mp3, .mp4, .avi) is already compressed, meaning that you'll increase CPU utilization for no gain if you store these files on a compressed dataset. However, if you have raw .wav rips of CDs or .vob rips of DVDs, you will see a performance gain using a compressed dataset. When selecting a compression type, you need to balance performance with the amount of compression. For example, lzjb is optimized for performance while providing decent data compression. gzip varies from levels 1 to 9 where gzip fastest (level 1) gives the least compression and gzip maximum (level 9) provides the best compression but is discouraged due to its performance impact. zle is a fast and simple algorithm to eliminate runs of zeroes.
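To gauge how much space compression is actually saving on an existing dataset, query its compression ratio from Shell. A minimal sketch, assuming a pool named tank with a dataset named media:

zfs get compression,compressratio tank/media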
Creating a zvol
A zvol is a feature of ZFS that creates a block device over ZFS. This allows you to use a zvol as an iSCSI device extent.
To create a zvol, select an existing ZFS volume → Create ZFS Volume which will open the screen shown in Figure 6.3h.
Figure 6.3h: Creating a zvol
The configuration options are described in Table 6.3c:
Table 6.3c: zvol Configuration Options

|Setting||Value||Description|
|ZFS Volume Name||string||input a name for the zvol|
|Size||integer||specify size and value such as 10G|
|Compression Level||drop-down menu||default of Inherit means it will use the same compression level as the existing zpool used to create the zvol|
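For reference, this GUI action corresponds conceptually to creating a fixed-size ZFS volume from the command line. A hedged sketch, assuming a pool named tank and a zvol named vol1 (the GUI remains the supported method in FreeNAS®):

zfs create -V 10G tank/vol1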
Viewing Volumes

If you click Storage → Volumes → View Volumes, you can view and further configure existing volumes and datasets, as seen in the example shown in Figure 6.3i:
Figure 6.3i: Viewing Volumes
The icons towards the top of the right frame allow you to: access the volume manager, import a volume, auto import a volume, and view disks. If the system has multipath-capable hardware, an extra button will be added to view multipaths.
The eight icons associated with a ZFS volume are used to:
- Detach Volume: allows you to either detach a disk before removing it from the system (also known as a ZFS export) or to delete the contents of the volume, depending upon the choice you make in the screen that pops up when you click this button. The pop-up message, seen in Figure 6.3j, shows the current used space, provides the "Mark the disks as new (destroy data)" checkbox, prompts you to make sure that you really want to do this, and warns if the volume has any associated shares, asking if you wish to delete them. The browser will turn red to alert you that continuing with this action will make data inaccessible. If you do not check the box to mark the disks as new, the volume will be exported (ZFS volumes only). This means that the data is not destroyed and the volume can be re-imported at a later time. If you will be moving a ZFS formatted drive from one system to another, perform this export action first. It flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all knowledge of the pool from the system. If you do check the box to mark the disks as new, the volume and all of its data, datasets, and zvols will be destroyed and the underlying disks will be returned to their raw state.
Figure 6.3j: Detaching or Deleting a Volume
- Scrub Volume: ZFS scrubs and how to schedule them are described in more detail in ZFS Scrubs. This button allows you to manually initiate a scrub. A scrub is I/O intensive and can negatively impact performance, meaning that you should not initiate one while the system is busy. A cancel button is provided should you need to cancel a scrub.
NOTE: if you do cancel a scrub, the next scrub will start over from the beginning, not where the cancelled scrub left off.
- Edit ZFS Options: allows you to edit the volume's compression level, atime setting, dataset quota, and reserved space for quota. If compression is newly enabled on a volume or dataset that already contains data, existing files will not be compressed until they are modified as compression is only applied when a file is written.
- Create ZFS Dataset: allows you to create a dataset.
- Create ZFS Volume: allows you to create a zvol to use as an iSCSI device extent.
- Change Permissions: allows you to edit the volume's user, group, Unix rwx permissions, type of ACL, and to enable recursive permissions on the volume's subdirectories.
- Create Snapshot: allows you to configure the snapshot's name and whether or not it is recursive before manually creating a one-time snapshot. If you wish to schedule the regular creation of snapshots, instead create a periodic snapshot task.
- Volume Status: as seen in the example in Figure 6.3k, this screen shows the device name and status of each disk in the ZFS pool as well as any read, write, or checksum errors. For each device, buttons are provided to edit the device's options (shown in Figure 6.3l), replace the device, or to offline the device.
Figure 6.3k: Volume Status
If you click a disk's Edit button in Volume Status, you will see the screen shown in Figure 6.3l:
Figure 6.3l: Editing a Disk
Table 6.3d summarizes the configurable options:
Table 6.3d: Disk Options

|Setting||Value||Description|
|Name||string||read-only value showing FreeBSD device name for disk|
|Serial||string||read-only value showing the disk's serial number|
|HDD Standby||drop-down menu||indicates the time of inactivity (in minutes) before the drive enters standby mode in order to conserve energy|
|Advanced Power Management||drop-down menu||default is Disabled, can select a power management profile from the menu|
|Acoustic Level||drop-down menu||default is Disabled; can be modified for disks that understand AAM|
|Enable S.M.A.R.T.||checkbox||enabled by default if the disk supports S.M.A.R.T.|
|S.M.A.R.T. extra options||string||smartctl(8) options|
NOTE: versions of FreeNAS® prior to 8.3.1 required a reboot in order to apply changes to the HDD Standby, Advanced Power Management, and Acoustic Level settings. As of 8.3.1, changes to these settings are applied immediately.
A ZFS dataset has only five icons, as the Scrub Volume, Create ZFS Volume, and Volume Status buttons apply only to volumes. For a dataset, the Detach Volume button is replaced with the Destroy Dataset button. If you click the Destroy Dataset button, the browser will turn red to indicate that this is a destructive action. The pop-up warning message will warn that destroying the dataset will delete all of the files and snapshots of that dataset.
Key Management for Encrypted Volumes
If you check the "Enable full disk encryption" box during the creation of a ZFS volume, five encryption icons will be added to the icons that are typically seen when viewing a volume. An example is seen in Figure 6.3m.
Figure 6.3m: Encryption Icons Associated with an Encrypted ZFS Volume
These icons are used to:
Create Passphrase: click this icon to set and confirm the passphrase associated with the GELI encryption key. Remember this passphrase as you can not re-import an encrypted volume without it. In other words, if you forget the passphrase, it is possible for the data on the volume to become inaccessible. An example would be a failed USB stick that requires a new installation on a new USB stick and a re-import of the existing pool, or the physical removal of disks when moving from an older hardware system to a new system. Protect this passphrase, as anyone who knows it could re-import your encrypted volume, thwarting the reason for encrypting the disks in the first place.
When you click this icon, a red warning is displayed: "Remember to add a new recovery key as this action invalidates the previous recovery key." Setting a passphrase invalidates the existing recovery key, so once you set the passphrase, immediately click the "Add recovery key" button to create a new one. Once the passphrase is set, the name of this icon will change to Change Passphrase.
Download Key: click this icon to download a backup copy of the GELI encryption key. Since the GELI encryption key is separate from the FreeNAS® configuration database, it is highly recommended to make a backup of the key. If the key is ever lost or destroyed and there is no backup key, the data on the disks is inaccessible.
Encryption Re-key: generates a new GELI encryption key. This requires the passphrase for the current key. Typically this is only performed when the administrator suspects that the current key may be compromised.
Add recovery key: generates a new recovery key and prompts for a location to download a backup copy of the recovery key. This recovery key can be used if the passphrase is forgotten. Always immediately add a recovery key whenever the passphrase is changed.
Remove recovery key: typically this is only performed when the administrator suspects that the current recovery key may be compromised. Immediately create a new passphrase and recovery key afterwards.
Viewing Disks

Storage → Volumes → View Disks allows you to view all of the disks recognized by the FreeNAS® system. An example is shown in Figure 6.3n.
Figure 6.3n: Viewing Disks
For each device, the current configuration of the options described in Table 6.3d is displayed. Click a disk's Edit button to change its configuration.
The Wipe button is used to blank a disk and will provide a progress bar of the wipe's status. Use this option before discarding a disk.
NOTE: to determine the serial number of a disk when it is not displayed in this screen, use the smartctl command within Shell. For example, to determine the serial number of disk ada0, type smartctl -a /dev/ada0 | grep Serial.
Setting Permissions

Setting permissions is an important aspect of configuring volumes. The graphical administrative interface is meant to set the initial permissions for a volume or dataset in order to make it available as a share. Once a share is available, the client operating system should be used to fine-tune the permissions of the files and directories that are created by the client.
The sections in Sharing contain configuration examples for several types of permission scenarios. This section provides an overview of the screen that is used to set permissions.
Once a volume or dataset is created, it will be listed by its mount point name in Storage → Volumes → View Volumes. If you click the Change Permissions icon for a specific volume/dataset, you will see the screen shown in Figure 6.3o. Table 6.3e summarizes the options in this screen.
Figure 6.3o: Changing Permissions on a Volume or Dataset
Table 6.3e: Options When Changing Permissions

|Setting||Value||Description|
|Owner (user)||drop-down menu||user to control the volume/dataset; users which were manually created or imported from Active Directory or LDAP will appear in drop-down menu|
|Owner (group)||drop-down menu||group to control the volume/dataset; groups which were manually created or imported from Active Directory or LDAP will appear in drop-down|
|Mode||checkboxes||check the desired Unix permissions for user, group, and other|
|Type of ACL||bullet selection||Unix and Windows ACLs are mutually exclusive, this means that you must select the correct type of ACL to match the share; see the paragraph below the table for more details|
|Set permission recursively||checkbox||if checked, permissions will also apply to subdirectories of the volume or dataset; if data already exists on the volume/dataset, it is recommended to instead change the permissions recursively on the client side to prevent a performance lag on the FreeNAS® system|
When in doubt, or if your network contains a mix of operating systems, select Unix ACLs as all clients understand them. Windows ACLs are appropriate when the network contains only Windows clients and are the preferred option within an Active Directory domain. Windows ACLs provide a superset of the permissions offered by Unix ACLs. While Windows clients also understand Unix ACLs, they will not benefit from the extra permissions provided by Active Directory and Windows ACLs when Unix ACLs are used.
NOTE: if you change your mind about the type of ACL, you do not have to recreate the volume. That is, existing data is not lost if the type of ACL is changed. However, if you change from Windows ACLs to Unix ACLs, the extended permissions provided by Windows ACLs will be removed from the existing files.
Viewing Multipaths

FreeNAS® uses gmultipath(8) to provide multipath I/O support on systems containing hardware that is capable of multipath. An example would be a dual SAS expander backplane in the chassis or an external JBOD.
Multipath hardware adds fault tolerance to a NAS as the data is still available even if one disk I/O path has a failure.
FreeNAS® automatically detects active/active and active/passive multipath-capable hardware. Any multipath-capable devices that are detected will be placed in multipath units with the parent devices hidden. The configuration will be displayed in Storage → Volumes → View Multipaths, as seen in the example in Figure 6.3p. Note that this option will not be displayed in the Storage → Volumes tree on systems that do not contain multipath-capable hardware.
Figure 6.3p: Viewing Multipaths
Figure 6.3p provides an example of a system with a SAS ZIL and a SAS hard drive. The ZIL device is capable of active/active writes, whereas the hard drive is capable of active/read.
Replacing a Failed Drive or ZIL Device
If you are using any form of redundant RAID, you should replace a failed drive as soon as possible to repair the degraded state of the RAID. Depending upon the capability of your hardware, you may or may not need to reboot in order to replace the disk. AHCI capable hardware does not require a reboot.
NOTE: a stripe (RAID0) does not provide redundancy. If you lose a disk in a stripe, the data on the stripe is lost.
Before physically removing the failed drive or ZIL device, go to Storage → Volumes → View Volumes → Volume Status and locate the failed disk. Once you have located the failed disk in the GUI, perform the following steps:
1. If the disk is formatted with ZFS, click the disk's Offline button in order to change its status to OFFLINE. This step is needed to properly remove the device from the ZFS pool and to prevent swap issues. If your hardware supports hot-pluggable disks, click the disk's Offline button, pull the disk, then skip to step 3.
NOTE: if the process of changing the disk's status to OFFLINE fails with a "disk offline failed - no valid replicas" message, you will need to scrub the ZFS volume first using its Scrub Volume button in Storage → Volumes → View Volumes. Once the scrub completes, try to Offline the disk again before proceeding.
2. If the hardware is not AHCI capable, shut down the system in order to physically replace the disk. When finished, return to the GUI and locate the OFFLINE disk.
3. Once the disk is showing as OFFLINE, click the disk's Replace button. Select the replacement disk from the drop-down menu and click the Replace Disk button. If the disk is being added to a ZFS pool, it will start to resilver. You can use the zpool status command in Shell to monitor the status of the resilvering.
NOTE: if the ZFS volume is encrypted, you will need to input the passphrase in order to offline the disk.
4. If the replaced disk continues to be listed after resilvering is complete, use the Detach button to remove the disk from the list.
In the example shown in Figure 6.3q, failed disk ada0 is being replaced by disk ada3.
Figure 6.3q: Replacing a Failed Disk
Replacing Drives to Grow a ZFS Pool
The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs using Volume Manager as additional capacity is needed.
However, this is not an option if you do not have open drive ports or the ability to add a SAS/SATA HBA card. In this case, you can replace one disk at a time with a larger disk, wait for the resilvering process that incorporates the new disk into the pool to complete, then repeat with another disk until all of the disks have been replaced. This process is slow and places the system in a degraded state; since a failure at this point could be disastrous, do not attempt this method unless the system has a reliable backup.
NOTE: this method requires the ZFS property autoexpand. This property became available starting with FreeNAS® version 8.2.0. If you are running an earlier version of FreeNAS®, upgrade before attempting this method.
Check and verify that the autoexpand property is enabled before attempting to grow the pool. If it is not, the pool will not recognize that the disk capacity has increased. By default, this property is enabled in FreeNAS® version 8.3.1. To verify the property, use Shell. This example checks the ZFS volume named Vol1:
zpool get all Vol1
NAME  PROPERTY       VALUE                  SOURCE
Vol1  size           4.53T                  -
Vol1  capacity       31%                    -
Vol1  altroot        /mnt                   local
Vol1  health         ONLINE                 -
Vol1  guid           8068631824452460057    default
Vol1  version        28                     default
Vol1  bootfs         -                      default
Vol1  delegation     on                     default
Vol1  autoreplace    off                    default
Vol1  cachefile      /data/zfs/zpool.cache  local
Vol1  failmode       wait                   default
Vol1  listsnapshots  off                    default
Vol1  autoexpand     on                     local
Vol1  dedupditto     0                      default
Vol1  dedupratio     1.00x                  -
Vol1  free           3.12T                  -
Vol1  allocated      1.41T                  -
Vol1  readonly       off                    -
Vol1  comment        -                      default
If autoexpansion is not enabled, enable it by specifying the name of the ZFS volume:
zpool set autoexpand=on Vol1
Verify that autoexpand is now enabled by repeating zpool get all Vol1.
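Alternatively, query just the property of interest instead of the full list:

zpool get autoexpand Vol1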
You are now ready to replace one drive with a larger drive using the instructions in Replacing a Failed Drive or ZIL Device.
Replace one drive at a time and wait for the resilver process to complete on the replaced drive before replacing the next drive. Once all the drives are replaced and the resilver completes, you should see the added space in the pool.
You can view the status of the resilver process by running zpool status Vol1.
Enabling ZFS Pool Expansion After Drive Replacement
It is recommended to enable the autoexpand property before you start replacing drives. If the property is not enabled before replacing some or all of the drives, extra configuration is needed to inform ZFS of the expanded capacity.
Verify that autoexpand is set as described in the previous section. Then, bring each of the drives back online with the following command, replacing the volume name and GPT ID for each disk in the ZFS pool:
zpool online -e Vol1 gptid/xxx
Online one drive at a time and check the status using the example below. If a drive starts to resilver, wait for the resilver to complete before proceeding to online the next drive.
To find the gptid information for the drives, use glabel status or zpool status [Pool_Name] which will also show you if any drives are failed or in the process of being resilvered:
zpool status Vol1
  pool: Vol1
 state: ONLINE
  scan: scrub repaired 0 in 16h24m with 0 errors on Sun Mar 10 17:24:20 2013
config:

        NAME                                            STATE   READ WRITE CKSUM
        Vol1                                            ONLINE     0     0     0
          raidz1-0                                      ONLINE     0     0     0
            gptid/d5ed48a4-634a-11e2-963c-00e081740bfe  ONLINE     0     0     0
            gptid/03121538-62d9-11e2-99bd-00e081740bfe  ONLINE     0     0     0
            gptid/252754e1-6266-11e2-8088-00e081740bfe  ONLINE     0     0     0
            gptid/9092045a-601d-11e2-892e-00e081740bfe  ONLINE     0     0     0
            gptid/670e35bc-5f9a-11e2-92ca-00e081740bfe  ONLINE     0     0     0

errors: No known data errors
After onlining all of the disks, type zpool status to see if the drives start to resilver. If this happens, wait for the resilvering process to complete.
Next, export and then import the pool:
zpool export Vol1
zpool import -R /mnt Vol1
Once the import completes, all of the drive space should be available. Verify that the increased size is recognized:
zpool list Vol1
NAME  SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
Vol1  9.06T  1.41T  7.24T  31%  1.00x  ONLINE  /mnt
Splitting a Mirrored ZFS Storage Pool
ZFSv28 provides the ability to split a mirrored storage pool, which detaches one or more disks in the original ZFS volume in order to create an identical ZFS volume on another system.
NOTE: zpool split only works on mirrored ZFS volumes.
In this example, a ZFS mirror named test contains three drives:
zpool status
  pool: test
 state: ONLINE
  scan: resilvered 568K in 0h0m with 0 errors on Wed Jul 6 16:10:58 2011
config:

        NAME        STATE   READ WRITE CKSUM
        test        ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            da1     ONLINE     0     0     0
            da0     ONLINE     0     0     0
            da4     ONLINE     0     0     0
The following command splits a new ZFS volume named migrant, containing one disk (da4), from the existing three disk mirror named test. Disks da0 and da1 remain in test.
zpool split test migrant da4
At this point, da4 can be physically removed and installed in a new system, as the new pool is automatically exported when it is created. Once the disk is physically installed, import the pool on the new system:
zpool import migrant
This makes the ZFS volume migrant available with a single disk. Be aware that properties come along with the clone, so the new pool will be mounted where the old pool was mounted if the mountpoint property was set on the original pool.
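If the inherited mountpoint is a problem on the destination system, the pool can instead be imported under an alternate root, mirroring the -R usage shown earlier in this section:

zpool import -R /mnt migrant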
Verify the status of the new pool:
zpool status
  pool: migrant
 state: ONLINE
  scan: resilvered 568K in 0h0m with 0 errors on Wed Jul 6 16:10:58 2011
config:

        NAME     STATE   READ WRITE CKSUM
        migrant  ONLINE     0     0     0
          da4    ONLINE     0     0     0

errors: No known data errors
On the original system, the status now looks like this:
zpool status
  pool: test
 state: ONLINE
  scan: resilvered 568K in 0h0m with 0 errors on Wed Jul 6 16:10:58 2011
config:

        NAME        STATE   READ WRITE CKSUM
        test        ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            da1     ONLINE     0     0     0
            da0     ONLINE     0     0     0

errors: No known data errors
At this point, it is recommended to add disks to create a full mirror set. This example adds two disks named da2 and da3:
zpool attach migrant da4 da2
zpool attach migrant da4 da3
The migrant volume now looks like this:
zpool status
  pool: migrant
 state: ONLINE
  scan: resilvered 572K in 0h0m with 0 errors on Wed Jul 6 16:43:27 2011
config:

        NAME        STATE   READ WRITE CKSUM
        migrant     ONLINE     0     0     0
          mirror-0  ONLINE     0     0     0
            da4     ONLINE     0     0     0
            da2     ONLINE     0     0     0
            da3     ONLINE     0     0     0
Now that the new system has been cloned, you can detach da4 and install it back to the original system. Before physically removing the disk, run this command on the new system:
zpool detach migrant da4
Once the disk is physically re-installed, run this command on the original system:
zpool attach test da0 da4
Should you ever need to create a new clone, remember to remove the old clone first:
zpool destroy migrant