Volumes

Since the storage disks are separate from the FreeNAS® operating system, you do not actually have a NAS (network-attached storage) system until you configure your disks into at least one volume. The FreeNAS® graphical interface supports the creation of both UFS and ZFS volumes. ZFS volumes are recommended to get the most out of your FreeNAS® system.

NOTE: in ZFS terminology, the storage that is managed by ZFS is referred to as a pool. When configuring the ZFS pool using the FreeNAS® graphical interface, the term volume is used to refer to either a UFS volume or a ZFS pool.

Proper storage design is important for any NAS. It is recommended that you read through this entire chapter first, before configuring your storage disks, so that you are aware of all of the possible features, know which ones will benefit your setup most, and are aware of any caveats or hardware restrictions.

Auto Importing Volumes

If you click Storage → Volumes → Auto Import Volume, you can configure FreeNAS® to use an existing software UFS or ZFS RAID volume. This action is typically performed when an existing FreeNAS® system is re-installed (rather than upgraded). Since the operating system is separate from the disks, a new installation does not affect the data on the disks; however, the new operating system needs to be configured to use the existing volume.

Supported volumes are UFS stripes (RAID0), UFS mirrors (RAID1), UFS RAID3, as well as existing ZFS pools. UFS RAID5 is not supported as it is an unmaintained summer of code project which was never integrated into FreeBSD.

Beginning with version 8.3.1, the import of existing GELI-encrypted ZFS pools is also supported. However, the pool must be decrypted before it can be imported.

Figure 6.3a shows the initial pop-up window that appears when you select to auto import a volume.

Figure 6.3a: Initial Auto Import Volume Screen

If you are importing a UFS RAID or an existing, unencrypted ZFS pool, select "No: Skip to import" to access the screen shown in Figure 6.3b.

Figure 6.3b: Auto Importing a Non-Encrypted Volume

Existing software RAID volumes should be available for selection from the drop-down menu. In the example shown in Figure 6.3b, the FreeNAS® system has detected an existing, unencrypted ZFS pool. Once the volume is selected, click the OK button to import the volume.

FreeNAS® will not import a dirty volume. If an existing UFS RAID does not show in the drop-down menu, you will need to fsck the volume.

If an existing ZFS pool does not show in the drop-down menu, run zpool import from Shell to import the pool.
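
For example, assuming Shell access, a dirty UFS volume can be checked and an existing ZFS pool imported along these lines; the device name /dev/ada1p2 and the pool name tank are illustrative placeholders:

# check and repair a dirty UFS volume
fsck -t ufs /dev/ada1p2
# list ZFS pools that are available for import, then import one by name
zpool import
zpool import tank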

If you plan to physically install ZFS formatted disks from another system, be sure to export the drives on that system to prevent an "in use by another machine" error during the import.
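
The export itself is a single command; a minimal sketch, run on the source system and assuming a pool named tank:

# run on the source system before pulling the disks
zpool export tank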

If you suspect that your hardware is not being detected, run camcontrol devlist from Shell. If the disk does not appear in the output, check to see if the controller driver is supported or if it needs to be loaded by creating a tunable.
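
For example, the following standard FreeBSD commands list the detected disks and the currently loaded kernel modules:

# list disks detected by the CAM subsystem
camcontrol devlist
# list loaded kernel modules to verify whether the controller driver is present
kldstat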

Auto Importing a GELI-Encrypted ZFS Pool

If you are importing an existing GELI-encrypted ZFS pool, you must decrypt the disks before importing the pool. In Figure 6.3a, select “Yes: Decrypt disks” to access the screen shown in Figure 6.3c.

Figure 6.3c: Decrypting the Disks Before Importing the ZFS Pool

Select the disks in the pool, browse to the location of the saved encryption key, input the passphrase associated with the key, then click OK to decrypt the disks.

NOTE: the encryption key is required to decrypt the pool. If the pool can not be decrypted, it can not be re-imported after a failed upgrade or lost configuration. This means that it is very important to save a copy of the key and to remember the passphrase that was configured for the key. The View Volumes screen is used to manage the keys for encrypted volumes.

Once the pool is decrypted, it should appear in the drop-down menu of Figure 6.3b. Click the OK button to finish the volume import.

Importing Volumes

The Volume → Import Volume screen, shown in Figure 6.3d, is used to import a single disk or partition that has been formatted with a supported filesystem. FreeNAS® supports the import of disks that have been formatted with UFS, NTFS, MSDOS, or EXT2. The import is meant to be a temporary measure in order to copy the data from a disk to a volume. Only one disk can be imported at a time.

Figure 6.3d: Importing a Volume

Input a name for the volume, use the drop-down menu to select the disk or partition that you wish to import, and select the type of filesystem on the disk.

Before importing a disk, be aware of the following caveats:

  • FreeNAS® will not import a dirty filesystem. If a supported filesystem does not show in the drop-down menu, you will need to fsck or run a disk check on the filesystem.
  • FreeNAS® can not import dynamic NTFS volumes at this time. A future version of FreeBSD may address this issue.
  • If an NTFS volume will not import, try ejecting the volume safely from a Windows system. This can fix the journal files that are required to mount the drive.

UFS Volume Manager

While the UFS filesystem is supported, it is not recommended as it does not provide any ZFS features such as compression, encryption, deduplication, copy-on-write, lightweight snapshots, or the ability to provide early detection and correction of corrupt data. If you are using UFS as a temporary solution until you can afford better hardware, note that you will have to destroy your existing UFS volume in order to create a ZFS pool, then restore your data from backup.

NOTE: it is not recommended to create a UFS volume larger than 5TB as it will be inefficient to fsck, causing long delays at system boot if the system was not shut down cleanly.

To format your disks with UFS, go to Storage → Volumes → UFS Volume Manager (legacy) which will open the screen shown in Figure 6.3e.

Figure 6.3e: Creating a UFS Volume

Table 6.3a summarizes the available options.

Table 6.3a: Options When Creating a UFS Volume

Setting | Value | Description
Volume name | string | mandatory; it is recommended to choose a name that will stand out in the logs (e.g. not data or freenas)
Member disks | selection | use the mouse to select the disk(s) to be used; to select multiple disks, highlight the first disk, then hold the Shift key as you highlight the last disk
Specify custom path | checkbox | optional; useful for creating a /var for persistent log storage
Path | string | only available when Specify custom path is checked; must be the full path of the volume (e.g. /mnt/var); if no path is provided, the Volume name is appended to /mnt

The Add Volume button warns that creating a volume destroys all existing data on selected disk(s). In other words, creating storage using UFS Volume Manager is a destructive action that reformats the selected disks. If your intent is to not overwrite the data on an existing volume, see if the volume format is supported by the auto-import or import actions. If so, perform the supported action instead. If the current storage format is not supported, you will need to backup the data to an external media, format the disks, then restore the data to the new volume.

ZFS Volume Manager

If you have unformatted disks or wish to overwrite the filesystem (and data) on your disks, use the ZFS Volume Manager to format the desired disks into a ZFS pool.

If you are new to RAID concepts or would like an overview of the differences between hardware RAID and ZFS RAIDZ*, skim through the section on Hardware Recommendations before using ZFS Volume Manager.

If you click on Storage → Volumes → ZFS Volume Manager, you will see a screen similar to the example shown in Figure 6.3f.

Figure 6.3f: Creating a ZFS Pool Using Volume Manager

Table 6.3b summarizes the configuration options of this screen.

Table 6.3b: Options When Creating a ZFS Pool

Setting | Value | Description
Volume name | string | ZFS volumes must conform to these naming conventions; it is recommended to choose a name that will stand out in the logs (e.g. not data or freenas)
Volume to extend | drop-down menu | requires an existing ZFS pool to extend; see Extending a ZFS Volume for instructions
Encryption | checkbox | read the section on Encryption before choosing to use encryption
Available disks | display | displays the size of available disks; hover over "show" to list the available device names
Volume layout | drag and drop | click and drag the icon to select the desired number of disks
Add Extra Device | button | select to configure multiple pools or to add log or cache devices during pool creation

To configure the pool, drag the slider to select the desired number of disks. The ZFS Volume Manager will automatically select the optimal configuration and display the resulting storage capacity, which takes swap into account. To change the layout or the number of disks, drag the slider to the desired volume layout. The drop-down menu showing the optimal configuration can also be clicked to change the configuration, though the GUI will turn red if the selected configuration is not recommended.

NOTE: for performance and capacity reasons, this screen will not allow you to create a volume from disks of differing sizes. While it is not recommended, it is possible to create a volume in this situation by using the “Manual setup” button and following the instructions in Manual Volume Creation.

ZFS Volume Manager will allow you to save a non-optimal configuration. It will still work, but will perform less efficiently than an optimal configuration. However, the GUI will not allow you to select a configuration if the number of disks selected is not enough to create it. Click the tool tip icon to access a link to this documentation.

The Add Volume button warns that creating a volume destroys any existing data on the selected disk(s). In other words, creating a new volume reformats the selected disks. If your intent is to not overwrite the data on an existing volume, see if the volume format is supported by the auto-import or import actions. If so, perform the supported action instead. If the current storage format is not supported, you will need to backup the data to an external media, format the disks, then restore the data to the new volume.

The ZFS Volume Manager will automatically select the optimal layout for the new pool, depending upon the number of disks selected. The following formats are supported:

  • Stripe: requires at least one disk
  • Mirror: requires at least two disks
  • RAIDZ1: requires at least three disks
  • RAIDZ2: requires at least four disks
  • RAIDZ3: requires at least five disks
  • log device: add a dedicated log device (slog)
  • cache device: add a dedicated cache device

If you have more than five disks and are using ZFS, consider the number of disks to use for best performance and scalability. An overview of the various RAID levels and recommended disk group sizes can be found in the RAID Overview section. More information about log and cache devices can be found in the ZFS Overview section.

Depending upon the size and number of disks, the type of controller, and whether or not encryption is selected, creating the volume may take some time. Once the volume is created, the screen will refresh and the new volume will be listed under Storage → Volumes.

Encryption

Beginning with 8.3.1, FreeNAS® supports GELI full disk encryption when creating ZFS volumes. It is important to understand the following when considering whether or not encryption is right for your FreeNAS® system:

  • This is not the encryption method used by Oracle's ZFS v30. That version of ZFS has not been open sourced and is the property of Oracle.
  • This is full disk encryption and not per-filesystem encryption. The underlying drives are first encrypted, then the pool is created on top of the encrypted devices.
  • This type of encryption is primarily targeted at users who store sensitive data and want to retain the ability to remove disks from the pool without having to first wipe the disk's contents.
  • This design only protects data on disks that are separated from the encryption key, making it suitable for the safe disposal of disks. As long as both the key and the disks are intact, the system is vulnerable to being decrypted. The key should be protected by a strong passphrase and any backups of the key should be securely stored.
  • On the other hand, if the key is lost, the data on the disks is inaccessible. Always backup the key!

IMPORTANT NOTE: the per-drive GELI master keys are not backed up along with the user keys. If a bit error occurs in the last sector of an encrypted disk, the data on that disk may be completely lost. Until this issue is resolved, it is important to read this forum post which explains how to back up your master keys manually. This forum post gives an in-depth explanation of how the various key types are used by GELI. To track future progress on this issue, refer to this bug report.

  • The encryption key is per ZFS volume (pool). If you create multiple pools, each pool has its own encryption key.
  • If the system has many disks, there will be a performance hit if the CPU does not support AES-NI or if no crypto hardware is installed. Without hardware acceleration, expect about a 20% performance hit for a single disk, with the degradation increasing as more disks are added. As data is written, it is automatically encrypted and as data is read, it is decrypted on the fly. If the processor does support the AES-NI instruction set, there should be very little, if any, degradation in performance when using encryption. This forum post compares the performance of various CPUs; a quick way to check for AES-NI support is shown after this list.
  • Data in the ARC cache and the contents of RAM are unencrypted.
  • Swap is always encrypted, even on unencrypted volumes.
  • There is no way to convert an existing, unencrypted volume. Instead, the data must be backed up, the existing pool must be destroyed, a new encrypted volume must be created, and the backup restored to the new volume.
  • Hybrid pools are not supported. In other words, newly created vdevs must match the existing encryption scheme. When extending a volume, Volume Manager will automatically encrypt the new vdev being added to the existing encrypted pool.
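
As noted above, AES-NI support makes a substantial difference to encryption performance. A quick way to check for it from Shell on a FreeBSD-based system such as FreeNAS® is to search the boot messages for the CPU feature flag; a minimal sketch:

# the AESNI flag appears in the CPU Features2 line if the processor supports it
grep AESNI /var/run/dmesg.boot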

NOTE: the encryption facility used by FreeNAS® is designed to protect against physical theft of the disks. It is not designed to protect against unauthorized software access. Ensure that only authorized users have access to the administrative GUI and that proper permissions are set on shares if sensitive data is stored on the system.

Creating an Encrypted Volume

To create an encrypted volume, check the "Encryption" box shown in Figure 6.3f. Input the volume name, select the disks to add to the volume, and click the Add Volume button to make the encrypted volume.

Once the volume is created, it is extremely important to set a passphrase on the key, make a backup of the key, and create a recovery key. Without these, it is impossible to re-import the disks at a later time.

To perform these tasks, go to Storage → Volumes → View Volumes. For an encrypted volume, this screen includes the encryption icons shown in Figure 6.3p.

To set a passphrase on the key, click the volume name and then the "Create Passphrase" button (the key shaped icon in Figure 6.3p). You will be prompted to input the password used to access the FreeNAS® administrative GUI, and then to input and repeat the desired passphrase. Unlike a password, a passphrase can contain spaces and is typically a series of words. A good passphrase is easy to remember (like the line to a song or piece of literature) but hard to guess (people who know you should not be able to guess the passphrase).

When you set the passphrase, a warning message will remind you to create a new recovery key, as a new passphrase requires a new recovery key. This way, if the passphrase is forgotten, the associated recovery key can be used instead. To create the recovery key, click the "Add recovery key" button (the second-to-last key icon in Figure 6.3p). This screen will prompt you to input the password used to access the FreeNAS® administrative GUI and then to select the directory in which to save the key. Note that the recovery key is saved to the client system, not on the FreeNAS® system.

Finally, download a copy of the encryption key using the "Download key" button (the key icon with a down arrow in Figure 6.3p). Again, the encryption key is saved to the client system, not on the FreeNAS® system. You will be prompted to input the password used to access the FreeNAS® administrative GUI before selecting the directory in which to store the key.

The passphrase, recovery key, and encryption key need to be protected. Do not reveal the passphrase to others. On the system containing the downloaded keys, ensure that the system and its backups are protected. Anyone who has the keys has the ability to re-import the disks should they be discarded or stolen.

Manual Volume Creation

The "Manual Setup" button shown in Figure 6.3f can be used to create a non-optimal ZFS volume. While this is not recommended, it can, for example, be used to create a volume containing disks of different sizes or to put more than the recommended number of disks into a vdev.

NOTE: when using disks of differing sizes, the volume is limited by the size of the smallest disk. When using more disks than are recommended for a vdev, you increase resilvering time and the risk that more than the allowable number of disks will fail before a resilver completes. For these reasons, it is recommended to instead let the ZFS Volume Manager create an optimal pool for you, as described in ZFS Volume Manager, using same-size disks.

Figure 6.3g shows the "Manual Setup" screen and Table 6.3c summarizes the available options.

Figure 6.3g: Creating a Non-Optimal ZFS Volume

Table 6.3c: Manual Setup Options

Setting | Value | Description
Volume name | string | ZFS volumes must conform to these naming conventions; it is recommended to choose a name that will stand out in the logs (e.g. not data or freenas)
Encryption | checkbox | read the section on Encryption before choosing to use encryption
Member disks | list | highlight the desired number of disks from the list of available disks
Deduplication | drop-down menu | choices are Off, Verify, and On; carefully consider the section on Deduplication before changing this setting
ZFS Extra | bullet selection | used to specify whether the disk is used for storage ("None"), a log device, a cache device, or a spare

Extending a ZFS Volume

The “Volume to extend” drop-down menu in Storage → Volumes → ZFS Volume Manager, shown in Figure 6.3h, can be used to add additional disks to an existing ZFS volume. This drop-down menu will be empty if no ZFS volume exists yet.

Figure 6.3h: Volume to Extend Field

Extend1f.png

NOTE: if the existing volume is encrypted, a warning message will remind you that the operation of extending a volume will reset the passphrase and recovery key. After extending the volume, you should immediately recreate both.

Once an existing volume has been selected from the drop-down menu, drag and drop the desired disk(s) and select the desired volume layout. For example, you can:

  • select an SSD or disk with a volume layout of Log (ZIL) to add a log device to the ZFS pool. Selecting 2 SSDs or disks will mirror the log device.
  • select an SSD or disk with a volume layout of Cache (L2ARC) to add a cache device to the ZFS pool.
  • add additional disks to increase the capacity of the ZFS pool. The caveats to doing this are described below.

When adding disks to increase the capacity of a volume, ZFS supports the addition of virtual devices, known as vdevs, to an existing ZFS pool. A vdev can be a single disk, a stripe, a mirror, a RAIDZ1, RAIDZ2, or a RAIDZ3. Once a vdev is created, you can not add more drives to that vdev; however, you can stripe a new vdev (and its disks) with the same type of existing vdev in order to increase the overall size of the ZFS pool. In other words, when you extend a ZFS volume, you are really striping similar vdevs. Here are some examples:

  • to extend a ZFS stripe, add one or more disks. Since there is no redundancy, you do not have to add the same number of disks as the existing stripe.
  • to extend a ZFS mirror, add the same number of drives. The resulting striped mirror is a RAID 10. For example, if you have 10 drives, you could start by creating a mirror of two drives, extending this mirror by creating another mirror of two drives, and repeating three more times until all 10 drives have been added.
  • to extend a three drive RAIDZ1, add three additional drives. The result is a RAIDZ+0, similar to RAID 50 on a hardware controller.
  • to extend a RAIDZ2, add a minimum of four additional drives. The result is a RAIDZ2+0, similar to RAID 60 on a hardware controller.

If you try to add an incorrect number of disks to the existing vdev, an error message will appear, indicating the number of disks that are needed. You will need to select the correct number of disks in order to continue.

Creating ZFS Datasets

An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per dataset basis, allowing more granular control over access to storage data. A dataset is similar to a folder in that you can set permissions; it is also similar to a filesystem in that you can set properties such as quotas and compression as well as create snapshots.

NOTE: ZFS provides thick provisioning using reserved space and thin provisioning using quotas.
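
A hedged illustration of the two approaches from Shell, assuming a dataset named tank/data (both the pool and dataset names are placeholders):

# cap the dataset at 20 GB; space is only consumed as data is written (thin)
zfs set quota=20G tank/data
# guarantee 20 GB to the dataset up front (thick)
zfs set reservation=20G tank/data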

If you select an existing ZFS volume → Create ZFS Dataset, you will see the screen shown in Figure 6.3i.

Once a dataset is created, you can click on that dataset and select Create ZFS Dataset, thus creating a nested dataset, or a dataset within a dataset. You can also create a zvol within a dataset. When creating datasets, double-check that you are using the Create ZFS Dataset option for the intended volume or dataset. If you get confused when creating a dataset on a volume, click all existing datasets to close them; the remaining Create ZFS Dataset option applies to the volume.

Figure 6.3i: Creating a ZFS Dataset

Table 6.3d summarizes the options available when creating a ZFS dataset. Some settings are only available in Advanced Mode. To see these settings, either click the Advanced Mode button or configure the system to always display these settings by checking the box “Show advanced fields by default” in System → Settings → Advanced. These attributes can also be changed after dataset creation in Storage → Volumes → View Volumes.

Table 6.3d: ZFS Dataset Options

Setting | Value | Description
Dataset Name | string | mandatory
Compression Level | drop-down menu | see Compression for a comparison of the available algorithms
Enable atime | inherit, on, or off | controls whether the access time for files is updated when they are read; setting this property to off avoids producing log traffic when reading files and can result in significant performance gains
Quota for this dataset | integer | only available in Advanced Mode; default of 0 is off; can specify M (megabyte), G (gigabyte), or T (terabyte), as in 20G for 20 GB; a decimal point can be included (e.g. 2.8G)
Quota for this dataset and all children | integer | only available in Advanced Mode; default of 0 is off; can specify M (megabyte), G (gigabyte), or T (terabyte), as in 20G for 20 GB
Reserved space for this dataset | integer | only available in Advanced Mode; default of 0 is unlimited (aside from hardware limits); can specify M (megabyte), G (gigabyte), or T (terabyte), as in 20G for 20 GB
Reserved space for this dataset and all children | integer | only available in Advanced Mode; default of 0 is unlimited (aside from hardware limits); can specify M (megabyte), G (gigabyte), or T (terabyte), as in 20G for 20 GB
ZFS Deduplication | drop-down menu | read the section on Deduplication before changing this setting
Record Size | drop-down menu | only available in Advanced Mode; while ZFS dynamically adapts the record size to the data being written, data with a fixed record size (e.g. a database) may perform better with a matching Record Size

Deduplication

The ZFS Deduplication option warns that enabling dedup may have drastic performance implications and that compression should be used instead. Before checking the deduplication box, read the section on deduplication in the ZFS Overview first. This article provides a good description of the value vs. cost considerations for deduplication.

Unless you have a lot of RAM and a lot of duplicate data, do not change the default deduplication setting of "Off". The dedup tables used during deduplication need about 8 GB of RAM per 1 TB of data to be deduplicated; for example, deduplicating 4 TB of data would require roughly 32 GB of RAM for the dedup tables alone. For performance reasons, consider using compression rather than turning this option on.

If deduplication is changed to On, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify, ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical. Since hash collisions are extremely rare, verify is usually not worth the performance hit.

NOTE: once deduplication is enabled, the only way to disable it is to use the zfs set dedup=off dataset_name command from Shell. However, data that is already stored as deduplicated remains deduplicated; only data written after the property change is stored without deduplication. The only way to remove existing deduplicated data is to copy all of the data off of the dataset, set the property to off, then copy the data back in again. Alternately, create a new dataset with ZFS Deduplication left disabled, copy the data to the new dataset, and destroy the original dataset.
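
For example, assuming a dataset named tank/data (an illustrative name), the current setting can be checked and deduplication disabled from Shell as follows:

# show the current dedup setting for the dataset
zfs get dedup tank/data
# disable dedup for newly written data; existing data stays deduplicated
zfs set dedup=off tank/data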

Compression

Most media (e.g. .mp3, .mp4, .avi) is already compressed, meaning that you will increase CPU utilization for no gain if you store these files on a compressed dataset. However, if you have raw .wav rips of CDs or .vob rips of DVDs, you will see a performance gain using a compressed dataset. When selecting a compression type, you need to balance performance with the amount of compression. The following compression algorithms are supported:

  • lz4: recommended compression method as it allows compressed datasets to operate at near real-time speed.
  • gzip: varies from levels 1 to 9 where gzip fastest (level 1) gives the least compression and gzip maximum (level 9) provides the best compression but is discouraged due to its performance impact.
  • zle: fast and simple algorithm to eliminate runs of zeroes.
  • lzjb: provides decent data compression, but is considered deprecated as lz4 provides much better performance.

If you leave the default of Inherit, the dataset will inherit from the parent. Unless the parent dataset has been modified, its default compression level is lz4.

If you select Off, compression will not be used on the dataset.
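
To see how much space compression is actually saving on an existing dataset, the compression properties can be queried from Shell; here tank/data is an assumed dataset name:

# compressratio reports the achieved compression (e.g. 1.53x)
zfs get compression,compressratio tank/data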

Creating a zvol

A zvol is a feature of ZFS that creates a block device over ZFS. This allows you to use a zvol as an iSCSI device extent.

To create a zvol, select an existing ZFS volume or dataset → Create zvol which will open the screen shown in Figure 6.3j.

Figure 6.3j: Creating a zvol

The configuration options are described in Table 6.3e. Some settings are only available in Advanced Mode. To see these settings, either click the Advanced Mode button or configure the system to always display these settings by checking the box “Show advanced fields by default” in System → Settings → Advanced.

Table 6.3e: zvol Configuration Options

Setting | Value | Description
zvol Name | string | input a name for the zvol
Size for this zvol | integer | specify the size, such as 10G
Compression level | drop-down menu | default of Inherit means the zvol uses the same compression level as the pool or dataset in which it is created
Sparse volume | checkbox | used to provide thin provisioning; if this option is selected, writes will fail when the pool is low on space
Block size | integer | only available in Advanced Mode; valid sizes are powers of 2 from 512 bytes to 128 KB, with a default of 8 KB; can be set to match the block size of the filesystem that will be formatted onto the iSCSI target
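
For reference, this roughly corresponds to the zfs create -V command; a sketch assuming a 10 GB zvol named tank/myzvol (the names are placeholders):

# create a 10 GB zvol
zfs create -V 10G tank/myzvol
# the -s flag creates a sparse (thin provisioned) zvol instead
zfs create -s -V 10G tank/sparsezvol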

Viewing Disks

Storage → Volumes → View Disks allows you to view all of the disks recognized by the FreeNAS® system. An example is shown in Figure 6.3k.

Figure 6.3k: Viewing Disks

For each device, the current configuration of the options described in Table 6.3f is displayed. Click a disk's entry and then its Edit button to change its configuration.

Clicking a disk's entry will also display its Wipe button which can be used to blank a disk while providing a progress bar of the wipe's status. Use this option before discarding a disk.

NOTE: should a disk's serial number not be displayed in this screen, use the smartctl command within Shell. For example, to determine the serial number of disk ada0, type smartctl -a /dev/ada0 | grep Serial.

Viewing Volumes

If you click Storage → Volumes → View Volumes, you can view and further configure existing volumes, ZFS datasets, and zvols. The example shown in Figure 6.3l demonstrates one ZFS volume with two datasets and one zvol.

Figure 6.3l: Viewing Volumes

Buttons provide quick access to ZFS Volume Manager, UFS Volume Manager, Import Volume, Auto Import Volume, and View Disks. If the system has multipath-capable hardware, an extra View Multipaths button will be added.

If you click the entry for a ZFS volume, eight icons will appear at the bottom of the screen. In order from left to right, these icons allow you to:

1. Detach Volume: allows you to either detach a disk before removing it from the system (also known as a ZFS export) or to delete the contents of the volume, depending upon the choice you make in the pop-up screen that appears when you click this button. The pop-up message, seen in Figure 6.3m, shows the current used space, provides the check box "Mark the disks as new (destroy data)", prompts you to make sure that you really want to do this, and warns if the volume has any associated shares and asks if you wish to delete them. The browser will turn red to alert you that continuing with this action will make data inaccessible. If you do not check the box to mark the disks as new, the volume will be exported (ZFS volumes only). This means that the data is not destroyed and the volume can be re-imported at a later time. If you will be moving a ZFS formatted drive from one system to another, perform this export action first. It flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all knowledge of the pool from the system. If you do check the box to mark the disks as new, the volume and all of its data, datasets, and zvols will be destroyed and the underlying disks will be returned to their raw state.

Figure 6.3m: Detaching or Deleting a Volume

2. Scrub Volume: ZFS scrubs and how to schedule them are described in more detail in ZFS Scrubs. This button allows you to manually initiate a scrub. A scrub is I/O intensive and can negatively impact performance, meaning that you should not initiate one while the system is busy. A cancel button is provided should you need to cancel a scrub.

NOTE: if you do cancel a scrub, the next scrub will start over from the beginning, not where the cancelled scrub left off.
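
The Scrub Volume button corresponds to the zpool scrub command, which can also be run from Shell; a sketch assuming a pool named tank:

# start a scrub of the pool
zpool scrub tank
# stop (cancel) a scrub that is in progress
zpool scrub -s tank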

3. Edit ZFS Options: allows you to edit the volume's compression level, atime setting, dataset quota, and reserved space for quota. If compression is newly enabled on a volume or dataset that already contains data, existing files will not be compressed until they are modified as compression is only applied when a file is written.

4. Create ZFS Dataset: allows you to create a dataset.

5. Create zvol: allows you to create a zvol to use as an iSCSI device extent.

6. Change Permissions: allows you to edit the volume's user, group, Unix rwx permissions, type of ACL, and to enable recursive permissions on the volume's subdirectories.

7. Create Snapshot: allows you to configure the snapshot's name and whether or not it is recursive before manually creating a one-time snapshot. If you wish to schedule the regular creation of snapshots, instead create a periodic snapshot task.

8. Volume Status: as seen in the example in Figure 6.3n, this screen shows the device name and status of each disk in the ZFS pool as well as any read, write, or checksum errors. It also indicates the status of the latest ZFS scrub. If you click the entry for a device, buttons will appear to edit the device's options (shown in Figure 6.3n), offline the device, or replace the device.

Figure 6.3n: Volume Status

If you click a disk in Volume Status and click its “Edit Disk” button, you will see the screen shown in Figure 6.3o:

Figure 6.3o: Editing a Disk

Table 6.3f summarizes the configurable options:

Table 6.3f: Disk Options

Setting | Value | Description
Name | string | read-only value showing the FreeBSD device name for the disk
Serial | string | read-only value showing the disk's serial number
Description | string | optional
HDD Standby | drop-down menu | indicates the time of inactivity (in minutes) before the drive enters standby mode to conserve energy; this forum post demonstrates how to determine if a drive has spun down
Advanced Power Management | drop-down menu | default is Disabled; a power management profile can be selected from the menu
Acoustic Level | drop-down menu | default is Disabled; can be modified for disks that understand AAM
Enable S.M.A.R.T. | checkbox | enabled by default if the disk supports S.M.A.R.T.; unchecking this box disables any configured S.M.A.R.T. tests for the disk
S.M.A.R.T. extra options | string | smartctl(8) options

NOTE: versions of FreeNAS® prior to 8.3.1 required a reboot in order to apply changes to the HDD Standby, Advanced Power Management, and Acoustic Level settings. As of 8.3.1, changes to these settings are applied immediately.

A ZFS dataset has only five icons, as the Scrub Volume, Create zvol, and Volume Status buttons apply only to volumes. In a dataset, the Detach Volume button is replaced with the Destroy Dataset button. If you click the Destroy Dataset button, the browser will turn red to indicate that this is a destructive action. The pop-up warning message will warn that destroying the dataset will delete all of the files and snapshots of that dataset.

Key Management for Encrypted Volumes

If you check the "Enable full disk encryption" box during the creation of a ZFS volume, five encryption icons will be added to the icons that are typically seen when viewing a volume. An example is seen in Figure 6.3p.

Figure 6.3p: Encryption Icons Associated with an Encrypted ZFS Volume

These icons are used to:

Create/Change Passphrase: click this icon to set and confirm the passphrase associated with the GELI encryption key. Remember this passphrase as you can not re-import an encrypted volume without it. In other words, if you forget the passphrase it is possible for the data on the volume to become inaccessible. An example would be a failed USB stick that requires a new installation on a new USB stick and a re-import of the existing pool, or the physical removal of disks when moving from an older hardware system to a new system. Protect this passphrase as anyone who knows it could re-import your encrypted volume, thus thwarting the reason for encrypting the disks in the first place.

When you click this icon, a red warning is displayed: Remember to add a new recovery key as this action invalidates the previous recovery key. Setting a passphrase invalidates the existing key. Once you set the passphrase, immediately click the Add recovery key button to create a new recovery key. Once the passphrase is set, the name of this icon will change to Change Passphrase.

Download Key: click this icon to download a backup copy of the GELI encryption key. Since the GELI encryption key is separate from the FreeNAS® configuration database, it is highly recommended to make a backup of the key. If the key is ever lost or destroyed and there is no backup key, the data on the disks is inaccessible.

Encryption Re-key: generates a new GELI encryption key. Typically this is only performed when the administrator suspects that the current key may be compromised. This action also removes the current passphrase.

Add recovery key: generates a new recovery key and prompts for a location to download a backup copy of the recovery key. This recovery key can be used if the passphrase is forgotten. Always immediately add a recovery key whenever the passphrase is changed.

Remove recovery key: typically this is only performed when the administrator suspects that the current recovery key may be compromised. Immediately create a new passphrase and recovery key.

Each of these icons will prompt for the password used to access the FreeNAS® administrative GUI.

Setting Permissions

Setting permissions is an important aspect of configuring volumes. The graphical administrative interface is meant to set the initial permissions for a volume or dataset in order to make it available as a share. Once a share is available, the client operating system should be used to fine-tune the permissions of the files and directories that are created by the client.

The sections in Sharing contain configuration examples for several types of permission scenarios. This section provides an overview of the screen that is used to set permissions.

Once a volume or dataset is created, it will be listed by its mount point name in Storage → Volumes → View Volumes. If you click the Change Permissions icon for a specific volume/dataset, you will see the screen shown in Figure 6.3q. Table 6.3g summarizes the options in this screen.

Figure 6.3q: Changing Permissions on a Volume or Dataset

Table 6.3g: Options When Changing Permissions

Setting | Value | Description
Owner (user) | drop-down menu | the user to control the volume/dataset; users which were manually created or imported from Active Directory or LDAP will appear in the drop-down menu
Owner (group) | drop-down menu | the group to control the volume/dataset; groups which were manually created or imported from Active Directory or LDAP will appear in the drop-down menu
Mode | checkboxes | check the desired Unix permissions for user, group, and other
Type of ACL | bullet selection | Unix and Windows ACLs are mutually exclusive, so you must select the correct type of ACL to match the share; see the descriptions below this table for more details
Set permission recursively | checkbox | if checked, permissions will also apply to subdirectories of the volume or dataset; if data already exists on the volume/dataset, it is recommended to instead change the permissions recursively on the client side to prevent a performance lag on the FreeNAS® system

When in doubt, or if you have a mix of operating systems in your network, select Unix ACLs as all clients understand them. Windows ACLs are appropriate when the network contains only Windows clients and are the preferred option within an Active Directory domain. Windows ACLs add a superset of permissions that augment those provided by Unix ACLs. While Windows clients also understand Unix ACLs, they won't benefit from the extra permissions provided by Active Directory and Windows ACLs when Unix ACLs are used.

If you change your mind about the type of ACL, you do not have to recreate the volume. That is, existing data is not lost if the type of ACL is changed. However, if you change from Windows ACLs to Unix ACLs, the extended permissions provided by Windows ACLs will be removed from the existing files.

When you select Windows ACLs, the Mode will become greyed out as it only applies to Unix permissions. Windows ACLs are initially set to what Windows itself sets on new files and directories by default. The Windows client should then be used to fine-tune the permissions as required.

Viewing Multipaths

FreeNAS® uses gmultipath(8) to provide multipath I/O support on systems containing hardware that is capable of multipath. An example would be a dual SAS expander backplane in the chassis or an external JBOD.

Multipath hardware adds fault tolerance to a NAS as the data is still available even if one disk I/O path has a failure.

FreeNAS® automatically detects active/active and active/passive multipath-capable hardware. Any multipath-capable devices that are detected will be placed in multipath units with the parent devices hidden. The configuration will be displayed in Storage → Volumes → View Multipaths, as seen in the example in Figure 6.3r. Note that this option will not be displayed in the Storage → Volumes tree on systems that do not contain multipath-capable hardware.

Figure 6.3r: Viewing Multipaths

Figure 6.3r provides an example of a system with a SAS ZIL and a SAS hard drive. The ZIL device is capable of active/active writes, whereas the hard drive is capable of active/read.

Replacing a Failed Drive

If you are using any form of redundant RAID, you should replace a failed drive as soon as possible to repair the degraded state of the RAID. Depending upon the capability of your hardware, you may or may not need to reboot in order to replace the failed drive. AHCI capable hardware does not require a reboot.

NOTE: a stripe (RAID0) does not provide redundancy. If you lose a disk in a stripe, you will need to recreate the volume and restore the data from backup.

Before physically removing the failed device, go to Storage → Volumes → View Volumes → Volume Status and locate the failed disk. Once you have located the failed device in the GUI, perform the following steps:

1. If the disk is formatted with ZFS, click the disk's entry then its "Offline" button in order to change that disk's status to OFFLINE. This step is needed to properly remove the device from the ZFS pool and to prevent swap issues. If your hardware supports hot-pluggable disks, click the disk's "Offline" button, pull the disk, then skip to step 3. If there is no "Offline" button but only a "Replace" button, then the disk is already offlined and you can safely skip this step.

NOTE: if the process of changing the disk's status to OFFLINE fails with a "disk offline failed - no valid replicas" message, you will need to scrub the ZFS volume first using its Scrub Volume button in Storage → Volumes → View Volumes. Once the scrub completes, try to Offline the disk again before proceeding.

2. If the hardware is not AHCI capable, shut down the system in order to physically replace the disk. When finished, return to the GUI and locate the OFFLINE disk.

3. Once the disk is showing as OFFLINE, click the disk again and then click its “Replace” button. Select the replacement disk from the drop-down menu and click the “Replace Disk” button. If the disk is a member of an encrypted ZFS pool, you will be prompted to input the passphrase for the pool. Once you click the “Replace Disk” button, the ZFS pool will start to resilver. You can use the zpool status command in Shell to monitor the status of the resilvering.
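
For example, using the volume name from Figure 6.3s:

# watch the resilver progress; repeat until the scan line reports completion
zpool status volume1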

4. If the replaced disk continues to be listed after resilvering is complete, click its entry and use the Detach button to remove the disk from the list.

In the example shown in Figure 6.3s, a failed disk is being replaced by disk ada2 in the volume named volume1.

Figure 6.3s: Replacing a Failed Disk

Replacing a Failed Drive in an Encrypted Pool

If the ZFS pool is encrypted, additional steps are needed when replacing a failed drive.

First, make sure that a passphrase has been set before attempting to replace the failed drive. Then, follow steps 1 and 2 as described above. During step 3, you will be prompted to input the passphrase for the pool. Wait until the resilvering is complete.

Next, restore the encryption keys to the pool. If the following additional steps are not performed before the next reboot, you may lose access to the pool permanently.

1. Highlight the pool that contains the disk you just replaced and click the "Encryption Re-key" button in the GUI. You will need to enter the root password.

2. Highlight the pool that contains the disk you just replaced and click the "Create Passphrase" button and enter the new passphrase. You can reuse the old passphrase if desired.

3. Highlight the pool that contains the disk you just replaced and click the "Download Key" button in order to save the new encryption key. Since the old key will no longer function, any old keys can be safely discarded.

4. Highlight the pool that contains the disk you just replaced and click the "Add Recovery Key" button in order to save the new recovery key. The old recovery key will no longer function, so it can be safely discarded.

Removing a Log or Cache Device

If you have added any log or cache devices, these devices will also appear in Storage → Volumes → View Volumes → Volume Status. If you click the device, you can either use its "Replace" button to replace the device as described above, or click its "Remove" button to remove the device.

Before performing either of these operations, verify the version of ZFS running on the system by running zpool upgrade -v | more from Shell.

If the pool is running ZFSv15 and a non-mirrored log device fails, is replaced, or is removed, the pool is unrecoverable and must be recreated, with the data restored from a backup. For other ZFS versions, removing or replacing the log device will lose any data in the device which had not yet been written. This is typically the last few seconds of writes.

Removing or replacing a cache device will not result in any data loss, but may have an impact on read performance until the device is replaced.

Replacing Drives to Grow a ZFS Pool

The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs using the ZFS Volume Manager as additional capacity is needed.

However, this is not an option if you do not have open drive ports or the ability to add a SAS/SATA HBA card. In this case, you can replace one disk at a time with a larger disk, wait for the resilvering process that incorporates the new disk into the pool to complete, then repeat with another disk until all of the disks have been replaced. This process is slow and places the system in a degraded state. Since a failure at this point could be disastrous, do not attempt this method unless the system has a reliable backup.

NOTE: this method requires the ZFS property autoexpand. This property became available starting with FreeNAS® version 8.2.0. If you are running an earlier version of FreeNAS®, upgrade before attempting this method.

Check and verify that the autoexpand property is enabled before attempting to grow the pool. If it is not, the pool will not recognize that the disk capacity has increased. By default, this property is enabled in FreeNAS® version 8.3.1. To verify the property, use Shell. This example checks the ZFS volume named Vol1:

zpool get all Vol1
NAME  PROPERTY       VALUE                  SOURCE
Vol1  size           4.53T                  -
Vol1  capacity       31%                    -
Vol1  altroot        /mnt                   local
Vol1  health         ONLINE                 -
Vol1  guid           8068631824452460057    default
Vol1  version        28                     default
Vol1  bootfs         -                      default
Vol1  delegation     on                     default
Vol1  autoreplace    off                    default
Vol1  cachefile      /data/zfs/zpool.cache  local
Vol1  failmode       wait                   default
Vol1  listsnapshots  off                    default
Vol1  autoexpand     on                     local
Vol1  dedupditto     0                      default
Vol1  dedupratio     1.00x                  -
Vol1  free           3.12T                  -
Vol1  allocated      1.41T                  -
Vol1  readonly       off                    -
Vol1  comment        -                      default

If autoexpansion is not enabled, enable it by specifying the name of the ZFS volume:

zpool set autoexpand=on Vol1

Verify that autoexpand is now enabled by repeating zpool get all Vol1.

You are now ready to replace one drive with a larger drive using the instructions in Replacing a Failed Drive.

Replace one drive at a time and wait for the resilver process to complete on the replaced drive before replacing the next drive. Once all the drives are replaced and the resilver completes, you should see the added space in the pool.

You can view the status of the resilver process by running zpool status Vol1.

Enabling ZFS Pool Expansion After Drive Replacement

It is recommended to enable the autoexpand property before you start replacing drives. If the property is not enabled before replacing some or all of the drives, extra configuration is needed to inform ZFS of the expanded capacity.

Verify that autoexpand is set as described in the previous section. Then, bring each of the drives back online with the following command, replacing the volume name and GPT ID for each disk in the ZFS pool:

zpool online -e Vol1 gptid/xxx

Online one drive at a time and check the status using the following example. If a drive starts to resilver, you need to wait for the resilver to complete before proceeding to online the next drive.

To find the GPT ID information for the drives, use glabel status or zpool status [Pool_Name] which will also show you if any drives are failed or in the process of being resilvered:

zpool status Vol1
  pool: Vol1
 state: ONLINE
  scan: scrub repaired 0 in 16h24m with 0 errors on Sun Mar 10 17:24:20 2013
config:

	NAME                                            STATE     READ WRITE CKSUM
	Vol1                                            ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/d5ed48a4-634a-11e2-963c-00e081740bfe  ONLINE       0     0     0
	    gptid/03121538-62d9-11e2-99bd-00e081740bfe  ONLINE       0     0     0
	    gptid/252754e1-6266-11e2-8088-00e081740bfe  ONLINE       0     0     0
	    gptid/9092045a-601d-11e2-892e-00e081740bfe  ONLINE       0     0     0
	    gptid/670e35bc-5f9a-11e2-92ca-00e081740bfe  ONLINE       0     0     0

errors: No known data errors

After onlining all of the disks, type zpool status to see if the drives start to resilver. If this happens, wait for the resilvering process to complete.

Next, export and then import the pool:

zpool export Vol1
zpool import -R /mnt Vol1

Once the import completes, all of the drive space should be available. Verify that the increased size is recognized:

zpool list Vol1
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
Vol1  9.06T  1.41T  7.24T    31%  1.00x  ONLINE  /mnt

If you cannot see the extra space, you may need to run zpool online -e <pool> <device> for every device listed in zpool status.

Splitting a Mirrored ZFS Storage Pool

ZFSv28 provides the ability to split a mirrored storage pool, which detaches a disk or disks in the original ZFS volume in order to create another identical ZFS volume on another system.

NOTE: zpool split only works on mirrored ZFS volumes.

In this example, a ZFS mirror named test contains three drives:

zpool status
  pool: test
state: ONLINE
scan: resilvered 568K in 0h0m with 0 errors on Wed Jul  6 16:10:58 2011
config:
   NAME        STATE     READ WRITE CKSUM
   test        ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       da1     ONLINE       0     0     0
       da0     ONLINE       0     0     0
       da4     ONLINE       0     0     0

The following command splits from the existing three disk mirror test a new ZFS volume named migrant containing one disk, da4. Disks da0 and da1 remain in test.

zpool split test migrant da4

At this point, da4 can be physically removed and installed in a new system, since the new pool is automatically exported as it is created. Once the disk is physically installed, import the pool on the new system:

zpool import migrant

This makes the ZFS volume migrant available with a single disk. Be aware that properties come along with the clone, so the new pool will be mounted where the old pool was mounted if the mountpoint property was set on the original pool.
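
If mounting at the old location would be a problem on the new system, the pool can instead be imported with an alternate root, as shown earlier in this chapter; a sketch using this example's pool name:

# mount the imported pool under /mnt regardless of its mountpoint property
zpool import -R /mnt migrant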

Verify the status of the new pool:

zpool status
  pool: migrant
state: ONLINE
scan: resilvered 568K in 0h0m with 0 errors on Wed Jul  6 16:10:58 2011
config:
   NAME        STATE     READ WRITE CKSUM
   migrant     ONLINE       0     0     0
     da4       ONLINE       0     0     0
errors: No known data errors

On the original system, the status now looks like this:

zpool status
  pool: test
state: ONLINE
scan: resilvered 568K in 0h0m with 0 errors on Wed Jul  6 16:10:58 2011
config:
   NAME        STATE     READ WRITE CKSUM
   test        ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       da1     ONLINE       0     0     0
       da0     ONLINE       0     0     0
errors: No known data errors

At this point, it is recommended to add disks to create a full mirror set. This example adds two disks named da2 and da3:

zpool attach migrant da4 da2
zpool attach migrant da4 da3

The migrant volume now looks like this:

zpool status
  pool: migrant
state: ONLINE
scan: resilvered 572K in 0h0m with 0 errors on Wed Jul  6 16:43:27 2011
config:
   NAME        STATE     READ WRITE CKSUM
   migrant     ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       da4     ONLINE       0     0     0
       da2     ONLINE       0     0     0
       da3     ONLINE       0     0     0

Now that the new system has been cloned, you can detach da4 and install it back in the original system. Before physically removing the disk, run this command on the new system:

zpool detach migrant da4

Once the disk is physically re-installed, run this command on the original system:

zpool attach test da0 da4

Should you ever need to create a new clone, remember to remove the old clone first:

zpool destroy migrant