Section: System Administration Commands (8)

The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets. All datasets within a storage pool share the same space. See zfs(8) for information on managing datasets.

A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:

disk: A block device, typically located under /dev. ZFS can use individual partitions, though the recommended mode of operation is to use whole disks. A disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under "/dev"). For example, "sda" is equivalent to "/dev/sda". A whole disk can be specified by omitting the partition designation. When given a whole disk, ZFS automatically labels the disk, if necessary.

file: A regular file. The use of files as a backing store is strongly discouraged. It is designed primarily for experimental purposes, as the fault tolerance of a file is only as good as the file system of which it is a part. A file must be specified by a full path.

mirror: A mirror of two or more devices. Data is replicated in an identical fashion across all components of a mirror. A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices failing before data integrity is compromised.

raidz, raidz1, raidz2, raidz3: A variation on RAID-5 that allows for better distribution of parity and eliminates the "RAID-5 write hole" (in which data and parity become inconsistent after a power loss). Data and parity is striped across all disks within a raidz group. A raidz group can have single-, double-, or triple-parity, meaning that the raidz group can sustain one, two, or three failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type specifies a triple-parity raidz group. The raidz vdev type is an alias for raidz1. A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance.

spare: A special pseudo-vdev which keeps track of available hot spares for a pool. For more information, see the "Hot Spares" section.

log: A separate intent log device. If more than one log device is specified, then writes are load-balanced between devices. However, raidz vdev types are not supported for the intent log. For more information, see the "Intent Log" section.

cache: A device used to cache storage pool data. A cache device cannot be configured as a mirror or raidz group. For more information, see the "Cache Devices" section.

Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.

A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.

ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.

Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:

# zpool create mypool mirror sda sdb mirror sdc sdd
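The auxiliary vdev types (log, cache, spare) follow the same command-line grammar as mirror and raidz. The pool name and device names below are hypothetical placeholders, and the commands require root and real, empty disks; this is a sketch of the syntax, not a tested recipe:

```shell
# Hypothetical: a double-parity raidz pool with a mirrored intent log,
# a cache device, and a hot spare. "tank" and sde..sdl are placeholders.
zpool create tank raidz2 sde sdf sdg sdh \
    log mirror sdi sdj \
    cache sdk \
    spare sdl

# Inspect the resulting vdev layout (assumes the pool above was created).
zpool status tank
```

The keywords raidz2, log, cache, and spare each start a new group, which is how ZFS tells where one vdev ends and the next begins.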
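The capacity rules quoted above (an N-way mirror holds X bytes; a raidz group holds approximately (N-P)*X bytes) can be sanity-checked with plain shell arithmetic. The disk count and size here are made-up example values, not figures from the man page:

```shell
# Made-up example: six 4000 GB disks.
N=6        # number of disks in the group
X=4000     # size of each disk, in GB
P=2        # parity disks (raidz2)

# An N-way mirror holds X bytes, regardless of N.
echo "mirror capacity: ${X} GB"

# A raidz group holds approximately (N-P)*X bytes.
echo "raidz2 capacity: $(( (N - P) * X )) GB"
```

So the same six disks give about 4000 GB as a mirror (tolerating five failures) versus about 16000 GB as raidz2 (tolerating two).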
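File-backed vdevs, though discouraged for real storage, are handy for exactly the experimental use the man page mentions. The paths and pool name below are hypothetical, and the zpool commands require root and a system with ZFS installed; a throwaway-pool sketch only:

```shell
# Hypothetical throwaway pool backed by sparse files (experimentation only;
# files as a backing store are discouraged for production use).
truncate -s 256M /tmp/zd0 /tmp/zd1               # two sparse backing files
zpool create testpool mirror /tmp/zd0 /tmp/zd1   # files must be full paths
zpool status testpool

# Tear the experiment down.
zpool destroy testpool
rm /tmp/zd0 /tmp/zd1
```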
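The checksum-and-repair behavior described above is mostly automatic, but it can be exercised explicitly with a scrub, which walks every block in the pool and verifies its checksum. The pool name is a hypothetical placeholder and the commands require root on an existing pool:

```shell
# Hypothetical: verify all checksums in pool "tank"; ZFS repairs any bad
# copies it can from a good replica (e.g. the other side of a mirror).
zpool scrub tank

# Scrub progress, plus any repaired or unrecoverable errors, shows up here.
zpool status tank
```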