Configuring new disks manually
Limitations
- You can assign a role to a disk only if its size is greater than 1 GiB.
- You can assign an additional role to a system disk only if its size is at least 100 GiB.
- You can use shingled magnetic recording (SMR) HDDs only with the Storage role and only if the node has an SSD disk with the Cache role.
- You cannot use SMR and standard disks in the same tier.
- You cannot assign roles to system and non-system disks at the same time.
Prerequisites
- A clear understanding of the storage cluster architecture and disk roles, which are explained in About the storage cluster.
- The failed disk is released, as described in Releasing node disks, and the new disk for replacement is connected to the node.
To manually assign roles to a new disk
Admin panel
- On the Infrastructure > Nodes screen, click the name of the node.
- On the Disks tab, click the new disk without a role.
- On the disk's right pane, click Assign role.
- In the Assign role window, select a disk role, that is, how you want to use the disk (CLI equivalents are sketched after this procedure):
  - To store data:
    - Select the Storage role.
    - Select the storage tier where your data will be stored. To make better use of data redundancy, do not assign all of the disks on a node to the same tier. Instead, make sure that each tier is evenly distributed across the cluster, with only one disk per node assigned to it.
    - Configure data caching and checksumming:
      - Enable SSD caching and checksumming. Available and recommended only for nodes with SSDs.
      - Enable checksumming (default). Recommended for nodes with HDDs, as it provides better reliability.
      - Disable checksumming. Not recommended for production. In an evaluation or testing environment, you can disable checksumming for nodes with HDDs to get better performance.
  - To store cluster metadata:
    Select the Metadata role.
    It is recommended to have only one metadata service per node and a maximum of five metadata services per cluster.
  - [Only for SSD drives] To store write cache:
    - Select the Cache role.
    - Select a storage tier that you want to cache.
  - [Only for SSD drives] To store both metadata and write cache:
    - Select the Metadata+Cache role.
    - Select a storage tier that you want to cache.
- Click Assign.
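These admin panel choices have command-line counterparts. The following sketch assumes the panel roles map to the CLI roles documented in the next section (Storage to cs, Metadata to mds, Cache to journal, and Metadata+Cache to mds-journal); the disk names sdb, sdd, nvme0n1, and nvme0n2 and the node name node002 are placeholders for your own environment:
# vinfra node disk assign --disk sdb:cs:tier=1 --node node002
# vinfra node disk assign --disk sdd:mds --node node002
# vinfra node disk assign --disk nvme0n1:journal --node node002
# vinfra node disk assign --disk nvme0n2:mds-journal --node node002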
Command-line interface
Use the following command:
vinfra node disk assign --disk <disk>:<role>[:<key=value,…>] [--node <node>]
--disk <disk>:<role>[:<key=value,…>]
  Disk configuration in the format:
  - <disk>: disk device ID or name
  - <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, or system)
  - comma-separated key=value pairs with the following optional keys:
    - tier: disk tier (0, 1, 2, or 3)
    - journal-tier: journal (cache) disk tier (0, 1, 2, or 3)
    - journal-type: journal (cache) disk type (no_cache, inner_cache, or external_cache)
    - journal-disk: journal (cache) disk ID or device name
    - bind-address: bind IP address for the metadata service
  Example: sda:cs:tier=0,journal-type=inner_cache.
  This option can be used multiple times.
--node <node>
  Node ID or hostname (default: node001.vstoragedomain)
For example, to assign the role cs to the disk sdc on the node node003, run:
# vinfra node disk assign --disk sdc:cs --node node003
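The optional key=value pairs can be combined in a single --disk option. As a further sketch built only from the keys listed above (the disk and node names are placeholders, and whether the disk given in journal-disk must already carry the Cache role depends on your cluster, so treat this as an assumption to verify), a storage disk could be attached to tier 1 with an external cache on another disk:
# vinfra node disk assign --disk sdb:cs:tier=1,journal-type=external_cache,journal-disk=sda --node node003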
You can view the node's disk configuration in the vinfra node disk list output:
# vinfra node disk list --node node003
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
| id                                   | device | type | role       | disk_status | used    | size     | physical_size | service_id | service_status |
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
| 2A006CA5-732F-4E17-8FB0-B82CE0F28DB2 | sdc    | hdd  | cs         | ok          | 11.2GiB | 125.8GiB | 128.0GiB      | 1026       | active         |
| 642A7162-B66C-4550-9FB2-F06866FB7EA1 | sdb    | hdd  | cs         | ok          | 8.7GiB  | 125.8GiB | 128.0GiB      | 1025       | active         |
| 45D38CD2-3B94-4F0F-8864-9D51F716D3B1 | sda    | hdd  | mds-system | ok          | 21.0GiB | 125.9GiB | 128.0GiB      | 1          | avail          |
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
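In scripts, you can extract a single disk's role from this tabular output with standard shell tools. A minimal sketch, assuming the column layout shown above; the disk name sdc and the node name node003 are placeholders:
# vinfra node disk list --node node003 | awk -F'|' '{ gsub(/ /, "", $3); gsub(/ /, "", $5) } $3 == "sdc" { print $5; found=1 } END { exit !found }'
The command prints the role column for sdc and exits with a non-zero status if the disk is not listed.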