Scaling the storage cluster
After deploying the storage cluster, you can expand its storage capacity at any time by adding more storage disks (vertical scaling) or increasing the number of storage nodes (horizontal scaling). You can also replace storage disks with disks of larger size by following the instructions in Replacing node disks.
To better understand the difference between vertical and horizontal scaling, let’s have a look at the following scenarios:
- Vertical scaling. The cluster has five nodes with 12 hard drive slots each. One disk is used for system and metadata, and 9 disks are used for storage on tier 0. Backup storage is deployed on top of the storage cluster with the 3+2 encoding mode. You can expand the storage capacity of the backup storage by adding two more disks to each node. As a result, the storage capacity will increase by 2/9.
- Horizontal scaling. The cluster has five nodes with 12 hard drive slots each. One disk is used for system and metadata, and 11 disks are used for storage on tier 0. Backup storage is deployed on top of the storage cluster with the 3+2 encoding mode. You can expand the storage capacity and throughput of the backup storage by adding two more nodes of the same size (that is, with 12 disks). As a result, the storage capacity will increase by 2/5. Additionally, to maximize the storage efficiency, you can update the encoding mode to 5+2, as described in Changing the redundancy scheme for backup storage.
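The capacity arithmetic in these scenarios can be checked with a short calculation. This is only an illustrative sketch using the node and disk counts from the examples above; real usable capacity also depends on metadata overhead and cluster fill limits.

```python
# Sketch: verify the capacity gains from the two scaling scenarios above.
# Node and disk counts are taken from the examples; DISK_TB is a
# hypothetical disk size (the relative gains do not depend on it).

def raw_capacity(nodes, disks_per_node, disk_size_tb):
    """Total raw storage capacity on the tier, in TB."""
    return nodes * disks_per_node * disk_size_tb

def usable_fraction(data_blocks, parity_blocks):
    """Usable share of raw capacity under M+N erasure encoding."""
    return data_blocks / (data_blocks + parity_blocks)

DISK_TB = 4  # hypothetical disk size

# Vertical scaling: 5 nodes, 9 -> 11 storage disks per node.
before = raw_capacity(5, 9, DISK_TB)
after = raw_capacity(5, 11, DISK_TB)
print(f"vertical gain: {(after - before) / before:.3f}")    # 2/9 ≈ 0.222

# Horizontal scaling: 5 -> 7 nodes, 11 storage disks each.
before = raw_capacity(5, 11, DISK_TB)
after = raw_capacity(7, 11, DISK_TB)
print(f"horizontal gain: {(after - before) / before:.3f}")  # 2/5 = 0.400

# Updating the encoding mode from 3+2 to 5+2 raises the usable share:
print(f"3+2 usable: {usable_fraction(3, 2):.2f}")  # 0.60
print(f"5+2 usable: {usable_fraction(5, 2):.2f}")  # 0.71
```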
Before you add new disks and nodes, consider the following recommendations for their sizing:
- It is recommended for a storage tier to have an equal number of disks per node. Then, the data will be spread more evenly among nodes. For more information, refer to Logical space chart.
- Having the same-size disks helps distribute the loads more evenly. Inside a cluster, the disk usage is proportional to the disk size. For example, if you have a disk of 10 TB and a disk of 2 TB, a 50% cluster load will use 5 TB and 1 TB, respectively.
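The proportional-usage rule above can be expressed as a quick calculation. This is a sketch that assumes usage is exactly proportional to disk size, as in the example:

```python
# Sketch: per-disk usage proportional to disk size, as described above.
# At a given cluster load fraction, each disk is filled to that same fraction.

def disk_usage(disk_sizes_tb, cluster_load):
    """Return per-disk usage in TB for a given cluster load fraction."""
    return [size * cluster_load for size in disk_sizes_tb]

# A 10 TB disk and a 2 TB disk at 50% cluster load:
print(disk_usage([10, 2], 0.5))  # [5.0, 1.0]
```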
Limitations
- You can assign a role to a disk only if its size is greater than 1 GiB.
- You can assign an additional role to a system disk only if its size is at least 100 GiB.
- It is recommended to assign the System and Metadata roles to either an SSD disk or different HDDs. Assigning both of these roles to the same HDD will result in mediocre performance suitable only for cold data (for example, archiving).
- The System role cannot be combined with the Cache and Metadata+Cache roles. The reason is that the I/O generated by the operating system and applications would contend with the I/O generated by journaling, thus negating its performance benefits.
- You can use shingled magnetic recording (SMR) HDDs only with the Storage role and only if the node has an SSD disk with the Cache role.
- You cannot use SMR and standard disks in the same tier.
- You cannot assign roles to system and non-system disks at the same time.
Prerequisites
- The storage cluster is created, as described in Deploying the storage cluster.
To add disks to the storage cluster
Admin panel
- On the Infrastructure > Nodes screen, click the name of the node.
- On the Disks tab, click the new disk without a role.
- On the disk right pane, click Assign role.
- In the Assign role window, select a disk role, that is, how you want to use the disk:
  - [Only for SSD drives] To store write cache:
    - Select the Cache role.
    - Select a storage tier that you want to cache.

    For storage disks to use cache, the Cache role must be assigned before the Storage role. You can also assign both of these roles to disks at the same time, and the system will configure the cache disk first.
  - To store data:
    - Select the Storage role.
    - Select a storage tier to store your data on. To make better use of data redundancy, do not assign all of the disks on a node to the same tier. Instead, make sure that each tier is evenly distributed across the cluster.
    - Choose a data caching and checksumming option:
      - Enable SSD caching and checksumming. Available and recommended only for nodes with SSDs.
      - Enable checksumming (default). Recommended for nodes with HDDs, as it provides better reliability.
      - Disable checksumming. Not recommended for production. In an evaluation or testing environment, you can disable checksumming for nodes with HDDs to provide better performance.
  - To store cluster metadata, select the Metadata role.

    It is recommended to have only one disk with the Metadata role per node and a maximum of five such disks in a cluster.
  - [Only for SSD drives] To store both metadata and write cache:
    - Select the Metadata+Cache role.
    - Select a storage tier that you want to cache.
- Click Assign.
Command-line interface
Use the following command:
vinfra node disk assign --disk <disk>:<role>[:<key=value,…>] [--node <node>]
--disk <disk>:<role>[:<key=value,…>]

  Disk configuration in the format:
  - <disk>: disk device ID or name
  - <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, or system)
  - comma-separated key=value pairs with keys (optional):
    - tier: disk tier (0, 1, 2, or 3)
    - journal-tier: journal (cache) disk tier (0, 1, 2, or 3)
    - journal-type: journal (cache) disk type (no_cache, inner_cache, or external_cache)
    - journal-disk: journal (cache) disk ID or device name
    - bind-address: bind IP address for the metadata service

  Example: sda:cs:tier=0,journal-type=inner_cache

  This option can be used multiple times.

--node <node>

  Node ID or hostname (default: node001.vstoragedomain)
For example, to assign the role cs to the disk sdc on the node node003, run:
# vinfra node disk assign --disk sdc:cs --node node003
You can view the node's disk configuration in the vinfra node disk list output:
# vinfra node disk list --node node003
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
| id                                   | device | type | role       | disk_status | used    | size     | physical_size | service_id | service_status |
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
| 2A006CA5-732F-4E17-8FB0-B82CE0F28DB2 | sdc    | hdd  | cs         | ok          | 11.2GiB | 125.8GiB | 128.0GiB      | 1026       | active         |
| 642A7162-B66C-4550-9FB2-F06866FB7EA1 | sdb    | hdd  | cs         | ok          | 8.7GiB  | 125.8GiB | 128.0GiB      | 1025       | active         |
| 45D38CD2-3B94-4F0F-8864-9D51F716D3B1 | sda    | hdd  | mds-system | ok          | 21.0GiB | 125.9GiB | 128.0GiB      | 1          | avail          |
+--------------------------------------+--------+------+------------+-------------+---------+----------+---------------+------------+----------------+
To add nodes to the storage cluster
Admin panel
- On the Infrastructure > Nodes screen, click an unassigned node.
- On the node right pane, click Join to cluster.
-
In the Join node to storage cluster window, check the default disk configuration. If it is correct, proceed to join the node to the storage cluster.
You can also assign roles to the disks manually or use Disk actions to work with the disks. Alternatively, you can copy the disk configuration from another node by clicking Copy configuration from and selecting the desired node.
- Once you finish configuring the disks, click Join to add the node to the storage cluster.
Command-line interface
Use the following command:
vinfra node join [--disk <disk>:<role>[:<key=value,…>]] <node>
--disk <disk>:<role>[:<key=value,…>]

  Disk configuration in the format:
  - <disk>: disk device ID or name
  - <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, or system)
  - comma-separated key=value pairs with keys (optional):
    - tier: disk tier (0, 1, 2, or 3)
    - journal-tier: journal (cache) disk tier (0, 1, 2, or 3)
    - journal-type: journal (cache) disk type (no_cache, inner_cache, or external_cache)
    - journal-disk: journal (cache) disk ID or device name
    - bind-address: bind IP address for the metadata service

  Example: sda:cs:tier=0,journal-type=inner_cache

  This option can be used multiple times.

<node>

  Node ID or hostname
For example, to add the node node002 to the storage cluster and assign the mds-system role to the disk sda and the cs role to the disks sdb and sdc, run:
# vinfra node join f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 --disk sda:mds-system \
    --disk sdb:cs --disk sdc:cs
The added node will appear in the vinfra node list output:
# vinfra node list
+--------------+--------------+------------+-----------+-------------+----------+
| id           | host         | is_primary | is_online | is_assigned | is_in_ha |
+--------------+--------------+------------+-----------+-------------+----------+
| 09bb6b8<...> | node001<...> | True       | True      | True        | False    |
| 187edb1<...> | node002<...> | False      | True      | True        | False    |
+--------------+--------------+------------+-----------+-------------+----------+