Deploying the storage cluster
Create the storage cluster on one (the first) node, then populate it with more nodes.
Limitations
- You can assign a role to a disk only if its size is greater than 1 GiB.
- You can assign an additional role to a system disk only if its size is at least 100 GiB.
- It is recommended to assign the System and Metadata roles either to an SSD or to different HDDs. Assigning both of these roles to the same HDD will result in mediocre performance, suitable only for cold data (for example, archiving).
- The System role cannot be combined with the Cache and Metadata+Cache roles. The reason is that the I/O generated by the operating system and applications would contend with the I/O generated by journaling, thus negating its performance benefits.
- You can use shingled magnetic recording (SMR) HDDs only with the Storage role and only if the node has an SSD disk with the Cache role. Host-managed SMR disks are not supported.
- You cannot use SMR and standard disks in the same tier.
- You cannot assign roles to system and non-system disks at the same time.
Prerequisites
- A clear understanding of the storage cluster architecture and disk roles, which are explained in About the storage cluster.
- A clear understanding of the concept Storage tiers.
- Your infrastructure networks are set up, as described in Setting up networks.
- The node network interfaces are configured by following the instructions in Configuring node network interfaces.
- If supported, RDMA is enabled, as described in Enabling RDMA.
- If your infrastructure nodes are equipped with NVMe or SSD disks, it is recommended to configure them for better performance, as described in Configuring NVMe performance.
- External DNS servers are added, either automatically during the installation or manually, as described in Adding external DNS servers.
- Locations for your nodes are configured, as explained in Configuring node locations.
- All of the nodes are shown in the admin panel on the Infrastructure > Nodes screen with the Unassigned status.
To create the storage cluster on the first node
Admin panel
- Open the Infrastructure > Nodes screen, and then click Create storage cluster.
- In the Create storage cluster window, enter a name for the cluster. The cluster name may only contain Latin letters (a-z, A-Z), numbers (0-9), and hyphens ("-"). It must start with a letter and end with a letter or number.
- Enable disk encryption for tiers. You can also enable it later.
- Select one node to create the storage cluster from, and then click Next.
- In the next window, check the default disk configuration. If it is correct, proceed to create the storage cluster.
  You can also assign roles to your disks manually or use Disk actions to work with the disks.
- To assign roles to disks manually, do the following:
  - [Only for SSD drives] To store write cache:
    - Select the Cache role.
    - Select a storage tier that you want to cache.
    For storage disks to use cache, the Cache role must be assigned before the Storage role. You can also assign both of these roles to disks at the same time, and the system will configure the cache disk first.
  - To store data:
    - Select the Storage role.
    - Select a storage tier to store your data on. To make better use of data redundancy, do not assign all of the disks on a node to the same tier. Instead, make sure that each tier is evenly distributed across the cluster.
    - Enable data caching and checksumming:
      - Enable SSD caching and checksumming. Available and recommended only for nodes with SSDs.
      - Enable checksumming (default). Recommended for nodes with HDDs, as it provides better reliability.
      - Disable checksumming. Not recommended for production. In an evaluation or testing environment, you can disable checksumming for nodes with HDDs, to gain better performance.
  - To store cluster metadata, select the Metadata role.
    It is recommended to have only one disk with the Metadata role per node, and a maximum of five such disks per cluster.
  - [Only for SSD drives] To store both metadata and write cache:
    - Select the Metadata+Cache role.
    - Select a storage tier that you want to cache.
- To assign roles to disks automatically, click Disk actions > Configure automatically.
- To assign a role to multiple disks at a time, click Disk actions > Bulk disk management, select the disks, and then click Assign role. Choose the desired role for the selected disks, and then click Assign.
- To reset the disk configuration, click Disk actions > Clear configuration.
- Once you finish configuring the disks, click Create to create the storage cluster.
You can monitor cluster creation on the Infrastructure > Nodes screen. The creation might take some time, depending on the number of disks to be configured. Once the configuration is complete, the cluster is created.
Command-line interface
Use the following command:
vinfra cluster create [--disk <disk>:<role>[:<key=value,…>]] [--tier-encryption {0,1,2,3}] --node <node> <cluster-name>
--disk <disk>:<role>[:<key=value,…>]
  Disk configuration in the format:
  - <disk>: disk device ID or name
  - <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, system)
  - comma-separated key=value pairs with keys (optional):
    - tier: disk tier (0, 1, 2, or 3)
    - journal-tier: journal (cache) disk tier (0, 1, 2, or 3)
    - journal-type: journal (cache) disk type (no_cache, inner_cache, or external_cache)
    - journal-disk: journal (cache) disk ID or device name
    - bind-address: bind IP address for the metadata service
  Example: sda:cs:tier=0,journal-type=inner_cache.
  This option can be used multiple times.
--tier-encryption {0,1,2,3}
  Enable encryption for storage cluster tiers. Encryption is disabled by default. This option can be used multiple times.
--node <node>
  Node ID or hostname
<cluster-name>
  Storage cluster name
For example, to create the storage cluster stor1 on the node node001, run:
# vinfra cluster create stor1 --node node001
As disk roles are not explicitly specified, they are assigned automatically: mds-system to the system disk, and cs to all other disks.
You can view the storage cluster details in the vinfra cluster show output:
# vinfra cluster show
+-------+--------------------------------------------+
| Field | Value                                      |
+-------+--------------------------------------------+
| id    | 1                                          |
| name  | stor1                                      |
| nodes | - host: node001.vstoragedomain             |
|       |   id: f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 |
|       |   is_installing: false                     |
|       |   is_releasing: false                      |
+-------+--------------------------------------------+
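If you want to control the disk layout instead of relying on automatic assignment, the same command accepts explicit --disk and --tier-encryption options. The following is a hedged sketch that uses only the roles and keys documented above; the device names (sda, sdb, sdc), the tier number, and the assumption that sdc is an SSD are placeholders, not output from a real cluster:

```shell
# Hypothetical example: give the system disk the mds-system role, make the
# SSD sdc a cache (journal) disk, store data on the HDD sdb on tier 0 cached
# by sdc, and enable encryption on tier 0. Replace the disk names with your
# actual devices before running.
vinfra cluster create stor1 --node node001 \
    --disk sda:mds-system \
    --disk sdc:journal \
    --disk sdb:cs:tier=0,journal-type=external_cache,journal-disk=sdc \
    --tier-encryption 0
```
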
To add nodes to the cluster
Admin panel
- On the Infrastructure > Nodes screen, click an unassigned node.
- On the node right pane, click Join to cluster.
- In the Join node to storage cluster window, check the default disk configuration. If it is correct, proceed to join the node to the storage cluster.
  You can also assign roles to your disks manually or use Disk actions to work with the disks. Alternatively, you can copy the disk configuration from another node by clicking Copy configuration from and selecting the desired node.
- Once you finish configuring the disks, click Join to add the node to the storage cluster.
Command-line interface
Use the following command:
vinfra node join [--disk <disk>:<role>[:<key=value,…>]] <node>
--disk <disk>:<role>[:<key=value,…>]
  Disk configuration in the format:
  - <disk>: disk device ID or name
  - <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, system)
  - comma-separated key=value pairs with keys (optional):
    - tier: disk tier (0, 1, 2, or 3)
    - journal-tier: journal (cache) disk tier (0, 1, 2, or 3)
    - journal-type: journal (cache) disk type (no_cache, inner_cache, or external_cache)
    - journal-disk: journal (cache) disk ID or device name
    - bind-address: bind IP address for the metadata service
  Example: sda:cs:tier=0,journal-type=inner_cache.
  This option can be used multiple times.
<node>
  Node ID or hostname
For example, to add the node node002 to the storage cluster and assign disk roles (mds-system to sda, cs to sdb and sdc), run:
# vinfra node join f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 --disk sda:mds-system \
    --disk sdb:cs --disk sdc:cs
The added node will appear in the vinfra node list output:
# vinfra node list
+--------------+--------------+------------+-----------+-------------+----------+
| id           | host         | is_primary | is_online | is_assigned | is_in_ha |
+--------------+--------------+------------+-----------+-------------+----------+
| 09bb6b8<...> | node001<...> | True       | True      | True        | False    |
| 187edb1<...> | node002<...> | False      | True      | True        | False    |
+--------------+--------------+------------+-----------+-------------+----------+
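A node whose SSD should act as a write cache for its HDD storage disks can be joined with explicit roles as well. The following sketch uses only the roles and keys documented above; the node name node003 and the device names (sda, sdb, sdc, sdd) are assumptions for illustration, not part of the example cluster:

```shell
# Hypothetical example: sda keeps the mds-system role, the SSD sdd becomes
# a cache (journal) disk, and the HDDs sdb and sdc store data on tier 1,
# each using sdd as its external cache. Replace the names with your actual
# node and devices before running.
vinfra node join node003 \
    --disk sda:mds-system \
    --disk sdd:journal \
    --disk sdb:cs:tier=1,journal-type=external_cache,journal-disk=sdd \
    --disk sdc:cs:tier=1,journal-type=external_cache,journal-disk=sdd
```
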