Quantity of disks per node
Each management node must have at least two disks (one for system and metadata, one for storage). Each secondary node must have at least two disks (one for system, one for storage). It is recommended to have at least three but not more than five metadata disks in a cluster.
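As a rough illustration of these minimums, the following Python sketch (not part of Virtuozzo Hybrid Infrastructure) checks a planned disk layout against them. The node names, data structure, and function name are invented for the example.

```python
# A minimal planning sketch, not a product tool: it only checks the per-node disk
# counts and the cluster-wide metadata disk count described above.

def check_plan(nodes):
    """nodes: list of dicts such as {"name": "mn1", "disks": ["system+metadata", "storage"]}."""
    issues = []
    metadata_disks = 0
    for node in nodes:
        if len(node["disks"]) < 2:
            issues.append(f"{node['name']}: needs at least two disks")
        metadata_disks += sum("metadata" in disk for disk in node["disks"])
    if not 3 <= metadata_disks <= 5:
        issues.append(f"{metadata_disks} metadata disk(s) in the cluster; 3 to 5 recommended")
    return issues or ["plan meets the minimums described above"]

print(check_plan([
    {"name": "mn1", "disks": ["system+metadata", "storage"]},     # management node
    {"name": "sn1", "disks": ["system", "metadata", "storage"]},  # secondary node
    {"name": "sn2", "disks": ["system", "metadata", "storage"]},
]))
```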
The more disks per node, the lower the CAPEX. For example, a cluster built from ten nodes with two disks each is less expensive than a cluster built from twenty nodes with one disk each.
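The arithmetic behind this example can be sketched as follows; the per-node and per-disk prices are placeholder assumptions, not vendor quotes, and only the fixed per-node overhead matters.

```python
# Back-of-the-envelope cost comparison for the example above.

NODE_COST = 4000   # chassis, CPU, RAM, NIC, system disk (assumed price)
DISK_COST = 500    # one storage disk (assumed price)

def cluster_cost(nodes, disks_per_node):
    return nodes * (NODE_COST + disks_per_node * DISK_COST)

# Both clusters provide twenty storage disks of raw capacity.
print(cluster_cost(nodes=10, disks_per_node=2))  # 10 * (4000 + 1000) = 50000
print(cluster_cost(nodes=20, disks_per_node=1))  # 20 * (4000 + 500)  = 90000
```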
In general, a cluster with many nodes and few disks per node offers higher performance, while a cluster with the minimum number of nodes (3) and many disks per node is cheaper. Refer to the following table for more details.
| Design considerations | Minimum nodes (3), many disks per node | Many nodes, few disks per node (all-flash configuration) |
|---|---|---|
| Optimization | Lower cost. | Higher performance. |
| Free disk space to reserve (see the first sketch after this table) | More space to reserve for cluster rebuilding, as fewer healthy nodes will have to store the data from a failed node. | Less space to reserve for cluster rebuilding, as more healthy nodes will have to store the data from a failed node. |
| Redundancy (see the second sketch after this table) | Fewer erasure coding choices. | More erasure coding choices. |
| Cluster balance and rebuilding performance | Worse balance and slower rebuilding. | Better balance and faster rebuilding. |
| Network capacity | More network bandwidth required to maintain cluster performance during rebuilding. | Less network bandwidth required to maintain cluster performance during rebuilding. |
| Favorable data type | Cold data (for example, backups). | Hot data (for example, virtual environments). |
| Sample server configuration | Supermicro SSG-6047R-E1R36L (Intel Xeon E5-2620 v1/v2 CPU, 32 GB RAM, 36 x 12 TB HDDs, a 500 GB system disk). | Supermicro SYS-2028TP-HC0R-SIOM (4 x Intel E5-2620 v4 CPUs, 4 x 16 GB RAM, 4 x 1.9 TB Samsung PM1643 SSDs). |
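The "Free disk space to reserve" row can be illustrated with a simplified model: if one node fails, its data must fit on the remaining N-1 nodes, so each node keeps roughly 1/(N-1) of its capacity free. This is an approximation for illustration only, not the exact formula used by Virtuozzo Hybrid Infrastructure.

```python
# Simplified illustration of the rebuild reserve (not the product's exact formula):
# after one node fails, its data must fit on the remaining N-1 nodes.

def reserve_fraction(nodes: int) -> float:
    """Approximate share of each node's capacity to keep free for rebuilding."""
    return 1 / (nodes - 1)

for n in (3, 5, 10, 20):
    print(f"{n} nodes: reserve about {reserve_fraction(n):.0%} of capacity per node")
# 3 nodes -> ~50%, 20 nodes -> ~5%: a minimum-size cluster must reserve far more space.
```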
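Similarly, the "Redundancy" row follows from the fact that, with the host as the failure domain, an M+N erasure coding scheme needs at least M+N nodes. The scheme list below is an example set used for illustration, not an exhaustive list of what the product offers.

```python
# Illustration only: which example M+N erasure coding schemes fit a given node count
# when the failure domain is the host (a scheme needs at least M+N nodes).

SCHEMES = ["1+0", "1+1", "1+2", "3+2", "5+2", "7+2"]  # example data+parity layouts

def available_schemes(nodes: int):
    return [s for s in SCHEMES if sum(map(int, s.split("+"))) <= nodes]

print(available_schemes(3))   # ['1+0', '1+1', '1+2']
print(available_schemes(10))  # all six example schemes fit
```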
Take note of the following:
- These considerations only apply if the failure domain is the host.
- Virtuozzo Hybrid Infrastructure supports hundreds of disks per node. If you plan to use more than 36 disks per node, contact our sales engineers, who will help you design a more efficient cluster.