Network recommendations
Recommendations for network hardware
- Network latency dramatically reduces cluster performance. Use quality network equipment with low latency links. Do not use consumer-grade network switches.
- Do not use desktop network adapters like Intel EXPI9301CTBLK or Realtek 8129 as they are not designed for heavy load and may not support full-duplex links. Also use non-blocking Ethernet switches.
- We recommend using NVIDIA Mellanox ConnectX-5 adapters for the RDMA mode. If you want to use other adapters in the RDMA mode, contact the technical support team for recommendations.
- If you use NVIDIA Mellanox network adapters and AMD Epyc Rome CPUs together on physical nodes, ensure that SR-IOV is properly enabled. Otherwise, this may lead to data loss and performance degradation. To enable SR-IOV:
  1. Enable SR-IOV in BIOS.
  2. Enable IOMMU on the node:
     1. In the /etc/default/grub file, locate the GRUB_CMDLINE_LINUX line, and then add the iommu=pt kernel parameter. The resulting line may look as follows:
        GRUB_CMDLINE_LINUX="crashkernel=auto tcache.enabled=0 quiet iommu=pt"
     2. Regenerate the GRUB configuration file by running:
        # grub2-mkconfig -o /boot/grub2/grub.cfg
        The default location is different on a UEFI-based system.
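The GRUB edit above can be sketched as a small script. To keep it safe to try anywhere, the sketch below operates on a sample file at a placeholder path; on a real node you would apply the same `sed` expression to /etc/default/grub and then regenerate the GRUB configuration:

```shell
# Sample file standing in for /etc/default/grub, so this sketch can be
# run without touching the real node configuration.
cat > /tmp/grub.sample <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto tcache.enabled=0 quiet"
EOF

# Append the iommu=pt kernel parameter unless it is already present.
grep -q 'iommu=pt' /tmp/grub.sample || \
    sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 iommu=pt"/' /tmp/grub.sample

grep '^GRUB_CMDLINE_LINUX' /tmp/grub.sample
# -> GRUB_CMDLINE_LINUX="crashkernel=auto tcache.enabled=0 quiet iommu=pt"

# On a real node, after editing /etc/default/grub, regenerate the config:
# grub2-mkconfig -o /boot/grub2/grub.cfg   # location differs on UEFI systems
```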
- We do not recommend using the BNX2X driver for Broadcom-based network adapters, such as BCM57840 NetXtreme II 10/20-Gigabit Ethernet / HPE FlexFabric 10Gb 2-port 536FLB Adapter. This driver limits MTU to 3616, which affects the cluster performance. Ensure that the BNXT driver is used instead.
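A quick way to confirm which driver a network adapter uses is to read the driver symlink that the kernel exposes under /sys/class/net. The sketch below builds a mock sysfs tree (the eth0 name and /tmp paths are placeholders) so the commands can be tried anywhere; on a real node you would read /sys/class/net/&lt;interface&gt;/device/driver directly:

```shell
# On a real node:
#   basename "$(readlink /sys/class/net/eth0/device/driver)"
#   cat /sys/class/net/eth0/mtu
# The mock tree below stands in for sysfs so the commands are testable.
mkdir -p /tmp/sysmock/net/eth0/device /tmp/sysmock/bus/drivers/bnxt_en
ln -sfn /tmp/sysmock/bus/drivers/bnxt_en /tmp/sysmock/net/eth0/device/driver
echo 9000 > /tmp/sysmock/net/eth0/mtu

# Print the driver name; if this reports bnx2x, switch to the BNXT driver.
basename "$(readlink /tmp/sysmock/net/eth0/device/driver)"   # -> bnxt_en
cat /tmp/sysmock/net/eth0/mtu                                # -> 9000
```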
- RDMA is not supported for the compute service. Therefore, the compute and storage networks must be physically separated on different NICs. If you use the recommended approach with bonded network interfaces, you should have one network card with two bonded network interfaces for the storage network and one network card with two bonded network interfaces for the compute network. To learn how to use a compute trunk network, refer to Connecting virtual switches to trunk interfaces.
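The storage half of that layout (one NIC, two bonded ports) might look like the hypothetical ifcfg-style fragment below; the device names and bonding options are placeholders, not settings taken from this product, and bonds are normally created from the admin panel rather than by hand:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- storage bond (illustrative
# sketch only; bond0 and the options shown are placeholder values)
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad xmit_hash_policy=layer3+4"
BOOTPROTO=none
ONBOOT=yes
```

A second bond on the other network card would carry the compute network, keeping the two traffic types on physically separate NICs as required.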
Recommendations for network security
- Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so prevents public traffic from affecting cluster I/O performance and protects against possible denial-of-service attacks from the outside.
- To avoid intrusions, Virtuozzo Hybrid Infrastructure should be on a dedicated internal network inaccessible from outside.
- Even though cluster nodes have the necessary iptables rules configured, we recommend using an external firewall for untrusted public networks, such as the Internet.
Recommendations for network performance
- Use one 1 Gbit/s link per two HDDs on the node (rounded up). Even for one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to the sequential I/O performance of a single disk. Because several disks on a server can deliver higher total throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
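The sizing rule above (one 1 Gbit/s link per two HDDs, rounded up, with a minimum of two bonded interfaces) can be expressed as a quick calculation; `links_for_hdds` is a hypothetical helper name, not a tool shipped with the product:

```shell
# links = ceil(HDDs / 2), but never fewer than the two bonded interfaces
# recommended for high network availability.
links_for_hdds() {
    hdds=$1
    links=$(( (hdds + 1) / 2 ))    # integer round-up of hdds/2
    [ "$links" -lt 2 ] && links=2  # minimum of two bonded interfaces
    echo "$links"
}

links_for_hdds 1   # -> 2
links_for_hdds 6   # -> 3
links_for_hdds 9   # -> 5
```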
- For maximum sequential I/O performance, use one 1 Gbit/s link per hard drive or one 10+ Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.
- For maximum overall performance, we recommend using 25 or 40 Gbit/s network adapters. Using 10 Gbit/s adapters is also possible, but not recommended.
- It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (for example, 9000-byte jumbo frames). Such settings require additional configuration of switches and often lead to human error. 10+ Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance. You will need to configure the same MTU value on each router and switch on the network (refer to your network equipment manuals), as well as on each node’s network card, bond, or VLAN. The MTU value is set to 1500 by default.
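After setting a jumbo MTU on every hop, it helps to verify the end-to-end path with a non-fragmenting ping. The ICMP payload size is the MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header); the bond0 name and peer address below are placeholders:

```shell
mtu=9000
payload=$(( mtu - 28 ))   # subtract 20-byte IP header + 8-byte ICMP header
echo "$payload"           # -> 8972

# On a real node (requires the interface and the whole path set to MTU 9000):
# ip link set dev bond0 mtu 9000
# ping -M do -s "$payload" 10.0.0.2   # -M do forbids fragmentation
```

If the ping fails with a "message too long" error, some device on the path is still using a smaller MTU.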
Network recommendations for clients
The following table lists the maximum network performance a client can get with the specified network interface. The recommendation for clients is to use 10 Gbit/s network hardware between any two cluster nodes and to minimize network latency, especially if SSD disks are used.
| Storage network interface | Node max. I/O | VM max. I/O (replication) | VM max. I/O (erasure coding) |
| --- | --- | --- | --- |
| 1 Gbit/s | 100 MB/s | 100 MB/s | 70 MB/s |
| 2 x 1 Gbit/s | ~175 MB/s | 100 MB/s | ~130 MB/s |
| 3 x 1 Gbit/s | ~250 MB/s | 100 MB/s | ~180 MB/s |
| 10 Gbit/s | 1 GB/s | 1 GB/s | 700 MB/s |
| 2 x 10 Gbit/s | 1.75 GB/s | 1 GB/s | 1.3 GB/s |