Enabling RDMA
Virtuozzo Hybrid Infrastructure supports remote direct memory access (RDMA) over InfiniBand (IB) or Converged Ethernet (RoCE) for the storage backend network. The RDMA technology allows servers in this network to exchange data in main memory without involving their processors, caches, or operating systems, thus freeing up resources and improving throughput and performance.
By default, RDMA is disabled. You can enable it in the admin panel or by using the vinfra command-line tool, and it will then be configured automatically. Note that this is only possible before the storage cluster is created. Before enabling the feature, check that the RDMA network is working, for example, as shown below.
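One way to check the RDMA network, assuming the standard iproute2 rdma utility and the perftest package are available on the nodes, is to verify the RDMA link state on each node and then run a point-to-point bandwidth test between two nodes over the storage network. The device and interface names and the 192.168.10.2 address below are placeholders for your environment.

On each node, check that the RDMA links are up:

# rdma link show
link mlx5_0/1 state ACTIVE physical_state LINK_UP netdev ens1f0

Then run a bandwidth test: start the server side on the first node and point the client on the second node at the first node's storage network IP address:

[node1]# ib_send_bw
[node2]# ib_send_bw 192.168.10.2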
Limitations
- We recommend using NVIDIA Mellanox ConnectX-5 adapters for the RDMA mode. If you want to use other adapters in the RDMA mode, contact the technical support team for recommendations.
- To be used for the RDMA traffic, a network bond can only be configured across different interfaces of the same NIC. To check whether two interfaces belong to the same NIC, see the example after this list.
- RDMA is not supported for the compute service. Therefore, the compute and storage networks must be physically separated on different NICs. If you use the recommended approach with bonded network interfaces, you should have one network card with two bonded interfaces for the storage network and another network card with two bonded interfaces for the compute network. To learn how to use a compute trunk network, refer to Connecting virtual switches to trunk interfaces.
- Enabling or disabling RDMA may temporarily affect cluster availability.
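A quick way to verify that two interfaces belong to the same NIC is to compare their PCI addresses in sysfs: ports of one physical adapter show up as functions of the same PCI device. The interface names ens1f0 and ens1f1 below are placeholders for your environment.

# readlink /sys/class/net/ens1f0/device
../../../0000:3b:00.0
# readlink /sys/class/net/ens1f1/device
../../../0000:3b:00.1

In this example, both interfaces map to the PCI device 0000:3b:00 (functions .0 and .1), so they are ports of the same NIC and can be bonded for the RDMA traffic.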
Prerequisites
- Your RDMA network infrastructure must be ready before you install Virtuozzo Hybrid Infrastructure.
- Each network adapter connected to a network with the Storage traffic type must support RDMA. To verify this, see the check after this list.
- InfiniBand devices must be configured on all of your nodes, as described in Configuring InfiniBand devices.
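To verify that an adapter is RDMA-capable, you can check that it is registered as an RDMA device on the node, for example, with the ibv_devinfo tool from the libibverbs utilities. The mlx5_0 and mlx5_1 device names below are just examples.

# ibv_devinfo
# ls /sys/class/infiniband
mlx5_0  mlx5_1

If the adapter assigned the Storage traffic type does not appear in this output, it either does not support RDMA or its driver is not installed correctly.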