Compute cluster network requirements

General requirements and recommendations are listed in Network requirements and Network recommendations.

You can create a minimum network configuration for evaluation purposes, or expand it to an advanced network configuration, which is recommended for production. Both these network configurations have the following requirements:

  • To achieve full performance, 10+ Gbit/s network adapters must be configured to use MTU 9000 (see the sketch after this list).
  • If third-party backup management systems pull VM backups, the VM backups traffic type must be assigned, together with the VM public traffic type, to a separate isolated network for security reasons.
  • For RDMA, a separate physical interface must be used for the Storage traffic type.
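
As an illustration of the MTU requirement above, here is a minimal Python sketch that checks whether a network adapter already uses MTU 9000 and raises it if not. It assumes a Linux node whose interface is managed directly with the iproute2 ip utility; the interface name eth0 is a placeholder, and on nodes managed by the product itself the MTU should be set through the admin panel instead.

    # mtu_check.py -- illustrative sketch; the interface name is a placeholder.
    import json
    import subprocess

    TARGET_MTU = 9000    # recommended for 10+ Gbit/s adapters
    INTERFACE = "eth0"   # placeholder; substitute the real adapter name

    def current_mtu(ifname: str) -> int:
        # "ip -j link show" prints interface attributes as JSON.
        out = subprocess.run(
            ["ip", "-j", "link", "show", "dev", ifname],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)[0]["mtu"]

    def ensure_jumbo_frames(ifname: str) -> None:
        if current_mtu(ifname) != TARGET_MTU:
            # Requires root privileges; the change is not persistent across reboots,
            # and switch ports along the path must also allow MTU 9000.
            subprocess.run(
                ["ip", "link", "set", "dev", ifname, "mtu", str(TARGET_MTU)],
                check=True,
            )

    if __name__ == "__main__":
        ensure_jumbo_frames(INTERFACE)
        print(f"{INTERFACE} MTU is now {current_mtu(INTERFACE)}")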

Minimum network configuration for the compute cluster

The minimum configuration includes two networks: one for internal traffic and one for external traffic.

Recommended network configuration for the compute cluster

The recommended configuration expands to five networks connected to the following logical network interfaces:

  • One private bonded connection with a single VLAN (or a native interface) for internal management and storage traffic with the Storage and Internal management traffic types.

  • One public bonded connection with at least three VLANs over it:

    • The trunk interface with the VM public traffic type assigned, which is used to automatically create VLAN-based networks for external (public) traffic of virtual machines.
    • One VLAN for overlay network traffic between VMs with the VM private traffic type.

      Starting from version 5.2, data-in-transit encryption between nodes is supported. Enabling encryption reduces the available VXLAN payload by 37 bytes, thus increasing the default overhead for virtual networks from 50 to 87 bytes (see the sketch after this list).

    • One VLAN for service delivery via the admin and self-service panels, compute API, and for management via SSH, with these traffic types: Compute API, Admin panel, Self-service panel, and SSH.

      This VLAN can also be used for public export of iSCSI, NFS, S3, and Backup Gateway data, and for accessing cluster monitoring statistics via SNMP.

    • One or more VLANs for external VM traffic with the VM public traffic type.
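
To make these overhead figures concrete, the following minimal Python sketch computes the largest MTU usable inside virtual machines attached to overlay (VM private) networks for a given physical MTU. The 50-byte VXLAN overhead and the extra 37 bytes for encryption come from the text above; the function name and the 1500-byte example are purely illustrative.

    VXLAN_OVERHEAD = 50        # bytes added by VXLAN encapsulation
    ENCRYPTION_OVERHEAD = 37   # extra bytes when data-in-transit encryption is enabled

    def max_guest_mtu(physical_mtu: int, encryption_enabled: bool) -> int:
        """Largest MTU usable inside VMs on an overlay (VM private) network."""
        overhead = VXLAN_OVERHEAD + (ENCRYPTION_OVERHEAD if encryption_enabled else 0)
        return physical_mtu - overhead

    # With a standard 1500-byte physical MTU:
    print(max_guest_mtu(1500, encryption_enabled=False))  # 1450
    print(max_guest_mtu(1500, encryption_enabled=True))   # 1413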

The following list gives the full set of network recommendations for the compute cluster, per bond and VLAN:

  • Private network: Bond0, VLAN 101 (or the native interface). Traffic types: Storage, Internal management.

    To achieve maximum performance, this network must use an MTU size close to 9000 bytes.

    The bond must be built on top of a high-performance network, as storage traffic requires low latency and high throughput. We recommend using 25 or 40 Gbit/s network adapters. Using 10 Gbit/s adapters is also possible, but not recommended.

    For this network, we also recommend using RDMA (over InfiniBand or RoCEv2), as it significantly increases storage performance for IOPS-intensive workloads.

  • Trunk network: Bond1, trunk interface. Traffic types: VM public.

    The bond must be built on top of a 10+ Gbit/s network, as it carries internal traffic between virtual machines in private virtual networks (VXLAN).

  • Overlay network: Bond1, VLAN 102. Traffic types: VM private.

    Includes the 87-byte overhead due to VXLAN (50 bytes) and encryption (37 bytes).

  • Services network: Bond1, VLAN 103. Traffic types: Compute API, Admin panel, Self-service panel, SSH.

    The self-service panel traffic should be exposed to public networks via NAT.

    Furthermore, we do not recommend exposing services such as the admin panel and SSH to the Internet. For managing your cluster, use a secure VPN. If you need to provide end users with access to the OpenStack API, expose the compute API traffic via NAT and configure the OpenStack endpoints accordingly (see the verification sketch after this list).

  • Public network: Bond1, VLAN 104. Traffic types: VM public.
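
For illustration, the following minimal Python sketch encodes the recommended bond and VLAN layout as a data structure and prints the generic Linux (iproute2) commands that would create the corresponding VLAN sub-interfaces. The bond names bond0 and bond1 and the VLAN IDs come from the list above; the script itself, and configuring nodes with raw ip commands at all, are assumptions made purely for illustration. In practice, configure node networks through the admin panel.

    # vlan_plan.py -- illustrative sketch only.
    # Recommended layout from the list above: bond0 carries the Private network
    # (VLAN 101 or the native interface); bond1 is a trunk carrying VLANs 102-104.
    PLAN = {
        "bond0": {101: "Private"},
        "bond1": {102: "Overlay", 103: "Services", 104: "Public"},
    }

    def iproute2_commands(plan):
        """Generate generic iproute2 commands that create the VLAN sub-interfaces."""
        cmds = []
        for bond, vlans in plan.items():
            for vlan_id, network in vlans.items():
                # For example, bond1.102 becomes the node-side interface of the Overlay network.
                cmds.append(
                    f"ip link add link {bond} name {bond}.{vlan_id} "
                    f"type vlan id {vlan_id}  # {network} network"
                )
        return cmds

    if __name__ == "__main__":
        print("\n".join(iproute2_commands(PLAN)))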
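
If the compute API is exposed to end users via NAT, the OpenStack endpoints must advertise the externally reachable address. The sketch below is a hedged illustration that uses the openstacksdk library (an assumption, not the product's own tooling), with credentials taken from the standard OS_* environment variables or clouds.yaml, to list the public endpoint URLs so that you can verify which host names end users would actually connect to.

    # list_public_endpoints.py -- illustrative sketch using openstacksdk (assumed installed).
    from urllib.parse import urlparse

    import openstack

    def public_endpoints(conn):
        """Yield (service name, URL, host) for every endpoint with the 'public' interface."""
        for endpoint in conn.identity.endpoints():
            if endpoint.interface != "public":
                continue
            service = conn.identity.get_service(endpoint.service_id)
            yield service.name, endpoint.url, urlparse(endpoint.url).hostname

    if __name__ == "__main__":
        # Credentials come from OS_* environment variables or clouds.yaml.
        conn = openstack.connect()
        for name, url, host in public_endpoints(conn):
            # Check that each host is the NAT address reachable by end users.
            print(f"{name:20} {host:30} {url}")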