6.3. Using 1 GbE and 10 GbE Networks
1 Gbit/s Ethernet networks can deliver 110-120 MB/s, which is close to the sequential throughput of a single drive. Since several drives on a single server can deliver higher aggregate throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
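As a rough illustration of why the link can become the bottleneck, the shell arithmetic below compares aggregate sequential drive throughput with usable 1 GbE capacity; all figures are illustrative assumptions based on the numbers above:

```shell
# Sketch: compare aggregate sequential HDD throughput with 1 GbE capacity.
# All values (MB/s) are illustrative assumptions, not measurements.
link_mbps=115        # usable throughput of one 1 Gbit/s link
drive_mbps=120       # sequential throughput of one HDD
drives=4
aggregate=$(( drives * drive_mbps ))
echo "aggregate: ${aggregate} MB/s vs 1 GbE link: ${link_mbps} MB/s"
```

With four drives the aggregate (480 MB/s) exceeds a single 1 GbE link several times over, which is why the per-drive link recommendations below exist.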
However, in real-life applications and virtualized environments, sequential I/O is uncommon (mainly backups) and most I/O operations are random. As a result, typical HDD throughput is usually much lower, closer to 10-20 MB/s, according to statistics accumulated from hundreds of servers by a number of major hosting companies.
Based on these two observations, we recommend one of the following network configurations (or better):
- A 1 Gbit/s link per two HDDs on the Hardware Node. If the Hardware Node has only one or two HDDs, two bonded network adapters are still recommended for better reliability (see Setting Up Network Bonding).
- A 10 Gbit/s link per Hardware Node for maximum performance.
The table below illustrates how these recommendations may apply to a Hardware Node with 1 to 6 HDDs:
| HDDs | 1 GbE Links  | 10 GbE Links |
|------|--------------|--------------|
| 1    | 1 (2 for HA) | 1 (2 for HA) |
| 2    | 1 (2 for HA) | 1 (2 for HA) |
| 3    | 2            | 1 (2 for HA) |
| 4    | 2            | 1 (2 for HA) |
| 5    | 3            | 1 (2 for HA) |
| 6    | 3            | 1 (2 for HA) |
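The 1 GbE column follows a simple rule: one link per two HDDs, rounded up, with a minimum of one. A minimal shell sketch of that calculation (the drive count is an illustrative value):

```shell
# Sketch: recommended number of 1 GbE links for a given HDD count,
# per the "one 1 Gbit/s link per two HDDs" rule (ceiling division, minimum 1).
hdds=5
links=$(( (hdds + 1) / 2 ))
[ "$links" -lt 1 ] && links=1
echo "HDDs: $hdds -> recommended 1 GbE links: $links"
```

For HA, double the result when it would otherwise be a single link, as shown in the table.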
- For maximum sequential I/O performance, we recommend using one 1 Gbit/s link per hard drive, or one 10 Gbit/s link per Hardware Node.
- Configuring 1 Gbit/s network adapters to use non-default MTUs (e.g., 9000-byte jumbo frames) is not recommended. Such settings require matching switch configuration and often lead to human error. 10 Gbit/s network adapters, on the other hand, do need to be configured to use jumbo frames to achieve full performance.
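As a hedged sketch of the jumbo-frame step for a 10 GbE adapter, using the standard `ip` tool (the interface name `eth0` is a placeholder; the commands require root privileges and a switch already configured for jumbo frames):

```shell
# Sketch only: enable 9000-byte jumbo frames on a 10 GbE interface.
# "eth0" is a placeholder interface name; run as root.
ip link set dev eth0 mtu 9000

# Verify the new MTU took effect
ip link show dev eth0
```

Note that such a change is not persistent across reboots; make the equivalent setting in your distribution's network configuration to keep it.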
- For maximum efficiency, use the `balance-xor` bonding mode with the `layer3+4` hash policy. If you want to use the `802.3ad` bonding mode, also configure your switch to use LACP with the `layer3+4` hash policy.
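A minimal sketch of creating such a bond with `iproute2` (interface names `bond0`, `eth0`, and `eth1` are placeholders; the commands require root privileges):

```shell
# Sketch only: create a balance-xor bond with the layer3+4 hash policy.
# Interface names are placeholders; run as root.
modprobe bonding
ip link add bond0 type bond mode balance-xor xmit_hash_policy layer3+4

# Enslave the physical interfaces (they must be down first)
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

ip link set bond0 up
```

As with the MTU setting, persist the bond in your distribution's network configuration rather than relying on these one-off commands.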