5.4. Setting Up Network Bonding
Bonding multiple network interfaces together provides the following benefits:
- High network availability. If one of the interfaces fails, traffic is automatically routed to the working interface(s).
- Higher network performance. For example, two Gigabit interfaces bonded together deliver about 1.7 Gbit/s, or around 200 MB/s, of throughput. The number of bonded storage network interfaces required depends on how many storage drives are on the node. For example, a rotational HDD can deliver up to 1 Gbit/s of throughput.
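The throughput figures above can be sanity-checked with simple arithmetic. This is a rough sketch only: the 85% efficiency factor is an assumption for illustration, and actual gains depend on the hash policy, the number of flows, and the traffic mix.

```python
# Rough bonded-throughput estimate matching the figures quoted above.
# The 0.85 efficiency factor is an assumption for illustration only.
def bonded_throughput_gbit(links, link_speed_gbit=1.0, efficiency=0.85):
    """Approximate aggregate throughput of a bonded interface, in Gbit/s."""
    return links * link_speed_gbit * efficiency

def gbit_to_mbyte(gbit):
    """Convert Gbit/s to MB/s (1 Gbit/s = 125 MB/s)."""
    return gbit * 125

agg = bonded_throughput_gbit(2)  # two bonded Gigabit links
print(f"{agg:.1f} Gbit/s ~= {gbit_to_mbyte(agg):.0f} MB/s")
```

Two bonded Gigabit links thus land near the 1.7 Gbit/s (roughly 200 MB/s) cited above, rather than a full 2 Gbit/s.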
To configure a bonding interface, do the following:
Create the /etc/modprobe.d/bonding.conf file containing the following line:
alias bond0 bonding
Create the /etc/sysconfig/network-scripts/ifcfg-bond0 file containing the following lines:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
BONDING_OPTS="mode=balance-xor xmit_hash_policy=layer3+4 miimon=300 downdelay=300 \
updelay=300"
NAME="Storage net0"
NM_CONTROLLED=no
IPADDR=xxx.xxx.xxx.xxx
PREFIX=24
Make sure to enter the correct values in the IPADDR and PREFIX fields. The balance-xor mode is recommended because it offers both fault tolerance and better performance. For more details, see the documents listed below.
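With xmit_hash_policy=layer3+4, the slave interface for each packet is chosen by hashing the IP addresses and TCP/UDP ports of the flow. A minimal sketch of the simplified formula documented in the Linux Ethernet Bonding Driver HOWTO (the example addresses and ports are arbitrary):

```python
import ipaddress

def layer3_4_slave(src_ip, dst_ip, src_port, dst_port, slave_count):
    """Pick a slave index using the simplified layer3+4 hash from the
    Linux Ethernet Bonding Driver HOWTO:
    ((sport XOR dport) XOR ((src XOR dst) AND 0xffff)) mod slave_count."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return ((src_port ^ dst_port) ^ ((src ^ dst) & 0xFFFF)) % slave_count

# All packets of one TCP/UDP connection hash to the same slave, so a
# single flow never exceeds one link's speed; multiple flows spread out.
print(layer3_4_slave("10.0.0.1", "10.0.0.2", 40000, 3260, 2))
```

This is why the aggregate gain shows up only with multiple concurrent flows: any one connection is pinned to a single physical link.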
Make sure the configuration file of each Ethernet interface you want to bond (e.g., /etc/sysconfig/network-scripts/ifcfg-eth0) contains the lines shown in this example:
DEVICE="eth0"
BOOTPROTO=none
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
HWADDR=xx:xx:xx:xx:xx:xx
MASTER=bond0
SLAVE=yes
USERCTL=no
Bring up the bond0 interface:
# ifup bond0
Check the dmesg output to verify that bond0 and its slave Ethernet interfaces are up and their links are ready.
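Besides dmesg, the bonding driver exposes live status in /proc/net/bonding/bond0. A minimal parsing sketch (the sample text below only mimics the driver's output format; in real use, pass the contents of the /proc file):

```python
def bond_slaves_up(status_text):
    """Parse /proc/net/bonding/<bond> style output and return the slave
    interfaces whose MII Status is 'up'."""
    slaves, current = [], None
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current is not None:
            # This MII Status belongs to the most recent slave, not the bond.
            if line.split(":", 1)[1].strip() == "up":
                slaves.append(current)
            current = None
    return slaves

# Sample text modeled on the driver's output format:
sample = """\
Bonding Mode: load balancing (xor)
MII Status: up

Slave Interface: eth0
MII Status: up

Slave Interface: eth1
MII Status: down
"""
print(bond_slaves_up(sample))
```

A healthy bond should report every configured slave as up; a slave stuck at down points to a cabling, switch, or configuration problem.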
For more information on network bonding, see the Red Hat Enterprise Linux Deployment Guide and Linux Ethernet Bonding Driver HOWTO.