
How to Set Up and Configure Network Bonding in RHEL/CentOS 7


Today we will see how to configure bonding on a Linux machine. Bonding is a method of aggregating multiple network interfaces into a single logical interface: it combines two or more NICs so that the machine gets fault tolerance if any member NIC fails, load balancing across the member NICs, and throughput beyond what a single NIC can sustain. It provides a more reliable network connection, much as RAID mirroring does for Linux disks. The Linux kernel automatically detects an interface failure and fails over to keep applications continuously connected to the user end. In this post, we will see how to configure network bonding in CentOS 7/RHEL 7.

But before we configure bonding in Linux, we should understand how bonding works and the network bonding modes available in Linux.


Bonding belongs to the broader concept of a LAG (link aggregation group), which combines a number of physical ports into a single virtual path for data connectivity that is highly reliable, efficient, and high bandwidth; LACP (Link Aggregation Control Protocol) is the standard protocol for negotiating such a group over Ethernet, and it is used by one of the bonding modes described below. You can refer to the Wikipedia page on link aggregation for greater detail. Linux Ethernet bonding is just a small part of this, applied to Ethernet cards for network connectivity.
The Linux bonding driver provides link aggregation over NICs on Linux operating systems. We can find information about the bonding driver with the command below.
modinfo bonding
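The driver is normally loaded automatically when the first bond interface is created, but it can also be loaded and verified by hand (a minimal sketch, assuming the stock kernel module):
modprobe bonding         # load the bonding driver
lsmod | grep bonding     # confirm it is loaded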
The output of modinfo shows many details that are useful for understanding the basic concepts of bonding on Linux systems. Most readers only want the configuration steps, but it is worth covering the bonding modes in a bit more detail: they are the methods, or algorithms, the bonding driver uses to implement bonding on a Linux machine. (A quick way to inspect the mode of a running bond is shown after the list.)

  1. Round-robin (balance-rr) or 0
     Network packets are transmitted in sequential order from the first available slave interface (NIC) to the last one, so every Ethernet card is used to send and receive packets. This mode provides load balancing and fault tolerance.

  2. Active-backup (active-backup) or 1
     Only one slave Ethernet card in the bond is active; another slave becomes active only when the active slave fails. The bonded interface's MAC address is externally visible on only one NIC (port) at a time, to avoid confusing the network switch. This mode provides fault tolerance.

  3. XOR (balance-xor) or 2
     Network packets are transmitted based on a hash of the packet's source and destination. The default algorithm considers only MAC addresses (layer2), so traffic between a given pair of hosts always uses the same slave. Newer versions allow additional policies based on IP addresses (layer2+3) or on IP addresses and TCP/UDP port numbers (layer3+4); the latter is useful when your application uses multiple ports to transmit data over the bond channel. Each policy selects the same slave for a given destination MAC address, IP address, or IP address and port combination, respectively, and the switch must be configured with a matching capability. This mode provides load balancing and fault tolerance.

  4. Broadcast (broadcast) or 3
     Transmits every network packet on all slave network interfaces. This mode provides fault tolerance.

  5. IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP) or 4
     Creates aggregation groups that share the same speed and duplex settings and utilizes all slave network interfaces in the active aggregator group. This mode behaves like the XOR mode above and supports the same balancing policies, but the link is set up dynamically between two LACP-supporting peers, so the switch must support LACP as well.

  6. Adaptive transmit load balancing (balance-tlb) or 5
     Does not require any special network-switch support. Outgoing traffic is distributed according to the current load on each slave interface, while incoming traffic is received by one currently designated slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.

  7. Adaptive load balancing (balance-alb) or 6
     Includes balance-tlb above plus receive load balancing (rlb) for IPv4 traffic, and does not require any special network-switch support either. Receive load balancing is achieved through ARP negotiation: the bonding driver intercepts ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different network peers use different MAC addresses for their traffic.
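A quick way to confirm which mode a running bond is using is to query the bonding driver's sysfs interface (shown here for a bond named bond0, the name used later in this post):
cat /sys/class/net/bond0/bonding/mode    # prints the active mode, e.g. "balance-rr 0"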
For this setup, we are using the machine described below.

Setup

[root@host1 ~]# uname -r
3.10.0-693.5.2.el7.x86_64
[root@host1 ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 
Now let us see how to configure network bonding on a Linux machine. We are using the above-mentioned machine and two interfaces to create the Ethernet bond.
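Before creating the bond, it is worth confirming that both member interfaces are present (interface names will differ on other hardware):
ip link show           # ens0 and ens1 should both be listed
nmcli device status    # optional: shows each device and the connection using it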
To create a bonding channel interface, we create a file in the /etc/sysconfig/network-scripts/ directory named ifcfg-bondN. We could use some other name, as long as it begins with ifcfg-, but ifcfg-bondN is what is mostly used; N is the number of the interface, such as 0 or 1. Here the channel combines two interfaces, ens0 and ens1.
We also need to edit the ifcfg-ens0 and ifcfg-ens1 files as shown below.
Bond File — /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.122.150
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-rr miimon=100"
First Ethernet File — /etc/sysconfig/network-scripts/ifcfg-ens0
DEVICE=ens0
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Second Ethernet File — /etc/sysconfig/network-scripts/ifcfg-ens1
DEVICE=ens1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
Now we just need to restart the network service with the command below.
# systemctl restart network
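If NetworkManager is managing the interfaces, it may also be necessary to make it re-read the edited ifcfg files (an optional step; run it before the restart if the new settings do not take effect):
nmcli connection reload    # re-read the ifcfg-* files under /etc/sysconfig/network-scripts/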
Now we can see in the ifconfig output that the bond channel interface bond0 has been configured.
[root@host1 ~]# ifconfig 
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.122.150  netmask 255.255.255.0  broadcast 192.168.122.255
        inet6 fe80::5054:ff:fe5f:d028  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:5f:d0:28  txqueuelen 1000  (Ethernet)
        RX packets 16  bytes 2829 (2.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21  bytes 3690 (3.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 52:54:00:5f:d0:28  txqueuelen 1000  (Ethernet)
        RX packets 116  bytes 22276 (21.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 140  bytes 23402 (22.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 52:54:00:5f:d0:28  txqueuelen 1000  (Ethernet)
        RX packets 107  bytes 19772 (19.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 116  bytes 22411 (21.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
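
On recent systems the ip command is the preferred alternative to ifconfig; the same state can be checked with the commands below (equivalent checks, not part of the output above):
ip addr show bond0                        # addresses and state of the bond
cat /sys/class/net/bond0/bonding/slaves   # lists the interfaces enslaved to bond0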

We can also see the details in the dynamic proc file /proc/net/bonding/bond0.
[root@host1 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:5f:d0:28
Slave queue ID: 0

Slave Interface: ens1
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 52:54:00:5f:d0:26
Slave queue ID: 0
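
To verify the fault tolerance, we can simulate a link failure on one slave and confirm that the bond keeps working (a quick test sketch; run it from the console rather than over the link being failed):
ip link set ens0 down           # simulate a failure on the first slave
cat /proc/net/bonding/bond0     # ens0 should now show MII Status: down and an increased Link Failure Count
ping -c 3 192.168.122.1         # connectivity should survive over ens1
ip link set ens0 up             # restore the slave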
We can also check the bond details with the nmcli command.
[root@host1 ~]# nmcli con show
NAME         UUID                                  TYPE            DEVICE 
System ens0  013f5319-f084-01ea-d35e-8e1e492224ee  802-3-ethernet  ens0   
System ens1  d18b6429-133f-4947-3b25-4482c7f9d5e7  802-3-ethernet  ens1   
bond0        ad33d8b0-1f7b-cab9-9447-ba07f855b143  bond            bond0
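As an alternative to editing the ifcfg files by hand, the same bond can be created entirely with nmcli (a sketch using the values from this setup; the exact option syntax can vary slightly between NetworkManager versions):
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=balance-rr,miimon=100" ip4 192.168.122.150/24 gw4 192.168.122.1
nmcli con add type bond-slave con-name ens0 ifname ens0 master bond0
nmcli con add type bond-slave con-name ens1 ifname ens1 master bond0
nmcli con up bond0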