
How to Configure Network Bonding or Teaming in Ubuntu

Network interface bonding is a mechanism used on Linux servers that binds two or more physical network interfaces together, either to provide more bandwidth than a single interface can deliver or to provide link redundancy in case of a cable failure. This type of link redundancy goes by multiple names in Linux, such as bonding, teaming, or Link Aggregation Groups (LAG).

To use the network bonding mechanism on Ubuntu or Debian based Linux systems, first you need to load the bonding kernel module and verify that the bonding driver is loaded via the modprobe command.
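For example (a sketch, assuming a Debian/Ubuntu system with root privileges):

```shell
# Load the bonding driver into the running kernel
sudo modprobe bonding

# Confirm the driver is now loaded
lsmod | grep bonding
```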

Check Network Bonding in Ubuntu

On older releases of Debian or Ubuntu you should install ifenslave package by issuing the below command.
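The package in question is ifenslave (packaged as ifenslave-2.6 on very old releases), installed via apt:

```shell
sudo apt-get update
sudo apt-get install ifenslave
```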

To create a bond interface composed of the first two physical NICs in your system, issue the below commands. However, this method of creating a bond interface is ephemeral and does not survive a system reboot.
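A sketch of such an ephemeral setup using the iproute2 tooling; the interface names eth0 and eth1 are assumptions (modern systems use predictable names such as enp0s3):

```shell
# Create a round-robin bond and enslave the two NICs to it
sudo modprobe bonding
sudo ip link add bond0 type bond mode balance-rr
sudo ip link set eth0 down
sudo ip link set eth0 master bond0
sudo ip link set eth1 down
sudo ip link set eth1 master bond0
sudo ip link set bond0 up
```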

To create a permanent bond interface in mode 0, manually edit the interfaces configuration file, as shown in the excerpt below.
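A minimal /etc/network/interfaces excerpt for a permanent mode 0 (balance-rr) bond; the interface names and addressing here are placeholders for illustration:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode 0
    bond-miimon 100
    bond-slaves eth0 eth1
```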

Configure Bonding in Ubuntu

In order to activate the bond interface, either restart the networking service, bring down the physical interfaces and bring up the bond interface, or reboot the machine so that the kernel picks up the new bond interface.
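For example, either of the following (assuming ifupdown and the standard networking service):

```shell
# Restart networking as a whole
sudo systemctl restart networking

# Or bring the pieces down and up by hand
sudo ifdown eth0 && sudo ifdown eth1 && sudo ifup bond0
```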

The bond interface settings can be inspected by issuing the below commands.
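Likely candidates are the iproute2 commands:

```shell
# Show addresses and link state of the bond
ip addr show bond0
ip link show bond0
```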

Verify Bond Interface in Ubuntu

Details about the bond interface can be obtained by displaying the contents of the kernel file below using the cat command, as shown.
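The file in question is exposed under /proc by the bonding driver:

```shell
cat /proc/net/bonding/bond0
```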

Check Bonding Information in Ubuntu

To investigate other bond interface messages or to debug the state of the bonded physical NICs, issue the below commands.
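A sketch of the usual places to look (requires root for kernel logs on some systems):

```shell
# Kernel ring buffer messages from the bonding driver
dmesg | grep -i bond

# Follow the system log for live link events
tail -f /var/log/syslog
```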

Check Bond Interface Messages

Next, use the mii-tool utility to check the Network Interface Controller (NIC) parameters, as shown.
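For example (mii-tool requires root; eth0 is an assumed interface name):

```shell
# Summary of link status for all interfaces
sudo mii-tool

# Verbose details for one slave NIC
sudo mii-tool -v eth0
```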

Check Bond Interface Link

The types of Network Bonding are listed below.

  • mode=0 (balance-rr)
  • mode=1 (active-backup)
  • mode=2 (balance-xor)
  • mode=3 (broadcast)
  • mode=4 (802.3ad)
  • mode=5 (balance-tlb)
  • mode=6 (balance-alb)

The full documentation regarding NIC bonding can be found in the Linux kernel documentation pages.



Bond. James Bond, or Combining Network Interfaces (Bonding)

A similar article was already published by AccessForbidden: «Объединение сетевых интерфейсов в linux».
This article is specifically about installation and configuration. I am writing it because I recently ran into problems installing and configuring bonding.


The situation was this: I had an oldish computer with a four-thread Pentium, a gigabyte of RAM, and a gigabit interface integrated on the motherboard. It served me as a gateway, a media center, and a NAS. But once a number of devices appeared at home (a TV, smartphones and computers), the bandwidth was no longer enough. However, I had a good Intel network card (also gigabit), so I decided to look into combining interfaces.

In general, Ethernet bonding (to be precise) is the combination of two or more physical network interfaces into one virtual interface to provide fault tolerance and increase network throughput. Or, to put it simply, RAID for network cards, except that here it does not matter whether the cards come from the same vendor.

Well, first of all, you need to decide whether you need this or not (most likely, if you are reading this, you probably do). Before we begin, a warning: everything must be done on the server and as root.

So, let's begin!

Insert the network card, if it is not in place yet, and connect both cards to a switch or router.

Preparation stage:

Now we need to install ifenslave; at the moment, version 2.6 is current. Install it:
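The install command is presumably (as root, matching the package version named above):

```shell
apt-get install ifenslave-2.6
```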

Now we need to bring down the interfaces we are combining (in my case, eth0 and eth1).

And stop networking:
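The two steps above likely correspond to (as root, with ifupdown and sysvinit-style scripts):

```shell
# Bring down the interfaces to be bonded
ifdown eth0 eth1

# Stop networking entirely
/etc/init.d/networking stop
```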

Configuration stage:

Now we need to edit the /etc/network/interfaces file

(I personally use nano).

Since I am showing how I did it, here is what mine looked like:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto usb0
allow-hotplug usb0
iface usb0 inet dhcp

auto eth0
iface eth0 inet static
address 192.168.0.1
netmask 255.255.255.0
network 192.168.0.0

auto eth1
iface eth1 inet dhcp

And I changed it to:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto usb0
allow-hotplug usb0
iface usb0 inet dhcp

auto bond0
iface bond0 inet static
address 192.168.0.1
netmask 255.255.255.0
network 192.168.0.0

slaves eth0 eth1
bond-mode balance-rr
bond-miimon 100
bond-downdelay 200
bond-updelay 200

I want to draw your attention to the following:

First: if you run a DHCP server, then in /etc/default/isc-dhcp-server I set Interfaces to bond0; do the same for your other services.
Second, also about DHCP: if you run such a server, give bond0's address, netmask and network the same values as the interface DHCP previously worked on.
Third: only bond0 should remain (0 in this case; by the way, there can be many of them). The interfaces we combined by listing them in the slaves line must be removed.

After this is done, type (back in the terminal):

Only that interface!
"Bring up" the network. By the way, the errors can be ignored; they are not critical.
Then we can reboot.
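The commands implied above are most likely (as root, with ifupdown):

```shell
# Bring up only the bond interface
ifup bond0

# Then start networking again
/etc/init.d/networking start
```

After that, ifconfig should show bond0 as MASTER and the enslaved NICs as SLAVE, as in the output below.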

bond0 Link encap:Ethernet HWaddr 00:16:e6:4d:5e:05
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::216:e6ff:fe4d:5e05/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:33518 errors:0 dropped:0 overruns:0 frame:0
TX packets:30062 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6687125 (6.3 MiB) TX bytes:17962008 (17.1 MiB)

eth0 Link encap:Ethernet HWaddr 00:16:e6:4d:5e:05
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:16630 errors:0 dropped:0 overruns:0 frame:0
TX packets:15031 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3288730 (3.1 MiB) TX bytes:8966465 (8.5 MiB)
Interrupt:43 Base address:0x6000

eth1 Link encap:Ethernet HWaddr 00:16:e6:4d:5e:05
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:16888 errors:0 dropped:0 overruns:0 frame:0
TX packets:15031 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3398395 (3.2 MiB) TX bytes:8995543 (8.5 MiB)
Interrupt:17 Memory:e1080000-e10a0000


You can also add alias interfaces such as bond0:0, and so on.

Specify the mode that suits you in /etc/network/interfaces, in the bond-mode line.


Ubuntu Documentation

Bonding, also called port trunking or link aggregation, means combining several network interfaces (NICs) into a single link, providing either high availability, load balancing, maximum throughput, or a combination of these. See Wikipedia for details.

Interface Configuration

Step 1: Ensure kernel support

Before Ubuntu can configure your network cards into a NIC bond, you need to ensure that the correct kernel module, bonding, is present and loaded at boot time.

Edit your /etc/modules configuration:
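/etc/modules lists kernel modules to load at boot, one per line; after editing it should contain a line naming the bonding module:

```
# /etc/modules: kernel modules to load at boot time.
bonding
```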

Ensure that the bonding module is loaded:
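For example, by checking the loaded-module list:

```shell
lsmod | grep bonding
```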

Note: Starting with Ubuntu 9.04, this step is optional if you are configuring bonding with ifup/ifdown. In this case, the bonding module is automatically loaded when the bond interface is brought up.

Step 2: Configure network interfaces

Ensure that your network is brought down:
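Presumably via the networking init script (sudo service networking stop on newer releases):

```shell
sudo /etc/init.d/networking stop
```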

Then load the bonding kernel module:
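That is:

```shell
sudo modprobe bonding
```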

Now you are ready to configure your NICs.

A general guideline is to:

  1. Pick which available NICs will be part of the bond.
  2. Configure all other NICs as usual
  3. Configure all bonded NICs:
    1. To be manually configured
    2. To join the named bond-master
  4. Configure the bond NIC as if it were a normal NIC
  5. Add bonding-specific parameters to the bond NIC as follows.

Edit your interfaces configuration:

For example, to combine eth0 and eth1 as slaves to the bonding interface bond0 using a simple active-backup setup, with eth0 being the primary interface:
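A sketch of the /etc/network/interfaces configuration for this setup; the addresses are placeholders:

```
auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none
```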

The bond-primary directive, if needed, needs to be part of the slave description (eth0 in the example), instead of the master. Otherwise it will be ignored.

As another example, to combine eth0 and eth1 using the IEEE 802.3ad LACP bonding protocol:
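A corresponding sketch for 802.3ad, again with placeholder addressing; the switch ports must be configured for LACP as well:

```
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none
```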

The bond statements in 12.04 are dashed instead of underscored. The config above is updated based on Stéphane Graber’s bonding example at : http://www.stgraber.org/download/complex-interfaces

The configuration as provided above works with Ubuntu 12.10 out of the box — michel-drescher, 10 Nov 2012

For bonding-specific networking options please consult the documentation available at BondingModuleDocumentation.

Finally, bring up your network again:
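Presumably the counterpart of the earlier stop command:

```shell
sudo /etc/init.d/networking start
```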

Checking the bonding interface

Link information is available under /proc/net/bonding/. To check bond0 for example:
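```shell
cat /proc/net/bonding/bond0
```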

Bringing up/down bonding interface

To bring up the bonding interface, run
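```shell
sudo ifup bond0
```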

To bring down a bonding interface, run
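```shell
sudo ifdown bond0
```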

Ethernet Bonding modes

Ethernet bonding has different modes you can use. You specify the mode for your bonding interface in /etc/network/interfaces. For example:
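A minimal illustrative stanza (the mode value is the part being chosen; the rest is placeholder):

```
auto bond0
iface bond0 inet dhcp
    bond-slaves none
    bond-mode active-backup
    bond-miimon 100
```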

Descriptions of bonding modes

Mode 0

Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

Mode 1

Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

Mode 2

XOR policy: Transmit based on a selectable hashing algorithm. The default policy is a simple source+destination MAC address algorithm. Alternate transmit policies may be selected via the xmit_hash_policy option, described below. This mode provides load balancing and fault tolerance.

Mode 3

Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.

Mode 4

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

Prerequisites:

  1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
  2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

Mode 5

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisites:

  • Ethtool support in the base drivers for retrieving the speed of each slave.

Mode 6

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.

Descriptions of balancing algorithm modes

The balancing algorithm is set with the xmit_hash_policy option.

Possible values are:

layer2 Uses XOR of hardware MAC addresses to generate the hash. This algorithm will place all traffic to a particular network peer on the same slave.

layer2+3 Uses XOR of hardware MAC addresses and IP addresses to generate the hash. This algorithm will place all traffic to a particular network peer on the same slave.

layer3+4 This policy uses upper layer protocol information, when available, to generate the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves.

encap2+3 This policy uses the same formula as layer2+3 but it relies on skb_flow_dissect to obtain the header fields which might result in the use of inner headers if an encapsulation protocol is used.

encap3+4 This policy uses the same formula as layer3+4 but it relies on skb_flow_dissect to obtain the header fields which might result in the use of inner headers if an encapsulation protocol is used.

The default value is layer2. This option was added in bonding version 2.6.3. In earlier versions of bonding, this parameter does not exist, and the layer2 policy is the only policy. The layer2+3 value was added for bonding version 3.2.2.
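For instance, the hash policy can be combined with 802.3ad in /etc/network/interfaces; this stanza is an illustrative sketch with placeholder addressing:

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```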

UbuntuBonding (last edited by sbright on 2017-11-07 14:58:45)


