- Network interface bonding in Linux. Configuring bonding
- Aggregating network interfaces in Ubuntu and Debian
- Bonding modes
- Bonding network interfaces in CentOS, RHEL and Fedora
- Conclusion
- How to Configure Network Bonding or Teaming in Ubuntu
- Bonding
- Shutdown / Unconfigure Existing Interfaces
- Configuration — Example 1
- Configuration — Example 2 («Laptop-Mode»)
- Configuration — Example 3 («Laptop mode», mostly as per documentation — Debian 9 «stretch»)
- Configuration — Example 4 — (very complex server setup) with LACP Bonded trunk and VLANs split out of the trunk, different MTUs on the VLANs
- bridging the bond
- Using systemd-networkd
- Enabling systemd-networkd
- Configuring the bond device
- Add interfaces to the bond/lag
- Giving the bond an IP
- Actualise the settings
- enabling bridging for virtual machines
- udev renaming issue
- Testing / Debugging
- Debugging ifenslave
- Additional Note For Debian Lenny On Sparc
- Startup / Configure New Interfaces
- Change active slave
Network interface bonding in Linux. Configuring bonding
Network interface bonding is a mechanism used on Linux servers that combines several physical interfaces into a single virtual one, providing higher throughput or fault tolerance in case of a cable failure. In this guide we walk through setting up interface bonding on Linux for Ubuntu/Debian and CentOS/RHEL/Fedora.
Aggregating network interfaces in Ubuntu and Debian
Important! If you are running Ubuntu 17.10 or later, you either need to install the ifupdown package or configure link aggregation through netplan.
First of all, load the kernel module that provides bonding support and use the modprobe command to check that the driver is loaded.
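For example:

    # load the bonding driver and confirm it is present
    sudo modprobe bonding
    lsmod | grep bonding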
On older versions of Debian or Ubuntu you may also need to install the ifenslave package:
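    # user-space helper scripts for ifupdown-based bonding
    sudo apt-get install ifenslave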
To create a bonded interface out of two physical network cards in your system, run the commands shown below. Unfortunately, a bond created this way does not survive a system reboot:
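    # a sketch using iproute2; eth0/eth1 are placeholders for your NIC names
    sudo modprobe bonding
    sudo ip link add bond0 type bond mode balance-rr
    sudo ip link set eth0 down
    sudo ip link set eth0 master bond0
    sudo ip link set eth1 down
    sudo ip link set eth1 master bond0
    sudo ip link set bond0 up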
To create a permanent bonded interface in mode 0 (the modes are described in more detail below), you need to edit the network interface configuration files. Open /etc/network/interfaces with any text editor, for example nano, and adjust it as shown in the following excerpt (replace the IP address, netmask, gateway and DNS servers with the ones used in your network).
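    # /etc/network/interfaces (a sketch; addresses and NIC names are placeholders)
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 192.168.1.1
        bond-mode balance-rr
        bond-miimon 100
        bond-slaves eth0 eth1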
To activate the bonded interface, restart the networking service, bring the physical interfaces down and bring the bond up, or reboot the machine so that the kernel picks up the new bonded interface.
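For example:

    # either restart networking ...
    sudo systemctl restart networking
    # ... or take the physical NICs down and bring the bond up by hand
    sudo ip link set eth0 down
    sudo ip link set eth1 down
    sudo ifup bond0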
The settings of the bonded interface can be checked with commands such as the following:
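    # the SLAVE/MASTER flags appear in the link output
    ip addr show bond0
    ip link show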
Detailed information about the bonded interface can be obtained by viewing the corresponding kernel file with cat:
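    cat /proc/net/bonding/bond0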
The tail command can be used to watch the logs for errors:
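    # assumes a syslog-based setup; on journald-only systems use journalctl instead
    tail -f /var/log/syslog | grep -i bond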
The parameters of the network cards can be checked with the mii-tool utility:
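    # check link status of the physical NICs (names are placeholders)
    sudo mii-tool eth0 eth1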
Bonding modes
mode=0 (balance-rr)
With this method traffic is distributed round-robin: packets are sent through the bonded NICs in turn. For example, if the physical interfaces eth0, eth1 and eth2 are bonded into bond0, the first packet is sent through eth0, the second through eth1, the third through eth2, the fourth through eth0 again, and so on.
mode=1 (active-backup)
With this method only one physical interface is active, while the others act as backups in case the primary one fails.
mode=2 (balance-xor)
In this case the bonded interface decides which physical NIC to send packets through based on the source and destination MAC addresses.
mode=3 (broadcast)
Broadcast mode: every packet is sent through every interface. It has limited use, but provides a high degree of fault tolerance.
mode=4 (802.3ad)
A special bonding mode. It requires appropriate configuration of the switch that the bonded interface is connected to. It implements the IEEE link aggregation standard and provides both increased throughput and fault tolerance.
mode=5 (balance-tlb)
Transmit load balancing. Incoming traffic is handled normally, while the outgoing interface is chosen based on how loaded each NIC currently is.
mode=6 (balance-alb)
Adaptive load balancing. Similar to the previous mode, but the incoming load is balanced as well.
Bonding network interfaces in CentOS, RHEL and Fedora
Create a new file bonding.conf in the /etc/modprobe.d/ directory. The name can be anything, but the extension must be .conf. Put the following line into the file:
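    alias bond0 bonding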
A line like this in /etc/modprobe.d/bonding.conf is required for every bond interface.
To aggregate the interfaces, create a configuration file named ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory. Here is an example of what the configuration file might contain (the IP addresses on your system may differ):
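    # /etc/sysconfig/network-scripts/ifcfg-bond0 (a sketch; addresses are placeholders)
    DEVICE=bond0
    TYPE=Bond
    NAME=bond0
    BONDING_MASTER=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    BONDING_OPTS="mode=0 miimon=100"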
After creating the bonded interface, configure it and the network cards tied to it by adding the MASTER and SLAVE directives to their configuration files. The files for all of the slave interfaces can be almost identical. For example, for two interfaces eth0 and eth1 bonded together they might look as follows. Edit them as shown below.
For eth0:
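    # /etc/sysconfig/network-scripts/ifcfg-eth0 (slave example)
    DEVICE=eth0
    TYPE=Ethernet
    USERCTL=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none

The file for eth1 is the same except for the DEVICE=eth1 line.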
The meaning of these directives is as follows:
DEVICE: sets the device name
USERCTL: determines whether a regular user may control the interface (in this case, no)
ONBOOT: determines whether the interface is brought up at boot
MASTER: the master interface this device is attached to (here it is bond0)
SLAVE: whether this device acts as a slave
BOOTPROTO: how the IP address is obtained (e.g. dhcp); for a static IP address it is set to none
Restart the network service and verify the configuration with ifconfig.
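For example:

    # on older releases: service network restart
    sudo systemctl restart network
    ifconfig bond0
    cat /proc/net/bonding/bond0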
Conclusion
Network interface bonding is a convenient and capable mechanism for keeping your network fast and reliable. We hope this guide has been useful. More detailed information about the commands used here can be found in the corresponding man pages.
How to Configure Network Bonding or Teaming in Ubuntu
Network Interface Bonding is a mechanism used in Linux servers which consists of binding two or more physical network interfaces together in order to provide more bandwidth than a single interface can provide, or to provide link redundancy in case of a cable failure. This type of link redundancy has multiple names in Linux, such as Bonding, Teaming or Link Aggregation Groups (LAG).
To use network bonding mechanism in Ubuntu or Debian based Linux systems, first you need to install the bonding kernel module and test if the bonding driver is loaded via modprobe command.
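For example:

    # load the driver and confirm that it is present
    sudo modprobe bonding
    lsmod | grep bonding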
Check Network Bonding in Ubuntu
On older releases of Debian or Ubuntu you should install ifenslave package by issuing the below command.
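For example:

    sudo apt-get install ifenslave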
To create a bond interface composed of the first two physical NICs in your system, issue the below commands. However, this method of creating a bond interface is ephemeral and does not survive a system reboot.
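One way to do this with iproute2 (interface names are placeholders):

    sudo ip link add bond0 type bond mode balance-rr
    sudo ip link set eno1 down && sudo ip link set eno1 master bond0
    sudo ip link set eno2 down && sudo ip link set eno2 master bond0
    sudo ip link set bond0 up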
To create a permanent bond interface in mode 0 type, use the method to manually edit interfaces configuration file, as shown in the below excerpt.
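A minimal excerpt might look like this (addresses and NIC names are placeholders):

    auto bond0
    iface bond0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1
        bond-mode balance-rr
        bond-miimon 100
        bond-slaves eno1 eno2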
Configure Bonding in Ubuntu
In order to activate the bond interface, either restart the network service, bring down the physical interfaces and bring up the bond interface, or reboot the machine so that the kernel picks up the new bond interface.
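For example:

    sudo systemctl restart networking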
The bond interface settings can be inspected by issuing the below commands.
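For example:

    ip addr show bond0
    ip link show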
Verify Bond Interface in Ubuntu
Details about the bond interface can be obtained by displaying the content of the below kernel file using cat command as shown.
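The file in question is /proc/net/bonding/<bond-name>, for example:

    cat /proc/net/bonding/bond0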
Check Bonding Information in Ubuntu
To investigate other bond interface messages or to debug the state of the bond's physical NICs, issue the below commands.
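For example (assuming a syslog-based system):

    tail -f /var/log/syslog | grep -i bond
    dmesg | grep -i bond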
Check Bond Interface Messages
Next, use the mii-tool utility to check the Network Interface Controller (NIC) parameters as shown.
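For example (NIC names are placeholders):

    sudo mii-tool eno1 eno2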
Check Bond Interface Link
The types of Network Bonding are listed below.
- mode=0 (balance-rr)
- mode=1 (active-backup)
- mode=2 (balance-xor)
- mode=3 (broadcast)
- mode=4 (802.3ad)
- mode=5 (balance-tlb)
- mode=6 (balance-alb)
The full documentation regarding NIC bonding can be found at the Linux kernel documentation pages.
Bonding
This article will show how to «bond» two Ethernet connections together to create an auto failover interface.
First install the ifenslave package, necessary to enable bonding:
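    sudo apt-get install ifenslave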
Shutdown / Unconfigure Existing Interfaces
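Bring down each interface that will become part of the bond (interface names are examples):

    sudo ifdown eth0
    sudo ifdown eth1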
Sometimes ifdown doesn't work; in that case, use ifconfig eth0 down instead.
Configuration — Example 1
Modify the /etc/network/interfaces file:
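    # a sketch of a two-link active-backup bond; names and addresses are illustrative
    # eth0 is manually configured and slave to the bond0 bonded NIC
    auto eth0
    iface eth0 inet manual
        bond-master bond0

    # eth1 likewise, creating a 2-link bond
    auto eth1
    iface eth1 inet manual
        bond-master bond0

    # bond0 is the bonded NIC and can be used like any other normal NIC;
    # here it is configured with static network information
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-mode active-backup
        bond-miimon 100
        bond-slaves none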
Configuration — Example 2 («Laptop-Mode»)
Tie cable and wireless network interfaces (RJ45/WLAN) together to define a single, virtual (i.e. bonding) network interface (e.g. bond0).
As long as the network cable is connected, its interface (e.g. eth0) is used for the network traffic. If you pull the RJ45 plug, ifenslave switches over to the wireless interface (e.g. wlan0) transparently, without any loss of network packets.
After reconnecting the network cable, ifenslave switches back to eth0 («failover mode»).
From the outside (=network) view it doesn’t matter which interface is active. The bonding device presents its own software-defined (i.e. virtual) MAC address, different from the hardware defined MACs of eth0 or wlan0.
The DHCP server will use this MAC to assign an IP address to the bond0 device, so the computer has one unique IP address under which it can be identified. Without bonding, each interface would have its own IP address.
Modify the /etc/network/interfaces file:
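    # a sketch of the laptop-mode setup; interface names and the wpa_supplicant
    # configuration path are examples, so adjust them for your system
    auto eth0
    iface eth0 inet manual
        bond-master bond0
        bond-primary eth0
        bond-mode active-backup

    auto wlan0
    iface wlan0 inet manual
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
        bond-master bond0
        bond-primary eth0
        bond-mode active-backup

    auto bond0
    iface bond0 inet dhcp
        bond-slaves none
        bond-primary eth0
        bond-mode active-backup
        bond-miimon 100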
Note: The configuration above has been found working on Debian 6 and later versions. The last verified version is Debian 9.8 (*). The configuration is somewhat contrary to the documentation of interfaces, ifup and ifenslave and the examples under /usr/share/doc/ifenslave/examples/.
Theoretically only the bond0 interface should have the auto attribute. ifup bond0 will bring up the slaves automatically (as documentation says). This is partially true but obviously the configuration options of the slaves are ignored. E.g. wlan0 is brought up without starting wpa_supplicant and the bond-primary setting of eth0 is ignored. (TODO: Is this a bug in ifenslave?)
It seems the slaves must be brought up before bond0 to include their configuration options. To do so via the /etc/init.d/networking script, their definitions must be before the bond0 definition and the auto attributes have to be set.
Of course, they must not be started again when bond0 starts. The option bond-slaves none disables this.
The options bond-master, bond-primary and bond-mode have to be repeated consistently for each slave.
There will be warnings «ifup: interface xyz already configured», but at least it works.
(*) With newer Debian versions the names of the network devices (may) have changed, depending on the upgrade path. Installations from scratch now use «predictable network interface names» (https://wiki.debian.org/NetworkConfiguration#Predictable_Network_Interface_Names). To find the names of your interfaces you will want to look here: $ ls /sys/class/net/
This document still uses the traditional names.
Configuration — Example 3 («Laptop mode», mostly as per documentation — Debian 9 «stretch»)
This is a way to bring up a laptop mode with automatic failover between wired and wireless, with wired preferred if both are available, based on the documentation. However, the documentation example is not complete and not fully correct. Specifically, the changes to the example (/usr/share/doc/ifenslave/examples/ethernet+wifi) are:
the eth0 stanza is required; otherwise it will work initially, but bond0 will remove eth0 for good if the eth0 link goes back down after coming up once (as when going from wireless to wired and then back to wireless), instead of just disabling it until its link comes back up.
So here is the /etc/network/interfaces marvel:
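    # a sketch following the documentation example; names and addresses are illustrative
    auto bond0
    iface bond0 inet static
        address 192.168.1.20
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-slaves eth0 wlan0
        bond-mode active-backup
        bond-primary eth0
        bond-miimon 100

    # the eth0 stanza is required (see above)
    allow-hotplug eth0
    iface eth0 inet manual

    allow-hotplug wlan0
    iface wlan0 inet manual
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf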
If you use DHCP or some other service, you need to change the «bond0» stanza accordingly, but the other interfaces must remain «manual», as they aren’t supposed to get an IP address.
Configuration — Example 4 — (very complex server setup) with LACP Bonded trunk and VLANs split out of the trunk, different MTUs on the VLANs
The example is from Debian 10, with lots of syntax that is hard to derive from the man pages. You are NOT going to want this config on a desktop computer; it is likely only going to be used in a datacenter with a properly configured upstream network switch (an LACP, i.e. 802.3ad, trunk carrying 802.1Q VLANs). Example /etc/network/interfaces file:
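    # a condensed sketch (not the full listing): an 802.3ad/LACP bond of two NICs,
    # with VLANs split out of the trunk and different MTUs per VLAN;
    # interface names, VLAN IDs and addresses are illustrative
    auto eno1
    iface eno1 inet manual
        bond-master bond0

    auto eno2
    iface eno2 inet manual
        bond-master bond0

    # the bond itself carries no IP address, only the tagged trunk
    auto bond0
    iface bond0 inet manual
        bond-slaves none
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast
        bond-xmit-hash-policy layer3+4
        mtu 9000

    # management VLAN with the default MTU (requires the vlan package)
    auto bond0.10
    iface bond0.10 inet static
        address 10.0.10.5
        netmask 255.255.255.0
        gateway 10.0.10.1
        vlan-raw-device bond0
        mtu 1500

    # storage VLAN using jumbo frames
    auto bond0.20
    iface bond0.20 inet static
        address 10.0.20.5
        netmask 255.255.255.0
        vlan-raw-device bond0
        mtu 9000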
bridging the bond
If you want to use the bond in a bridge, simply add the bridge lines as per normal to your /etc/network/interfaces file. Change the bond interface to manual and use it as the bridge interface. Here’s a sample bridged bond interfaces file:
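    # a sketch of a bridged bond; names and addresses are illustrative
    auto eth0
    iface eth0 inet manual
        bond-master bond0

    auto eth1
    iface eth1 inet manual
        bond-master bond0

    # the bond is set to manual and carries no address of its own
    auto bond0
    iface bond0 inet manual
        bond-slaves none
        bond-mode active-backup
        bond-miimon 100

    # the bridge (requires bridge-utils) gets the address and uses bond0 as its port
    auto br0
    iface br0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0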
Using systemd-networkd
This method does not use the package ifenslave which is mentioned above. If your computer is using systemd, and your network cards are currently working, you don’t need anything else.
Note that as is common on unix-type operating systems, case matters — «Bond» is different from «bond» and «Name» is not the same as «name».
Enabling systemd-networkd
If you are not currently using systemd-networkd, you need to enable it.
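For example:

    sudo systemctl enable --now systemd-networkd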
Configuring the bond device
Create a file ending in .netdev in /etc/systemd/network. Name this after the bonded interface name you want to use (e.g. bond1.netdev).
This example assumes 802.3ad or LACP bonding, for more information see the systemd.netdev manpage and/or the kernel documentation.
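A sketch of such a file (the option values are illustrative):

    # /etc/systemd/network/bond1.netdev
    [NetDev]
    Name=bond1
    Kind=bond

    [Bond]
    Mode=802.3ad
    TransmitHashPolicy=layer3+4
    MIIMonitorSec=1s
    LACPTransmitRate=fast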
Most systems should work with 802.3ad and this is probably the mode you want as it has both network cards working together to give you double the throughput. However, if it doesn’t work in your case, you can try another mode, such as active-backup (used in the ifenslave example above).
Note that systemd always creates a default bond0 interface with balance round-robin mode, and that mode can't be changed. So, to use any other mode, create bond1 or another name for the interface. Interface bond0 with mode 802.3ad simply won't work.
Add interfaces to the bond/lag
There are two ways you can do this. One is to create a .network file for each network interface plus one for the bonded network. The other is to describe the network interfaces in the bonded network’s file. Here we’ll use the latter method.
Create a file ending in .network in /etc/systemd/network using the same name as previously (e.g. bond1.network).
systemd-networkd uses a matching system to decide which interface to use. You could use name-based matching here if you like, but do not use MAC-based matching, as this could cause confusion with the bond changing MAC addresses.
This example uses pci-id based matching. To find the addresses for your network cards, use:
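    # the ID_PATH value (e.g. pci-0000:04:00.0) is what the Path= match expects;
    # repeat for each NIC (the interface name here is an example)
    udevadm info -q property /sys/class/net/enp4s0 | grep ID_PATH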
then use that information to create the .network file.
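A sketch (the PCI paths are examples):

    # /etc/systemd/network/bond1.network
    [Match]
    Path=pci-0000:04:00.0 pci-0000:05:00.0

    [Network]
    Bond=bond1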
Another option is to simply use the names of the network interfaces by replacing the Path= line with Name= . You can also use wildcards, and you can specify both/all devices in a single file:
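    # /etc/systemd/network/bond1.network, name-based matching (names/globs are examples)
    [Match]
    Name=enp4s0 enp5s0
    # or, using a wildcard:
    # Name=enp*

    [Network]
    Bond=bond1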
Giving the bond an IP
Create a file ending in .network in /etc/systemd/network. The name (obviously) should not already be used. This tells systemd how to bring up the bonded network. For a static IP address, you could use:
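    # /etc/systemd/network/50-bond1-static.network (file name and addresses are examples)
    [Match]
    Name=bond1

    [Network]
    Address=192.168.1.10/24
    Gateway=192.168.1.1
    DNS=192.168.1.1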
For DHCP (e.g. for a laptop where you could use wireless and/or wired connections) try:
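    # /etc/systemd/network/50-bond1-dhcp.network (file name is an example)
    [Match]
    Name=bond1

    [Network]
    DHCP=yes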
Actualise the settings
If your network was using /etc/network/interfaces before setting up the bonding, rename the file to stop it from being used:
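    # the new name is only an example
    sudo mv /etc/network/interfaces /etc/network/interfaces.old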
At this point I recommend rebooting the system. This is the easiest way to clear out any previous network configurations and it tests that systemd-networkd starts as expected. The network should come up with bonding active. You can verify this with:
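    # the SLAVE/MASTER flags appear in the ip output
    ip addr show
    cat /proc/net/bonding/bond1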
You should see 4 devices, lo, your two physical network interfaces (marked as «SLAVE»), and the bond1 device. Only the bond1 device should have an ip address. It should also be marked as «MASTER».
If you need to make further changes later, or fix problems with your current setup, from now on you can simply restart systemd-networkd after updating the /etc/systemd/network files.
enabling bridging for virtual machines
I’ve added this section because it’s not immediately obvious that your existing network bridge configuration probably won’t work with systemd-networkd. Prior to setting up my network bond, I was using the bridge utils to create a br0 device using /etc/network/interfaces. Since I’ve removed that file, I needed a new way to set up the bridge.
Fortunately systemd-networkd is multi-talented and quite adept at handling network bridges. All you need to do is define the bridge and give it the appropriate characteristics.
Like the bond device, you need to create a .netdev file to define the device. I created br0.netdev as follows:
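    # /etc/systemd/network/br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge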
Then I link it to the network bond I’d defined earlier using br0.network:
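    # /etc/systemd/network/br0.network, which enslaves bond1 to the bridge
    [Match]
    Name=bond1

    [Network]
    Bridge=br0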
Finally, I change the management .network file to refer to br0 instead of bond1. Because the previous definitions have made bond1 a slave to br0, this results in the bridge being brought up properly.
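For example, the DHCP variant from above would become (a sketch):

    [Match]
    Name=br0

    [Network]
    DHCP=yes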
udev renaming issue
You will likely only see UDEV rules for your network devices if you have upgraded from previous versions of Debian. New installations name the network cards after their pci addresses. The rules are used to preserve the legacy names for devices (e.g. eth0) in case they are being used elsewhere.
If you are confident that you are not using the legacy names, you can simply remove the file described below.
udev assigns network adapter names according to rules in /etc/udev/rules.d/70-persistent-net.rules, where a rule typically looks like this:
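    # a typical MAC-based rule (the MAC address is an example)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"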
The problem with bonding is that two or more NICs may end up with the very same MAC address, which confuses udev when it tries to (re)name adapters according to their MACs and fails because another card with this MAC already exists. When this happens, a NIC may be left with a name like «rename2» instead of «eth0», etc.
A possible solution is to change the udev rule to assign network interface names based on the NICs' PCI IDs instead of their MAC addresses. This can be done by replacing
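    ATTR{address}=="00:11:22:33:44:55"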
with something like
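    # the PCI ID is an example; use the one for your adapter
    KERNELS=="0000:04:00.0"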
in the file «70-persistent-net.rules».
Corresponding PCI IDs can be found in dmesg:
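    # assumes legacy ethX names; the exact driver messages vary
    dmesg | grep -i eth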
where one can look for a line fragment containing the NIC's PCI address.
But this is not recommended as it will not find, for example, wireless devices or devices not using legacy names.
The preferred alternative is to find the PCI IDs using lspci -D | grep Ether:
Note that on modern systems, you can translate the PCI address to the network name by using the two middle numbers expressed in base 10. In the above example, the controller 0000:04:00.0 would be enp4s0.
Testing / Debugging
In order to get some insight into what is happening behind the scenes while experimenting, a small script that shows some information about the bonding device may be helpful.
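A sketch of such a script (the bond name defaults to bond0 here):

    #!/bin/sh
    # show the state of a bonding device and its slaves
    BOND=${1:-bond0}
    echo "=== /proc/net/bonding/$BOND ==="
    cat "/proc/net/bonding/$BOND"
    echo "=== sysfs ==="
    cat "/sys/class/net/$BOND/bonding/mode"
    cat "/sys/class/net/$BOND/bonding/slaves"
    cat "/sys/class/net/$BOND/bonding/active_slave"
    echo "=== link state ==="
    ip -br link show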
Debugging ifenslave
The bonding mechanism is based on a kernel module named bonding which exposes its interface via the virtual /sys filesystem (e.g. /sys/class/net/bond0/*).
Setup and configuration is done in userland with shell-scripts:
- /etc/network/if-post-down.d/ifenslave
- /etc/network/if-up.d/ifenslave
- /etc/network/if-pre-up.d/ifenslave
These scripts are called on system initialization and shutdown (actually it is ifup which calls them). Their intention is to feed the kernel module with the appropriate parameters and settings.
If bonding fails at all (and the tip above doesn't help), you may want to have a look at what the scripts do, step by step.
To enable verbose output, invoke ifup -a -v directly (instead of invoking /etc/init.d/networking). The -v option logs all commands the scripts are executing, which gives at least a trace of what is happening and when.
Unfortunately this will not show the reactions of the kernel module (like possible error messages), because kernel (module) messages are reported via the syslog utility.
To get real insight into what is going on, you have to do what is called invasive debugging. This means adding lines to the scripts at critical points that send a message to syslog.
For example, in the function sysfs_change_down in the file /etc/network/if-pre-up.d/ifenslave:
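    # a hypothetical illustration: add a logger call at the point you want to trace
    logger -t ifenslave-debug "entering sysfs_change_down"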
Additional Note For Debian Lenny On Sparc
(may be applicable on other architectures as well)
Without this file, you will get a warning when starting up the bonded interface similar to this:
Startup / Configure New Interfaces
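Bring the bond (and, if configured separately, its slaves) back up, for example:

    sudo ifup bond0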
Moreover, if you use a Lenny environment which has been upgraded from Etch, it is strongly recommended to check the bonding device mode with the command below, because configuration files for Etch and older versions do not work for Lenny and later releases.
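One way to check the mode (the bond name is an example):

    grep -i mode /proc/net/bonding/bond0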
1. Ping another system from a terminal
2. Disconnect the active network cable and watch the ping output; the network should resume within a few seconds
3. Reconnect the disconnected network cable and wait about 30 seconds to let the ARP tables be updated
4. Disconnect the other network cable and watch the ping output; the network should again resume within a few seconds
Change active slave
1. Use ifenslave to change the active slave. The example below will set eth0 as the active slave:
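    # assumes the bond is named bond0 and eth0 is one of its slaves
    sudo ifenslave -c bond0 eth0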