- bridge-utils vs OVS
- Open vSwitch
- Installation
- Configuration
- Overview
- Bridges
- Bonds
- VLANs Host Interfaces
- Rapid Spanning Tree (RSTP)
- Note on MTU
- Examples
- Example 1: Bridge + Internal Ports + Untagged traffic
- Example 2: Bond + Bridge + Internal Ports
- Example 3: Bond + Bridge + Internal Ports + Untagged traffic + No LACP
- Example 4: Rapid Spanning Tree (RSTP) — 1Gbps uplink, 10Gbps interconnect
- Multicast
- Using Open vSwitch in Proxmox
- We Love Servers.
bridge-utils vs OVS
Hi everyone. I'm trying to decide between a Linux bridge and OVS for virtualization on Proxmox. On the one hand, the former is simpler, and I don't really need extra functionality beyond attaching VMs to the network; on the other hand, I care about performance. Can anyone share success stories? Is it worth mastering OVS, or is it just not needed in this situation?
If you can do without it, it's better to do without it. There are successful deployments of it, but in simple cases there's no point. You'd only risk running into some trouble, since it's a new thing and not battle-tested in virtualization systems. If you feel like being an unpaid tester, then fine.
Got it, thanks. That's the option I'm leaning toward.
If you don't need VLAN functionality, use a Linux bridge.
bridge-utils lets you get tags on a vNIC. OVS lets you filter tags on the ports a vNIC is attached to. You can also solve this with bridge-utils by giving a VM a set of vNICs on different VLANs.
In oVirt 4.0, OVS appeared as the primary option, but in 4.1 it became experimental. Apparently the feedback was poor.
Bridges cope quite well even with VLAN functionality. Beyond that, it's a different question.
If you're not a hosting provider and not running a large installation, IMHO you don't need it, especially with Proxmox. Although there are people who manage to sell VPS services on it.
If you have containers/Docker and other virtualization, OVS rules.
Source
Open vSwitch
Open vSwitch (openvswitch, OVS) is an alternative to Linux native bridges, bonds, and VLAN interfaces. It supports most of the features you would find on a physical switch, along with advanced capabilities such as RSTP, VXLANs, OpenFlow, and multiple VLANs on a single bridge. If you need these features, it makes sense to switch to Open vSwitch.
Installation
Update the package index and then install the Open vSwitch packages by executing:
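On a Proxmox (Debian-based) host this amounts to:

apt update
apt install openvswitch-switch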
Configuration
Overview
Open vSwitch and native Linux bonds, bridges, and VLAN interfaces MUST NOT be mixed. For instance, do not attempt to add a VLAN to an OVS bond, or add a Linux bond to an OVSBridge, or vice versa. Open vSwitch is specifically tailored to function within virtualized environments; there is no reason to mix in the native Linux functionality.
Bridges
A bridge is another term for a switch. It directs traffic to the appropriate interface based on MAC address. Open vSwitch bridges should contain raw ethernet devices, along with virtual interfaces such as OVSBonds or OVSIntPorts. These bridges can carry multiple VLANs and can be broken out into 'internal ports' to be used as VLAN interfaces on the host.
Note that it is recommended to bind the bridge to a trunk port with no untagged VLANs, which means the bridge itself will never have an IP address. If you need to handle untagged traffic coming into the bridge, tag it (assign it to a VLAN) on the originating interface before it enters the bridge (you can assign an IP address to the bridge directly for that untagged data, but it is not recommended). You can split out tagged VLANs using virtual interfaces (OVSIntPort) if you need access to those VLANs from the local host. Proxmox assigns each guest VM a tap interface associated with a VLAN, so you do NOT need a bridge per VLAN (as classic Linux networking requires). Think of your OVSBridge much like a physical hardware switch.
When configuring a bridge, in /etc/network/interfaces, prefix the bridge interface definition with allow-ovs $iface. For instance, a simple bridge containing a single interface would look like:
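A minimal sketch, assuming eth0 as the lone physical interface:

allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0

allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort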
Remember: if you want to split out VLANs with IPs for use on the local host, you should use OVSIntPorts; see the sections that follow.
However, any interfaces (Physical, OVSBonds, or OVSIntPorts) associated with a bridge should have their definitions prefixed with allow-$brname $iface, e.g. allow-vmbr0 bond0
NOTE: All interfaces that are part of the bridge must be listed under ovs_ports, even if you have a port definition (e.g. OVSIntPort) that cross-references the bridge.
Bonds
Bonds are used to join multiple network interfaces together to act as a single unit. Bonds must refer to raw ethernet devices (e.g. eth0, eth1).
When configuring a bond, it is recommended to use LACP (aka 802.3ad) for link aggregation. This requires switch support on the other end. A simple bond using eth0 and eth1 that will be part of the vmbr0 bridge might look like this.
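A sketch of such a bond, using the LACP options described above (switch-side LACP support is assumed):

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast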
NOTE: The interfaces that are part of a bond do not need to have their own configuration section.
VLANs Host Interfaces
In order for the host (e.g. the Proxmox host, not the VMs themselves!) to utilize a vlan within the bridge, you must create OVSIntPorts. These split out a virtual interface on the specified vlan to which you can assign an IP address (or use DHCP). You need to set ovs_options tag=$VLAN to let OVS know which vlan the interface should be a part of. In the switch world, this is commonly referred to as an RVI (Routed Virtual Interface) or IRB (Integrated Routing and Bridging) interface.
IMPORTANT: These OVSIntPorts you create MUST also show up in the actual bridge definition under ovs_ports. If they do not, they will NOT be brought up even though you specified an ovs_bridge. You also need to prefix the definition with allow-$bridge $iface
Setting up this vlan port would look like this in /etc/network/interfaces:
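A sketch for a hypothetical vlan 50 with a placeholder static address:

# remember: vlan50 must also appear under ovs_ports in the vmbr0 stanza
allow-vmbr0 vlan50
iface vlan50 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    address 10.50.10.44
    netmask 255.255.255.0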
Rapid Spanning Tree (RSTP)
Open vSwitch supports the Rapid Spanning Tree Protocol, but it is disabled by default. Rapid Spanning Tree is a network protocol used to prevent loops in a bridged Ethernet local area network.
WARNING: The stock PVE 4.4 kernel panics; you must use a 4.5 or higher kernel for stability. Also, the Intel i40e driver is known not to work; older-generation Intel NICs that use ixgbe are fine, as are Mellanox adapters that use the mlx5 driver.
In order to configure a bridge for RSTP support, you must use an "up" script, as the "ovs_options" and "ovs_extras" options do not emit the proper commands. An example would be to add this to your "vmbr0" interface configuration:
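A sketch; the priority and timer values here are assumptions to tune for your topology:

allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0 vlan1
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=32768 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6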
It may be wise to also set a "post-up" script that sleeps for 10 or so seconds, waiting for RSTP convergence before boot continues.
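For instance, in the same stanza:

    post-up sleep 10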
Other bridge options that may be set are:
- other_config:rstp-priority= Configures the root bridge priority, the lower the value the more likely to become the root bridge. It is recommended to set this to the maximum value of 0xFFFF to prevent Open vSwitch from becoming the root bridge. The default value is 0x8000
- other_config:rstp-forward-delay= The amount of time the bridge will sit in learning mode before entering a forwarding state. Range is 4-30, Default 15
- other_config:rstp-max-age= Range is 6-40, Default 20
You should also consider adding a cost value to all interfaces that are part of a bridge. You can do so in the ethX interface configuration:
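A sketch, assuming eth0 is a 1GbE member port (the cost value is an assumption; see the option list below):

allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options other_config:rstp-path-cost=20000 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true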
Interface options that may be set via ovs_options are:
- other_config:rstp-path-cost= Default 2000 for 10GbE, 20000 for 1GbE
- other_config:rstp-port-admin-edge= Set to false if this port is known to be connected to a switch running RSTP, to prevent it entering a forwarding state when no BPDUs are detected
- other_config:rstp-port-auto-edge= Set to false if this port is known to be connected to a switch running RSTP, to prevent it entering a forwarding state when no BPDUs are detected
- other_config:rstp-port-mcheck= Set to true if the other end is known to be running RSTP rather than STP; this broadcasts BPDUs immediately on link detection
You can look at the RSTP status for an interface via:
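Assuming eth0 is the port of interest:

ovs-vsctl get port eth0 rstp_status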
NOTE: Open vSwitch does not currently allow a bond to participate in RSTP.
Note on MTU
If you plan on using an MTU larger than the default of 1500, you need to mark any physical interfaces, bonds, and bridges with the larger MTU by adding an mtu setting to the definition (e.g. mtu 9000); otherwise it will be disallowed.
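For instance, a hypothetical physical port carrying jumbo frames (the same mtu line also belongs in the matching bond and bridge stanzas):

allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    mtu 9000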
Odd Note: Some newer Intel Gigabit NICs have a hardware limitation which means the maximum MTU they can support is 8996 (instead of 9000). If your interfaces aren’t coming up and you are trying to use 9000, this is likely the reason and can be difficult to debug. Try setting all your MTUs to 8996 and see if it resolves your issues.
Examples
Example 1: Bridge + Internal Ports + Untagged traffic
The below example shows you how to create a bridge with one physical interface, with 2 vlan interfaces split out, and tagging untagged traffic coming in on eth0 to vlan 1.
This is a complete and working /etc/network/interfaces listing:
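A sketch consistent with that description, assuming eth0 as the physical NIC and placeholder addresses:

auto lo
iface lo inet loopback

# Physical interface; untagged ingress traffic is assigned to vlan 1
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=1 vlan_mode=native-untagged

# Host interface on vlan 1 (carries the formerly untagged traffic)
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address 10.50.10.44
    netmask 255.255.255.0
    gateway 10.50.10.1

# Second host interface, on vlan 55
allow-vmbr0 vlan55
iface vlan55 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=55
    address 10.55.10.44
    netmask 255.255.255.0

# The bridge itself; every port above must be listed in ovs_ports
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0 vlan1 vlan55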
Example 2: Bond + Bridge + Internal Ports
The below example shows you a combination of all the above features. 2 NICs are bonded together and added to an OVS Bridge. 2 vlan interfaces are split out in order to provide the host access to vlans with different MTUs.
This is a complete and working /etc/network/interfaces listing:
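A sketch consistent with that description (eth0/eth1, vlan ids 50 and 55, and the addresses are placeholders); vlan55 runs jumbo frames while vlan50 stays at the standard MTU:

auto lo
iface lo inet loopback

# LACP bond of the two NICs, jumbo-capable
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
    mtu 9000

# The bridge; its MTU must be at least as large as any member port's
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan50 vlan55
    mtu 9000

# Host interface on vlan 50 at the standard MTU
allow-vmbr0 vlan50
iface vlan50 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=50
    address 10.50.10.44
    netmask 255.255.255.0
    gateway 10.50.10.1
    mtu 1500

# Host interface on vlan 55 with jumbo frames
allow-vmbr0 vlan55
iface vlan55 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=55
    address 10.55.10.44
    netmask 255.255.255.0
    mtu 9000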
Example 3: Bond + Bridge + Internal Ports + Untagged traffic + No LACP
The below example shows a combination of all the above features. 2 NICs are bonded together and added to an OVS bridge. This example imitates the default Proxmox network configuration, but uses a bond instead of a single NIC; the bond works without a managed switch that supports LACP.
This is a complete and working /etc/network/interfaces listing:
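A sketch consistent with that description; balance-slb is one bond mode that needs no switch-side LACP (interface names and addresses are placeholders):

auto lo
iface lo inet loopback

# Bond of two NICs; balance-slb works without LACP on the switch
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth0 eth1
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-slb vlan_mode=native-untagged tag=1

# Host address lives on vlan 1, mirroring the default Proxmox setup
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan1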
Example 4: Rapid Spanning Tree (RSTP) — 1Gbps uplink, 10Gbps interconnect
WARNING: The stock PVE 4.4 kernel panics; you must use a 4.5 or higher kernel for stability. Also, the Intel i40e driver is known not to work; older-generation Intel NICs that use ixgbe are fine, as are Mellanox adapters that use the mlx5 driver.
This example shows how you can use Rapid Spanning Tree (RSTP) to interconnect your Proxmox nodes inexpensively while uplinking to your core switches for external traffic, all while maintaining a fully fault-tolerant interconnection scheme. This means VM-to-VM traffic (or possibly Ceph-to-Ceph) can run at the speed of the directly attached network interfaces in a star or ring topology. In this example, we use 10Gbps to interconnect our 3 nodes (direct-attach) and uplink to our core switches at 1Gbps. Spanning Tree, configured with the right cost metrics, will prevent loops and activate the optimal paths for traffic. We use this topology because 10Gbps switch ports are very expensive, so this is strictly a cost-saving manoeuvre. You could use 40Gbps ports instead of 10Gbps ports; the key thing is that the interfaces used to interconnect the nodes are higher-speed than the interfaces used to connect to the core switches.
This assumes you are using Open vSwitch 2.5+; older versions did not support Rapid Spanning Tree, only Spanning Tree, which had some issues.
To better explain what we are accomplishing, look at this ascii-art representation below:
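Schematically, the topology is a full mesh of 10Gbps links between the three nodes plus a 1Gbps uplink from each node to the core:

node1 <--10G--> node2
node2 <--10G--> node3
node3 <--10G--> node1
node1 <--1G--> core switch
node2 <--1G--> core switch
node3 <--1G--> core switch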
This is a complete and working /etc/network/interfaces listing:
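A sketch of one node's listing, assuming eth0 is the 1Gbps uplink and eth2/eth3 are the 10Gbps direct-attach links to the two peer nodes (interface names, costs, and addresses are assumptions):

auto lo
iface lo inet loopback

# 1Gbps uplink to the core switches: higher path cost
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=20000 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true

# 10Gbps direct-attach links to the peer nodes: lower path cost
allow-vmbr0 eth2
iface eth2 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=2000 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true

allow-vmbr0 eth3
iface eth3 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=2000 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true

# Host address for cluster traffic
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address 10.1.1.101
    netmask 255.255.255.0
    gateway 10.1.1.1

# Bridge with RSTP enabled via an "up" script (see the RSTP section)
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eth0 eth2 eth3 vlan1
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=32768 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
    post-up sleep 10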
On our Juniper core switches, we put in place this configuration:
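The exact configuration depends on the switch model; a sketch of the kind of Junos stanza involved (the interface name and cost value are hypothetical) might be:

protocols {
    rstp {
        interface ge-0/0/10 {
            cost 20000;
            mode point-to-point;
        }
    }
}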
Multicast
Right now Open vSwitch doesn't do anything with regard to multicast. Where you might typically tell Linux to enable the multicast querier on the bridge, you should instead set up the querier at your router or switch. Please refer to the Multicast_notes wiki for more information.
Using Open vSwitch in Proxmox
Using Open vSwitch isn’t that much different than using normal linux bridges. The main difference is instead of having a bridge per vlan, you have a single bridge containing all your vlans. Then when configuring the network interface for the VM, you would select the bridge (probably the only bridge you have), and you would also enter the VLAN Tag associated with the VLAN you want your VM to be a part of. Now there is zero effort when adding or removing VLANs!
Source
We Love Servers.
How to fix packet loss and latency in high-bandwidth VPS servers by upgrading to OpenVSwitch
Virtualization is one of the most pervasive and transformational technologies in the hosting world to have come along in the last decade. Despite this, maintaining the efficient operation of virtual machines is not always easy. In this article, we’ll go into one of the most common causes of performance problems we see on virtual machines our customers are running, detailing the symptoms, troubleshooting process, and one very effective solution.
One of the most challenging questions we get asked by customers running a Virtual Machine host node is 'Why is the network running badly on this new VM?' Generally this starts a search of all network infrastructure to ensure proper operation of every aspect of the network, from the host node OS interface itself to the routers handling the traffic. More often than not, this exhaustive search turns up no obvious problems. At every point there is no trouble and all links are operating at full speed. Meanwhile the VM is experiencing high latency, packet loss, and lower-than-normal throughput when it should not be.
At this point, what is the problem? You can continue scouring for ghosts on the network or you can take a moment to consider one common component involved in every node running a hypervisor: the Linux bridge.
The Linux bridge is a neat piece of software that acts as a virtual ethernet switch, "bridging" the virtual network interfaces of each Virtual Machine and the physical network interface card. As such, it manages all traffic from the virtual machines; without it they would have no network access at all. Thankfully, this software is installed automatically as part of the Linux kernel. It is also simple to configure: once it is active, you simply tell your VMs to use it as their network interface and the job is done.
However, this simplicity does come with some drawbacks. Most notably, as one of the older solutions to this network problem, it does not incorporate the currently known best practices for ensuring high performance software bridging.
There are some situations where the Linux bridge interface can become overwhelmed, causing some of the symptoms we mentioned earlier:
Does your setup require >10 Gbps networking, or have a proportionately high number of VMs compared to most? How about your node being part of a single very large network with lots of MAC or IP addresses? These types of situations, which generate high packet counts, large amounts of broadcast traffic, or otherwise high bandwidth requirements, can cause the bridge to perform at a subpar level or outright crash.
So what is a good solution to this problem?
This is where we can look at a third-party utility to provide a better-performing bridge platform that not only performs better under higher load, but also offers additional options for more complicated networking. One of the most popular high-performance Linux bridge alternatives is OpenVSwitch.
OpenVSwitch provides an alternative to the standard built-in Linux bridge and can be used with many different operating systems and virtualization platforms. In our experience with Proxmox, using OpenVSwitch instead of the Linux bridge noticeably increased the performance of our VMs while also enhancing stability.
If you want to give this a try, you will need to install and configure it. We’ll talk about how to get OpenVSwitch installed with Proxmox.
While OpenVSwitch is more difficult to set up than the Linux bridge, it is still pretty easy. First, all you need to do is install it from the default Proxmox repository via the command line:
apt install openvswitch-switch
This installs all the required packages, and that is it for the installation. The configuration is a little more involved. Since you can't run Linux bridge and OpenVSwitch interfaces at the same time, swapping to this configuration requires any running VMs to be offline. Because of this, it is more convenient to set up OpenVSwitch when you first set up a server, before you have active virtual machines running.
A simple bridge setup with OpenVSwitch involves three interface definitions: the physical interface, the bridge interface, and the default VLAN interface.
First, back up your network interfaces file at /etc/network/interfaces, then replace the appropriate physical interface with the following to create an OVSPort. We will be creating the bridge "vmbr0" in the next section, but it is also referenced here. Also note there is a VLAN option: this configuration assumes you have untagged traffic coming into the host node. Since OpenVSwitch wants VLAN-aware traffic, we redefine all traffic to be on "vlan1" as the default.
auto ens6
allow-vmbr0 ens6
iface ens6 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=1 vlan_mode=native-untagged
Next, you will need to define the actual bridge for your VMs to use. This part is very similar to the Linux bridge, but using OpenVSwitch instead. We simply set up the bridge so that it is aware of the various OVS ports.
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports ens6 vlan1
Finally, the last interface is the vlan interface. As mentioned earlier, OpenVSwitch wants to use a VLAN-aware layout. The above traffic will be routed to vlan1, and we define the actual interface for the host node on the bridge here.
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address ###.###.###.###
    netmask ###.###.###.###
    gateway ###.###.###.###
Once all three sections are set up in your network interfaces file according to your network requirements, you should only have to reboot your server to activate the changes.
Now that we have the new OpenVSwitch bridge set up, you should be able to see it in your Proxmox control panel with the appropriate OpenVSwitch (OVS) interfaces.
There is one more important thing to remember when creating your virtual machines: since the traffic needs to be VLAN-aware for the new bridge to function correctly, every VM must be attached to that vlan in its advanced configuration. Simply add the tag number 1 to each as you create it. That's all there is to it.
This is a very simple configuration and implementation of OpenVSwitch, and it is certainly capable of a lot more than just a bridge. To see more available options, such as bonding or RSTP, you can refer to the documentation here:
For people interested in the exact performance characteristics of OpenVSwitch vs the Linux bridge, this PDF report from the Dept. of Electrical, Electronic and Information Engineering, University of Bologna, Italy, has a wealth of information on the subject:
In this article we talked about a common problem with virtual machine network performance. Often, this is a problem with the performance of the default “linux bridge” software, which can be overcome by use of OpenVSwitch. We covered how to install and configure OpenVSwitch on Proxmox. The process for other virtualization platforms will differ somewhat but may be quite similar. Implementation of OpenVSwitch can often address these performance issues with linux bridge. As this is something we’ve done for quite a few of our customers, we wanted to share this information with the broader community.
Source