View Full Version : [ubuntu] 802.3ad Network Bonding & KVM Network Bridge



NaughtyusMaximus
June 20th, 2008, 11:12 PM
Hi all,
I'm trying to set up an Ubuntu KVM server and have run into some problems. On the server, I have 6 network interfaces in total, and I'd like to have at least four of them bonded (as in the 802.3ad spec).

I've followed the guide here, and that seems to work just fine: https://help.ubuntu.com/community/UbuntuBonding

My /etc/network/interfaces file looks like this:

auto bond0
iface bond0 inet dhcp
pre-up modprobe bonding
up ifenslave bond0 eth2 eth3 eth4 eth5
pre-down ifenslave bond0 -d eth2 eth3 eth4 eth5
post-down rmmod bonding


That seems to work ok on its own. Where I run into problems is when I try to create a KVM bridge, and add the following code to my network/interfaces file:


auto br0
iface br0 inet dhcp
bridge_ports bond0
bridge_stp off
bridge_maxwait 5

When I start networking on the machine, the br0 interface ends up with an IP on my network, and all of the others show as not having an IP, which I believe is correct. Networking will work on the machine for a couple of minutes, and then I can no longer even ping anywhere on the network. Am I doing something obviously wrong and stupid here?
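In case it helps anyone hitting the same thing: /proc/net/bonding/bond0 shows per-slave link state, and counting the "MII Status: up" lines is a quick health check. A minimal sketch — the status text below is an illustrative sample, not from my box; on a live system you'd just read the real file:

```shell
# Illustrative sample of /proc/net/bonding/bond0; on a real host, replace
# the variable with:  sample_status=$(cat /proc/net/bonding/bond0)
sample_status='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth2
MII Status: up
Slave Interface: eth3
MII Status: down'

# The first "MII Status" line is the bond itself; the rest are the slaves.
up_count=$(printf '%s\n' "$sample_status" | grep -c '^MII Status: up')
echo "interfaces reporting link up: $up_count"
# prints: interfaces reporting link up: 2
```

If a slave shows "MII Status: down" there, the problem is below the bridge layer, not in the br0 config.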

NaughtyusMaximus
June 23rd, 2008, 05:09 PM
Any thoughts?

ptitrene
September 24th, 2008, 10:25 AM
Hello,

Have you managed to find a way to solve your problem? I'm in the same situation ...

zahadum
November 29th, 2008, 05:35 PM
Maybe too late for you, but I solved it this way in Ubuntu 8.10:

/etc/network/interfaces:


auto bond0
iface bond0 inet manual
up ifenslave bond0 eth0 eth1
pre-down ifdown br0
down ifenslave -d bond0 eth0 eth1


auto br0
iface br0 inet static
address 10.10.196.27
netmask 255.255.255.0
network 10.10.196.0
broadcast 10.10.196.255
gateway 10.10.196.254
dns-nameservers 10.10.196.16 10.10.196.12
dns-search domain.local
bridge_ports bond0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off


In /etc/modprobe.d/bonding I have:


alias bond0 bonding
options bond0 mode=802.3ad miimon=100 max_bonds=2 primary=eth0 updelay=500 downdelay=500


(For mode=802.3ad the switch must have support for this mode and it has to be enabled on the switch)
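To expand on that: besides enabling LACP on the switch, the bonding module accepts a couple of 802.3ad-specific knobs. A variant of zahadum's options line I have seen used — note that lacp_rate and xmit_hash_policy are my additions, not part of his config, so treat them as optional:

```
alias bond0 bonding
# lacp_rate=fast requests LACPDUs from the partner every second instead of
# every 30s; xmit_hash_policy picks which header fields spread the load.
options bond0 mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3
```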

regards,
Zahadum

NaughtyusMaximus
November 29th, 2008, 11:07 PM
Thanks very much for posting this!

GlobeFr
December 16th, 2008, 11:55 AM
Thx for the config zahadum,

Unfortunately, it doesn't work with my Hardy distro...

When I look at the tcpdump -i bond0 output, I can see that the bond is working.
The bridge works well too, but when I start my VM via virsh (libvirt), the VM has network access for at most 15 seconds.
After that, all I see in the tcpdump output is: LACPv1, length: 110.

Zahadum, do you use a specific network script to start your VM or to attach the TAP device to the bridge?

Thanks for your help

Dr.Dran
January 5th, 2009, 05:13 PM
Hi guys!

I have a strange problem with my Ubuntu Server 8.10:

I've applied the configuration in /etc/network/interfaces as
zahadum suggested, and if I restart networking with


/etc/init.d/networking restart

all the interfaces come up, but if I reboot the system, after the Grub boot the kernel goes into a panic!

Have you ever experienced a similar problem? It isn't a hardware problem like a memory misconfiguration, because I've tested for that with a diagnostic program...

Best regards

Franco Tampieri

P.S. I've updated the kernel to 2.6.27-9

Schorschi
April 12th, 2009, 08:14 PM
I have the same issue: if I set up a bridge and a bond and then link them, with br0 using bond0 as its port, the box crashes randomly, ending in a trace dump. If I reboot the box, it is unusable and ends in a trace dump again. I had to completely re-install everything.

The ugly part is that this happens on both 8.10 and 9.04 beta! Yes, both crash as noted above. I cannot seem to figure out how to get a bond working as a bridge port, and this is not good; we must have both to meet our virtualization needs. We need to recreate the behavior that VMware ESX provides out of the box: a bond that is active/active and bridges to the physical network adapters. It is my understanding that bridge_ports eth<x> eth<y> under a defined bridge is not a bond and not configurable, since all traffic goes to the first port that is up?

Anyone figure out how to setup bond as bridge that virt-manager will honor? Thanks in advance!

Schorschi
April 15th, 2009, 06:59 AM
The following is almost exactly what I see in Ubuntu when I establish a bond, and then try to use the bond as part of a bridge.

http://bugs.centos.org/view.php?id=3095

jinzishuai
June 2nd, 2009, 07:35 PM
I have almost exactly the same problem on Jaunty Server.
The bridge and bonding each work very well separately, but when I put them together I get a kernel dump and system crash.

jinzishuai
July 27th, 2009, 01:51 AM
I finally got it working with the /etc/network/interfaces file as follows:


auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond-mode 0
bond-miimon 100

auto br0
iface br0 inet static
address 192.168.1.23
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.1.1
dns-search example.com
bridge_ports bond0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

brutimus
January 29th, 2010, 07:56 PM
I replicated what jinzishuai posted and it fixed my problems! The solution appears to have been with the bridge_fd, bridge_hello, and bridge_maxage values. Thanks!

Here is my /etc/network/interfaces


auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond-mode 4
bond-miimon 100

auto virbr0
iface virbr0 inet static
address 192.168.3.182
netmask 255.255.255.0
network 192.168.3.0
broadcast 192.168.3.255
gateway 192.168.3.1
dns-nameservers 192.168.3.1
dns-search gsi
bridge_ports bond0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
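My reading of why those timer values matter — an assumption on my part, so verify against the bridge docs: even with bridge_stp off, the Linux bridge keeps a newly attached port in the listening and then learning state for bridge_fd seconds each before it starts forwarding, so a large (or default) forward delay can make a freshly started VM look dead for a while:

```shell
# With bridge_fd 9, a new port spends 9s listening + 9s learning
# before it forwards any traffic.
bridge_fd=9
echo "seconds before a new bridge port forwards: $((bridge_fd * 2))"
# prints: seconds before a new bridge port forwards: 18
```

This is also why many virtualization guides set bridge_fd 0 on host bridges.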

SpiderNetUK
February 8th, 2010, 12:52 PM
Thanks, this has helped me get my interfaces bonded and bridged so the VMs can be networked. The trouble is I'm seeing a lot of kernel error messages in the syslog:


Feb 8 11:46:13 PUBLIC5 kernel: [268889.602543] bond0: received packet with own address as source address
Feb 8 11:46:14 PUBLIC5 kernel: [268890.600951] bond0: received packet with own address as source address
Feb 8 11:46:15 PUBLIC5 kernel: [268891.599291] bond0: received packet with own address as source address
Feb 8 11:48:13 PUBLIC5 kernel: [269009.401908] bond0: received packet with own address as source address
Feb 8 11:48:14 PUBLIC5 kernel: [269010.400340] bond0: received packet with own address as source address


This results in the VM being unresponsive and not replying to ping requests; it happens every 2 minutes.

Has anyone else had this? I'm thinking it might be something to do with broadcasts.
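For what it's worth, one common cause of "received packet with own address as source address" is the switch reflecting the bond's frames back at it, which can happen when both slaves share one MAC but the switch ports are not configured as a single LAG. A capture filtered on the bond's own source MAC makes this visible; a sketch that just builds the capture command (the MAC below is a placeholder, not SpiderNetUK's — substitute the address shown by ip link show bond0):

```shell
# Placeholder MAC: substitute your bond's real hardware address.
bond_mac="00:11:22:33:44:55"
# Capture only frames whose *source* MAC is the bond's own address;
# any hits mean the switch is echoing your frames back at you.
capture_cmd="tcpdump -e -n -i bond0 ether src $bond_mac"
echo "$capture_cmd"
# prints: tcpdump -e -n -i bond0 ether src 00:11:22:33:44:55
```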

SpiderNetUK
March 17th, 2010, 03:46 PM
Thought I'd just bump this thread...

In the end I had to disable the bond; the bridge works fine that way.

chienpo
June 14th, 2010, 11:21 PM
While doing some searching I found this thread. I am also trying to set up a server to host virtual guests. This server has 4 network interfaces that I want to bond together for redundancy/availability (and bandwidth). I then want to set up bridging on the bonded interface so I can use this for my virtual guests, allowing them to appear as hosts on the same subnet as my server.

The difference between me and earlier posters appears to be that I'm running a newer version of Ubuntu Server: 10.04 (lucid lynx).

Anyhow I first set up my bonded interface successfully using the following /etc/network/interfaces:



# ...
# The bonded network interface
auto bond0
iface bond0 inet static
address 192.168.100.102
netmask 255.255.255.0
gateway 192.168.100.1
bond-slaves none
bond-mode 802.3ad
bond-miimon 100

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto eth3
iface eth3 inet manual
bond-master bond0


The bonding alone works great. However, I can't get both bonding and bridging to work together. I've tried the configurations posted earlier that worked for others. I've also tried several variations of my own, essentially stabbing in the dark for solutions. My most recent attempt looked like the following:



# ...
# The bonded network interface
auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode 802.3ad
bond-miimon 100

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

auto eth2
iface eth2 inet manual
bond-master bond0

auto eth3
iface eth3 inet manual
bond-master bond0

# The bridged network interface
auto br0
iface br0 inet static
address 192.168.100.102
netmask 255.255.255.0
gateway 192.168.100.1
bridge_ports bond0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off


Some of the permutations I've attempted include reordering the above stanzas in my /etc/network/interfaces file, as well as variations that look more like the versions people have posted (usually for older versions of Ubuntu).

Has anyone out there been able to successfully get bonding and bridging to work together on Ubuntu Server 10.04 (lucid lynx) in the manner that I am? Does anyone otherwise have any suggestions or pointers?

Uthark
June 25th, 2010, 03:59 AM
Has anyone out there been able to successfully get bonding and bridging to work together on Ubuntu Server 10.04 (lucid lynx) in the manner that I am? Does anyone otherwise have any suggestions or pointers?
I have Ubuntu Server 10.04 too, and I've managed to get some results.

I've set up bonding as usual, without using bond-master and bond-slave:


auto bond0
iface bond0 inet static
address 0.0.0.0
netmask 0.0.0.0
bond_mode balance-rr
bond_miimon 100
bond_downdelay 200
bond_updelay 200
slaves eth0 eth1


And the bridge as:


auto br0
iface br0 inet static
address 192.168.1.23
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
bridge_ports bond0
bridge_stp on

My current issue is trying to connect br0 to KVM guests

Uthark
June 27th, 2010, 06:07 PM
After 12 hours straight, I've got this setup working for now:

First, enable the bonding module by appending it to /etc/modules:



# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

loop
lp
rtc
bonding


Then edit /etc/network/interfaces:


# The loopback network interface
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode balance-alb
bond-miimon 100

auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0 eth1

auto eth1
iface eth1 inet manual
bond-master bond0
bond-primary eth0 eth1

auto vnet0
iface vnet0 inet static
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.1
bridge-ports bond0

Reboot and confirm that:


Your slave interfaces (eth0 and eth1 in my case) are UP, RUNNING and in SLAVE mode (check with ifconfig ethX)

root@kvmhost:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:30:48:f0:1b:ef
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:19 errors:0 dropped:0 overruns:0 frame:0
TX packets:188 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:1372 (1.3 KB) TX bytes:11311 (11.3 KB)
Memory:fafe0000-fb000000

root@kvmhost:~# ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:30:48:f0:1b:ee
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:65 errors:0 dropped:0 overruns:0 frame:0
TX packets:227 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:7810 (7.8 KB) TX bytes:18041 (18.0 KB)
Memory:faf20000-faf40000
Check if your bond (bond0 in my case) is UP, RUNNING, in MASTER mode (check with ifconfig bondX) and has no IP assigned.

root@kvmhost:~# ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:30:48:f0:1b:ef
inet6 addr: fe80::230:48ff:fef0:1bef/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:145 errors:0 dropped:0 overruns:0 frame:0
TX packets:948 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16408 (16.4 KB) TX bytes:62374 (62.3 KB)

Check if your bridge (vnet0 in my case) is UP, RUNNING and has the IP you've set (192.168.1.2 in my case):

root@kvmhost:~# ifconfig vnet0
vnet0 Link encap:Ethernet HWaddr 00:30:48:f0:1b:ef
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::230:48ff:fef0:1bef/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:144 errors:0 dropped:0 overruns:0 frame:0
TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:14788 (14.7 KB) TX bytes:11344 (11.3 KB)

Check if the bond is attached to the bridge. brctl show should show something like:

root@kvmhost:~# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.000000000000 yes
vnet0 8000.003048f01bef no bond0

Check your ip routing table:

root@kvmhost:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 * 255.255.255.0 U 0 0 0 vnet0
192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0
default 192.168.1.1 0.0.0.0 UG 100 0 0 vnet0

Check your KVM VM's XML definitions (listed at /etc/libvirt/qemu):

<interface type='bridge'>
<mac address='52:54:00:0a:7b:f7'/>
<source bridge='vnet0'/>
<model type='virtio'/>
</interface>


I really hope all this hassle can be avoided, maybe by compiling netcf support into libvirt, so physical network interfaces can be detected and configured from virsh or virt-manager.

Don't forget to adjust bond-mode to your favorite bonding method!

Nuesel
January 20th, 2011, 11:08 AM
Hello,
Uthark, what packages do you have installed to get the bonding/bridge running? Probably ifenslave and bridge-utils; is there anything else I need?

Best regards
Nuesel

Uthark
January 20th, 2011, 11:41 PM
Hello,
Uthark, what packages do you have installed to get the bonding/bridge running? Probably ifenslave and bridge-utils; is there anything else I need?

Best regards
Nuesel
You're right, just those two


aptitude install ifenslave bridge-utils

Nuesel
January 21st, 2011, 10:51 AM
Thanks for the answer.
I believe I've found the problem:
I used Ubuntu 10.10, a fresh install with all updates applied. On this system your instructions don't work; the error message said something about an illegal argument.
But when I used ifenslave-2.6_1.1.0-14ubuntu2_amd64 (in my case 64-bit) from Ubuntu 10.04 (instead of the Ubuntu 10.10 package ifenslave-2.6 version 1.1.0-15), it seems to work. At least I checked all the things you mentioned and it looks good.
But if I run dmesg I see a lot of entries like:
vnet0: received packet on bond0 with own address as source address
similar to what SpiderNetUK mentioned.
I will keep trying...

Nuesel
January 21st, 2011, 11:43 AM
Hello again,
I added the following bridge options in /etc/network/interfaces:

bridge_fd 0
bridge_stp off # switch this on if you have more systems like this
bridge_maxage 0
bridge_ageing 0
bridge_maxwait 0

I have found these on:
http://newyork.ubuntuforums.org/showthread.php?t=1565074

I checked the manual, but honestly I do not understand whether the packets the host sends to itself are suppressed (so they are never transmitted), or whether only the reception of these packets is suppressed?

TooMeeK
May 18th, 2011, 03:20 PM
Hello,
I see I'm not alone :)

I'm trying to increase throughput between 2 servers using bonding, for testing purposes. I have used it before (and I do not recommend the HP V1910: round-robin doesn't work on it; try a 3Com 3824 instead).
Server 1 - iSCSI host (FreeNAS)
Fujitsu-Siemens RX331 S1 with two Quad-Core Opterons (the system sees 8 x 2.1GHz), 4GB RAM, 2 x 146GB SAS in HW RAID0, 2 x 250GB SATA in HW RAID0. Two 1Gbit onboard Broadcom cards with Jumbo Frames support.
Server 2 - KVM Host, Ubuntu Server 10.04
just desktop equipment with a single 4 x 2.7GHz Athlon II CPU, 6GB DDR2, 4 x 500GB RAID10 for VM storage, 1 x 500GB for OS/swap. Two 1Gbit Realteks without Jumbo Frames: one onboard and one PCI-Express.

Now:
The iSCSI storage is simple: it uses a round-robin policy across both Broadcoms. Working fine so far (just booted over PXE). It is connected directly to the VM using virt-manager and the iSCSI storage type, but iSCSI from within the VM works fine also (though slower, I think).
500MB/s read, 30MB/s write; that's what Atto reports on the iSCSI volume.
On Ubuntu I have a bridged interface (br0) with eth0, used to expose VMs to the network. Working fine; uptime was 35 days before my tests.
And now I have tried round-robin bonding mode. It works fine without the bridge (iperf/nuttcp reports ~1.8Gbit/s between the 2 Linux servers), but with the bridge the server seems to stop responding a few minutes after start and lags/drops packets under heavy load (no errors reported by ifconfig).

Is this a known issue with ARP? How can I solve it?
I'm wondering whether this could be caused by too many interrupts...

My working (lagging) config on Ubuntu:
/etc/network/interfaces

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode 0
bond-miimon 100

auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0 eth1

auto eth1
iface eth1 inet manual
bond-master bond0
bond-primary eth0 eth1

auto br0
iface br0 inet static
address 192.168.0.2
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1
# dns-* options are implemented by the resolvconf package, if
# installed
dns-nameservers 192.168.0.1
dns-search domain.com.pl
bridge-ports bond0

/etc/modprobe.d/bonding.conf

alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200
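Since this thread mixes numeric and named bond modes (bond-mode 0, bond-mode 4, mode=802.3ad), here is a small lookup matching the kernel bonding driver's numbering, handy for sanity-checking a config:

```shell
# Map the bonding driver's numeric modes to their names.
mode_name() {
  case "$1" in
    0) echo balance-rr ;;      # round-robin (TooMeeK's setup above)
    1) echo active-backup ;;
    2) echo balance-xor ;;
    3) echo broadcast ;;
    4) echo 802.3ad ;;         # LACP; needs matching switch config
    5) echo balance-tlb ;;
    6) echo balance-alb ;;
    *) echo unknown ;;
  esac
}
mode_name 0   # prints: balance-rr
mode_name 4   # prints: 802.3ad
```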