I don't think any brctl commands are needed. The Netplan config handles all of that, so you should probably delete any bridge created using bridge-utils. That's just a guess. I haven't used brctl to create anything in at least 8 years; I only use it to check on a bridge's config, maybe once every 3 years, if there's an issue. Back in the olden days, around 2010, this stuff was all new to me.
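If you do need to remove a leftover bridge-utils bridge, something like this should do it (a guess on my part; I'm assuming the old bridge is named br0, so substitute whatever brctl show reports):
Code:
$ sudo ip link set br0 down
$ sudo brctl delbr br0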
The .yaml file looks reasonable. It would be helpful to see the current yaml file, the current ip a, and the current ip r | column -t after each attempted modification. Also note whether any generate or apply issues are reported. It might be useful to use the --debug option on both netplan commands too; maybe something useful will come out. Who knows. If any other files in the netplan directory have yaml extensions, those could conflict. You may also want to completely disable any other network management tools, like network-manager, wicd, or the old ifupdown stuff.
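Something like this would capture all of that in one go (a sketch; I'm assuming the config is /etc/netplan/01-netcfg.yaml, so use whatever your file is actually named):
Code:
$ ls /etc/netplan/
$ cat /etc/netplan/01-netcfg.yaml
$ sudo netplan --debug generate
$ sudo netplan --debug apply
$ ip a
$ ip r | column -t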
Because I don't use netplan on my VM hosts (I think I mentioned that previously), there isn't much more I can guess at to be helpful. Sorry.
My pihole is connected to my normal bridge, as set up years ago. But my pihole isn't a VM; it's an LXD container. It just seemed like a good way to play with an LXD container.
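For what it's worth, hooking an LXD container to an existing host bridge goes roughly like this (just a sketch from memory; the container and image names are examples, and I'm assuming no conflicting eth0 device already exists):
Code:
$ lxc launch ubuntu:22.04 pihole
$ lxc config device add pihole eth0 nic nictype=bridged parent=br0 name=eth0
$ lxc restart pihole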
From inside the pihole, it looks like a normal network:
Code:
pihole:~$ ip r | column -t
default via 172.22.22.1 dev eth0 src 172.22.22.80 metric 206
172.22.22.0/24 dev eth0 proto kernel scope link src 172.22.22.80 metric 206
pihole:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d5:71:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.22.22.80/24 brd 172.22.22.255 scope global eth0
       valid_lft forever preferred_lft forever
LXD containers are treated more like VMs than like typical containers.
From the VM host, it looks like:
Code:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 0c:9d:92:87:ce:13 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:9d:92:87:ce:13 brd ff:ff:ff:ff:ff:ff
    inet 172.22.22.6/24 brd 172.22.22.255 scope global br0
       valid_lft forever preferred_lft forever
$ ip r | column -t
default via 172.22.22.1 dev br0 onlink
172.22.22.0/24 dev br0 proto kernel scope link src 172.22.22.6
And the /etc/resolv.conf on the VM host is:
Code:
nameserver 172.22.22.80
And the bridges on the VM host:
Code:
$ brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.0c9d9287ce13       no              enp3s0
                                                        vethCDK24M
                                                        vnet0
                                                        vnet1
                                                        vnet2
                                                        vnet3
                                                        vnet48
I think that's the entire, working setup. I could post the interfaces file, but I don't think that would be too helpful for a netplan need.
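If I had to guess at a netplan translation of that bridge, it would look something like this (untested by me; the NIC name and addresses are copied from my setup above, so adjust for yours):
Code:
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp3s0]
      addresses: [172.22.22.6/24]
      routes:
        - to: default
          via: 172.22.22.1
      nameservers:
        addresses: [172.22.22.80]
      parameters:
        stp: false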
I do have an older physical system that isn't hosting any VMs right now. I suppose I could install netplan and try some stuff out. I need to think on that a bit, since there are some other NICs on that system used for LAN services that shouldn't be touched during the week. It does need to happen at some point before 2023.