Page 2 of 2
Results 11 to 18 of 18

Thread: Can more than 1 VM share a physical NIC?

  1. #11
    Join Date
    Sep 2007
    Beans
    285

    Re: Can more than 1 VM share a physical NIC?

Is the MAC address physically embedded in the NIC? If I have 3 VMs sharing a NIC in bridged mode, will the MAC addresses of each VM be different? If yes, I presume it is the VM software that generates the MAC addresses for these virtual NICs... If so, is it possible for us to define the MAC address ourselves, or to replicate the MAC address of a previous VM?

  2. #12
    Join Date
    Jan 2007
    Beans
    739
    Distro
    Ubuntu 18.04 Bionic Beaver

    Re: Can more than 1 VM share a physical NIC?

In VirtualBox you can set the MAC address of each virtual NIC yourself, or let VirtualBox create a random one for you (it does this automatically when you create a new VM). If you clone a VM, I don't know whether it clones the MACs or not; you'd have to check. If you need two VMs to have the same MAC address, set it yourself. You wouldn't be able to run both VMs in bridged mode at the same time, though, since sharing a MAC address would cause a conflict.

A physical NIC has a fixed MAC address in the hardware, but it can be overridden by the driver. And when you use bridged mode, the NIC sends and receives packets for all the MAC addresses assigned to the VMs and passes them along to the appropriate VM.
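As a sketch of the "set it yourself" option: any 12-hex-digit address with the locally-administered bit set will do, and VBoxManage accepts it without separators. The VM name "myvm" is a placeholder, and the VBoxManage lines are commented out since they only work with VirtualBox installed:

```shell
# Generate a random locally-administered unicast MAC. The "02" prefix
# sets the locally-administered bit, so it can never collide with a
# vendor's burned-in address.
MAC="02$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | tr 'a-f' 'A-F')"
echo "$MAC"

# Assign it to the VM's first NIC ("myvm" is a placeholder name):
# VBoxManage modifyvm "myvm" --macaddress1 "$MAC"
# Or let VirtualBox pick a fresh random one, e.g. after cloning:
# VBoxManage modifyvm "myvm" --macaddress1 auto
```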
    Current 'buntu systems: Server 18.04.2 LTS, Mythbuntu 16.04 LTS, Ubuntu 16.04.1 LTS / Retired: 14.04 LTS, 10.04 LTS, 8.04 LTS
    Been using ubuntu since 6.04 (13 years!)

  3. #13
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    16,630
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Can more than 1 VM share a physical NIC?

As others have said, yes, you can share a physical NIC, but there are security considerations when doing that for all hypervisors. Each guest has the opportunity to see the traffic from the other users of the NIC. If that is a concern, as it would be in a commercial hosting environment (for the clients, not the provider), then you'll want to use PCI passthrough.

For the types of hypervisor use we've been discussing in other threads, you should stay away from VirtualBox and the non-Server stuff that VMware offers/sells. For your needs, there are really 3 options: KVM, Xen, or ESXi+vSphere. If you are a 100% Microsoft company, then a case could be made that their server Hyper-V stuff should be used.
    https://www.linux-kvm.org/page/Networking

Also, never forget that if VLAN traffic isn't filtered at the wire, then VLAN separation is just a suggestion and clients can access that traffic too. Lots of places use VLANs to run different networks over the same wires - VoIP + computers + admin access - without realizing that a nefarious computer can access all the traffic.

With that said, I use a Linux bridge for my VMs tied to 1 NIC on the public network side, and the host OS uses a different NIC for the traffic it gets on the admin network, which is NOT on the public interface/network and is physically separate. If you run multiple ethernet cables to each location and plug them into the same position in the wiring plan, it is pretty easy: the left plug is always VoIP; the right plug is for computing devices. Of course, then you need 2x the switch ports and (probably) trunk runs, so there is a price for the added security. I worked at a place with 4 separate cable runs to each cube/office for a decade. They were serious about security.
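For reference, a Linux bridge like that is usually set up through netplan on recent Ubuntu hosts. This is a minimal sketch, assuming the public-facing NIC is named enp3s0 (check yours with `ip link`) and that DHCP hands the bridge its address:

```yaml
# /etc/netplan/01-br0.yaml - bridge for VM traffic on the public NIC
network:
  version: 2
  ethernets:
    enp3s0:        # placeholder name; use your actual interface
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp3s0]
      dhcp4: yes
```

The guests then attach their virtual NICs to br0, and the host's admin NIC is left out of the bridge entirely.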

    Network security design is where all computer security begins, IMHO.

  4. #14
    Join Date
    Sep 2007
    Beans
    285

    Re: Can more than 1 VM share a physical NIC?

    Quote Originally Posted by TheFu View Post
As others have said, yes, you can share a physical NIC, but there are security considerations when doing that for all hypervisors. Each guest has the opportunity to see the traffic from the other users of the NIC. If that is a concern, as it would be in a commercial hosting environment (for the clients, not the provider), then you'll want to use PCI passthrough.

For the types of hypervisor use we've been discussing in other threads, you should stay away from VirtualBox and the non-Server stuff that VMware offers/sells. For your needs, there are really 3 options: KVM, Xen, or ESXi+vSphere. If you are a 100% Microsoft company, then a case could be made that their server Hyper-V stuff should be used.
    https://www.linux-kvm.org/page/Networking

Also, never forget that if VLAN traffic isn't filtered at the wire, then VLAN separation is just a suggestion and clients can access that traffic too. Lots of places use VLANs to run different networks over the same wires - VoIP + computers + admin access - without realizing that a nefarious computer can access all the traffic.

With that said, I use a Linux bridge for my VMs tied to 1 NIC on the public network side, and the host OS uses a different NIC for the traffic it gets on the admin network, which is NOT on the public interface/network and is physically separate. If you run multiple ethernet cables to each location and plug them into the same position in the wiring plan, it is pretty easy: the left plug is always VoIP; the right plug is for computing devices. Of course, then you need 2x the switch ports and (probably) trunk runs, so there is a price for the added security. I worked at a place with 4 separate cable runs to each cube/office for a decade. They were serious about security.

    Network security design is where all computer security begins, IMHO.
OK, so the NIC's MAC address can be overridden by the driver. That is so cool. So that's how it works. OK, I'm putting all of these concepts into my design.
You mentioned hypervisors. Do I need one if I won't be doing bare-metal VMs? What exactly do you see in my needs that says I should stick with KVM, Xen, or ESXi+vSphere? I was actually considering VirtualBox running on top of Ubuntu until you mentioned this. I have to admit that I've been reading posts on VirtualBox vs KVM, etc., and I was beginning to lean towards VirtualBox or VMware Player for ease of use and their ability to run on Windows too (which means I could transfer the image to those OSes in the future).

  5. #15
    Join Date
    Mar 2009
    Beans
    1,961

    Re: Can more than 1 VM share a physical NIC?

There's also SR-IOV, which stands for Single-Root I/O Virtualization. It's specific hardware support built into some network cards which (supposedly) handles some of the concerns about sharing a card in a virtualization environment.

  6. #16
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    16,630
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Can more than 1 VM share a physical NIC?

    Quote Originally Posted by webmiester View Post
OK, so the NIC's MAC address can be overridden by the driver. That is so cool. So that's how it works. OK, I'm putting all of these concepts into my design.
You mentioned hypervisors. Do I need one if I won't be doing bare-metal VMs? What exactly do you see in my needs that says I should stick with KVM, Xen, or ESXi+vSphere? I was actually considering VirtualBox running on top of Ubuntu until you mentioned this. I have to admit that I've been reading posts on VirtualBox vs KVM, etc., and I was beginning to lean towards VirtualBox or VMware Player for ease of use and their ability to run on Windows too (which means I could transfer the image to those OSes in the future).
Hypervisor is a generic term. KVM, Xen, VirtualBox, VMware Player, Workstation, ESXi ... are all hypervisors. The hypervisors I listed in the other post are stable, server-grade hypervisors. The others are for desktop-on-desktop VMs. They are also slower on modern hardware.

virt-manager is a GUI to manage KVM, Xen, and other hypervisors. It looks like the VirtualBox GUI or the VMware Workstation or Player GUIs, but it provides more capabilities. Most of those capabilities aren't needed, but when you do need them, they are there, on a different tab. I haven't used anything except virt-manager to create and install guest OSes in at least 8 years. I have needed to tweak some advanced storage setups, like Sheepdog, through virsh edit, but when doing n+1 replication, no GUI will handle that cleanly.

I should also point out that virt-manager works remotely to manage multiple hypervisors. A minimal Ubuntu Server install with just the stuff needed to run VMs, support it remotely (ssh), secure it, and back everything up is fairly light. virt-manager on a desktop connects via ssh to the libvirt stack on the hypervisor for management, setup, and tear-down. No need to install any GUI **on** the hypervisor's physical hardware.
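For anyone wondering how the remote part works: libvirt clients take a connection URI, and the ssh transport tunnels everything over a normal ssh login. A small sketch, where "admin" and "vmhost" are placeholder names; the client commands are shown commented out since they need a real remote host:

```shell
# libvirt connection URI for a remote KVM host, tunneled over ssh
URI="qemu+ssh://admin@vmhost/system"

# virt-manager -c "$URI"          # full GUI, run from your desktop
# virsh -c "$URI" list --all      # same connection from the CLI
echo "$URI"
```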

Years ago, claims were being made that the difference between type-1 and type-2 hypervisors mattered for all sorts of reasons. There were good reasons behind these claims, but it turned out that for most common workloads, the performance difference didn't matter. For some workloads, VirtualBox's non-HW-accelerated, software-only mode was faster than using the VT-x or AMD-V stuff.

Look at the ESXi hardware compatibility list. It might be limiting for some people. Of my 5 physical systems, only 1 would work with ESXi. If you have expensive Dell/HP/IBM servers of the current generation, then you are probably fine. Be certain to get the approved network cards and HBAs too. OTOH, KVM is part of the Linux kernel. It works on any Linux-compatible hardware, provided VT-x or AMD-V is supported - there are other factors, but, in general, those are it. If you don't have VT-x or AMD-V, you are out of the VM game already, except for using QEMU in software-only mode. I don't think 64-bit VirtualBox will run on hardware without it. I do believe 32-bit VirtualBox will, but that is crazy limiting for a VM host at this point. Spend the $100 and get a better computer.
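A quick way to check for VT-x/AMD-V on a Linux box is to grep the CPU flags; a sketch (vmx = Intel VT-x, svm = AMD-V):

```shell
# Count CPU cores whose flags advertise hardware virtualization.
# 0 means no VT-x/AMD-V, so no KVM full virtualization on this box.
CORES_WITH_VT=$(grep -Ec 'vmx|svm' /proc/cpuinfo || true)
echo "cores reporting VT-x/AMD-V: $CORES_WITH_VT"
```

On Ubuntu, `kvm-ok` from the cpu-checker package does a similar check and also tells you if the feature is disabled in the BIOS.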

MS-Windows runs great under KVM - better than under VirtualBox if you aren't doing GPU-intensive stuff.

    Did I mention F/LOSS?
    Last edited by TheFu; 1 Week Ago at 08:34 PM.

  7. #17
    Join Date
    Mar 2010
    Location
    Squidbilly-Land
    Beans
    16,630
    Distro
    Ubuntu Mate 16.04 Xenial Xerus

    Re: Can more than 1 VM share a physical NIC?

One thing needs to be clear: only install 1 hypervisor on a physical computer at a time. They will almost always conflict.

    But you should try each one out for at least a day before deciding anything. Gather some facts. Use a stopwatch for workload comparisons. See for yourself, which performs better with your current level of skill.

    Then after you pick 2, try those for a week. Run them like you would in production. See which is stable, fast, doesn't cause issues. If you can, try a version upgrade with those 2 hypervisors. See which is less pain. Does the upgrade break things? What hassles, if any, are there?

You don't need to commit to 1 most of the time - except with VMware's paid solutions. Once you drop $2K-$5K on licenses, you are sorta stuck.

  8. #18
    Join Date
    Mar 2009
    Beans
    1,961

    Re: Can more than 1 VM share a physical NIC?

The theory behind a hypervisor is that it's a minimalist implementation of virtualization software, and nothing else - the fewest features your virtual machines need from a host. There is an OS of sorts, but it's strictly for virtualization. That's a benefit for disk space, complexity, ease of maintenance, and security.

KVM is built into the Linux kernel, so you could run KVM on a full desktop system and it would be a sort of "thick" hypervisor. I like to use jelly and toast as an analogy, where the toast is your physical hardware and the jelly is the amount of stuff you have installed on your hypervisor. A strict hypervisor (VMware ESXi, etc.) is a thin layer of jelly on the toast, but still enough to taste it. KVM lets you spread the amount of jelly you want, according to your taste. My 3-year-old would put the whole jar of jelly on one piece of toast, and that would be a typical full-function desktop using KVM/QEMU.

