Create a new VM. If you make it Linux, then we can guide you with commands to see the hardware presented; inxi -Fxxx is a start. Look that over. The motherboard, BIOS, CPU, GPU, networking, and disks are all virtual devices. The best performance comes from choosing virtio devices wherever that's possible - that goes for disks, NICs, and, if your hypervisor is new enough, the GPU.
Changing the CPU model is possible, but the only reason to do that is to support automatic failover to another hypervisor. For years I emulated Core2Duo CPUs even when the main system had a Core i5 - purely for failover needs. It wouldn't do for a guest to use new instructions on a CPU that doesn't have them. I'm setting up a Ryzen failover system in the next few weeks, but the CPUs are 2xxx and 5xxx series, so I need to ensure only 2xxx-series instructions are used.
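For libvirt/QEMU users, those choices come down to a few lines of domain XML. A sketch, not a complete definition - the disk path, network name, and the EPYC model are illustrative assumptions, not values from my setup:

```xml
<!-- Sketch of libvirt domain XML fragments (illustrative, not complete):
     virtio bus for the disk, virtio model for the NIC, and a fixed CPU
     model so the guest never sees instructions a failover host lacks. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>EPYC</model>
</cpu>
<devices>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/guest.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
  </interface>
</devices>
```

With mode='custom' and a fixed model, live or offline migration to the other hypervisor won't trip over missing instructions, because the guest only ever saw the baseline model.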
VMs open up a whole new world of skills, and running Windows inside VMs brings even more. I've never seen Win10/11 - I don't like the license agreement, so it isn't allowed on the LAN. Not everyone can do that. With older Windows licenses, swapping the motherboard was sufficient reason to require a new license, and pre-installed (OEM) licenses couldn't be moved at all - only retail ones could. Even when moving a retail license, I've always had to speak with someone at MSFT to get the license approved on the new system. It was never automatic unless I was swapping a GPU or something trivial like that.
For virtual video there's vmvga, cirrus, qxl, and now virtio as an option. My hypervisors are too old to have virtio for the GPU, so I don't have any experience with it. Plus, I tend to access any graphical desktops over the network, not from the physical system running the hypervisor. For me, "the network is the computer" has been how I've done things for 25 yrs, and I'm not going to sit in the same room with a computer just for graphics.
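Picking a video model is done the same way as the other devices. A libvirt domain XML sketch, assuming qxl (the vram value is just a common default, not a recommendation):

```xml
<!-- Sketch: video model selection in libvirt domain XML.
     Swap 'qxl' for 'virtio' on hypervisors new enough to support it. -->
<video>
  <model type='qxl' vram='65536' heads='1'/>
</video>
```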
An example of virtio for NICs .... iperf3 can show the maximum transfer rates possible:
Code:
$ iperf3 -c hadar
Connecting to host hadar, port 5201
[ 5] local 172.22.22.3 port 32786 connected to 172.22.22.6 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 3.01 GBytes 25.9 Gbits/sec 0 2.25 MBytes
[ 5] 1.00-2.00 sec 2.91 GBytes 25.0 Gbits/sec 0 2.25 MBytes
[ 5] 2.00-3.00 sec 3.29 GBytes 28.2 Gbits/sec 0 2.25 MBytes
[ 5] 3.00-4.00 sec 3.17 GBytes 27.2 Gbits/sec 0 2.25 MBytes
[ 5] 4.00-5.00 sec 3.31 GBytes 28.4 Gbits/sec 0 2.25 MBytes
[ 5] 5.00-6.00 sec 3.20 GBytes 27.4 Gbits/sec 0 2.25 MBytes
[ 5] 6.00-7.00 sec 3.15 GBytes 27.1 Gbits/sec 0 2.25 MBytes
[ 5] 7.00-8.00 sec 3.10 GBytes 26.6 Gbits/sec 0 2.62 MBytes
[ 5] 8.00-9.00 sec 2.80 GBytes 24.0 Gbits/sec 0 2.62 MBytes
[ 5] 9.00-10.00 sec 2.90 GBytes 24.9 Gbits/sec 0 2.75 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 30.8 GBytes 26.5 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 30.8 GBytes 26.5 Gbits/sec receiver
That test was between a VM and the host OS, but it is the same for connections between 2 VMs on the same host too. Over 26 Gbps. Not bad, right? The physical system has some Intel GigE NICs - nowhere near 26 Gbps worth. Of course, the same test between a VM and a different physical system drops back to whatever the physical NIC can handle ....
Code:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 270 sender
[ 5] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver
That's typical GigE connectivity.
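If you save a run to a file, the per-interval lines are easy to post-process. A minimal awk sketch, assuming the plain-text layout shown above (the file name iperf3.log is just an example):

```shell
# Average the per-second bitrates from a saved iperf3 run.
# Skips the summary lines (they contain "sender"/"receiver");
# $7 is the bitrate column in the layout shown above.
awk '/Gbits\/sec/ && !/sender|receiver/ { sum += $7; n++ }
     END { if (n) printf "%.1f Gbits/sec average over %d intervals\n", sum / n, n }' iperf3.log
```

Run against the first log above, it prints the same 26.5 Gbits/sec that iperf3 reports as the sender average.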