Re: Windows to Ubuntu migration (with windows VM)
That "simple" guide is about 10 pages of technical stuff - any wrong step and it won't work. Swapping out the supported kernel? Really? Also, who has multiple $250/ea GPUs in their system? I doubt that GT 710 will work, but I don't know.
OTOH, perhaps it is simple for some people.
Quote:
BTW, since the motherboard I'll be using with the VM is the same as now, wouldn't it automatically activate? I don't know if Windows' motherboard detection behaves differently under a VM. Also, I'm running Windows 10 already.
The real hardware isn't seen under a VM, unless that specific component is passed through using IOMMU. You cannot pass the entire motherboard through - sorry.
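To see what could be passed through, you can list the IOMMU groups the kernel built. A sketch using the standard sysfs layout - the output depends entirely on your board, BIOS settings, and kernel command line:

Code:
```shell
#!/bin/sh
# List IOMMU groups and the PCI vendor:device IDs in each.
# If the directory is empty, IOMMU is off in the BIOS or not enabled
# on the kernel command line (intel_iommu=on / amd_iommu=on).
check_iommu() {
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                # uevent carries PCI_ID=vendor:device, the value vfio-pci.ids wants
                echo "  ${d##*/} $(grep PCI_ID "$d/uevent" 2>/dev/null)"
            done
        done
    else
        echo "No IOMMU groups found - IOMMU disabled or not supported"
    fi
}

check_iommu
```

Keep in mind that everything sharing a group generally has to be passed through together, which is where a lot of passthrough plans fall apart.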
Re: Windows to Ubuntu migration (with windows VM)
Quote:
Originally Posted by
TheFu
The real hardware isn't seen under a VM, unless that specific component is passed through using IOMMU. You cannot pass the entire motherboard through - sorry.
...and this is why I started this thread. I was unaware of that. I'll attempt the Microsoft sign-in method first before exploring other options. Assuming Windows can't detect that it's in a VM, it should just treat the "new" mobo as a hardware change.
Re: Windows to Ubuntu migration (with windows VM)
Create a new VM. If you make it Linux, then we can guide you with commands to see the hardware presented; inxi -Fxxx is a start. Look that over. The motherboard, BIOS, CPU, GPU, networking, and disks are all virtual devices. The best performance comes when we choose "virtio" for the devices where that is possible. That goes for disks, NICs, and, if your hypervisor is new enough, the GPU.
Changing the CPU model is possible, but the only reason to do that would be to support automatic failover to another hypervisor. For years, I emulated Core2Duo CPUs even when the main system had a Core i5 - that was purely for failover needs. It wouldn't do to have new instructions used on a CPU without those instructions. I'm setting up a Ryzen failover system in the next few weeks, but the CPUs are 2xxx and 5xxx series, so I need to ensure only 2xxx instructions are used.
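In libvirt terms, "choosing virtio" and pinning a CPU model look roughly like this in the domain XML (virsh edit). A sketch only - the image path, network name, and CPU model below are placeholders for whatever your setup uses:

Code:
```xml
<!-- Fragments of a libvirt domain definition; paths and names are placeholders. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>core2duo</model>  <!-- lowest common CPU for failover -->
</cpu>
<devices>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/guest.qcow2'/>
    <target dev='vda' bus='virtio'/>     <!-- virtio disk -->
  </disk>
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>               <!-- virtio NIC -->
  </interface>
</devices>
```

Linux guests have the virtio drivers built in; Windows guests need the drivers from the virtio-win package before they can see virtio disks or NICs.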
VMs bring a whole new world of skills. Running Windows inside VMs brings even more. I've never seen Win10/11 - I don't like the license agreement, so it isn't allowed on the LAN. Not everyone can do that. But with older Windows licenses, swapping the motherboard was sufficient reason to require a new license, and pre-installed licenses couldn't be moved, just retail ones. When moving retail licenses, I've always had to speak with someone at MSFT to get the license approved on the new system. It was never automatic unless swapping a GPU or something trivial like that.
For virtual GPUs there are vmvga, cirrus, qxl, and now virtio as options. My hypervisors are too old to have virtio for the GPU, so I don't have any experience with it. Plus, I tend to access any graphical desktops over the network, not from the physical system running the hypervisor. For me, "the network is the computer" has been how I've done things for 25 years, and I'm not going to sit in the same room with a computer just for graphics.
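Switching between those video models is a one-element change in the domain XML. A sketch - the virtio model needs a reasonably recent QEMU/libvirt, and a Windows guest would also need the display driver from the virtio-win package:

Code:
```xml
<!-- Inside the <devices> section of the domain XML (virsh edit) -->
<video>
  <model type='virtio'/>  <!-- alternatives: 'qxl', 'vmvga', 'cirrus' -->
</video>
```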
An example of virtio for NICs: iperf3 can show the maximum transfer rates possible:
Code:
$ iperf3 -c hadar
Connecting to host hadar, port 5201
[  5] local 172.22.22.3 port 32786 connected to 172.22.22.6 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.01 GBytes  25.9 Gbits/sec    0   2.25 MBytes
[  5]   1.00-2.00   sec  2.91 GBytes  25.0 Gbits/sec    0   2.25 MBytes
[  5]   2.00-3.00   sec  3.29 GBytes  28.2 Gbits/sec    0   2.25 MBytes
[  5]   3.00-4.00   sec  3.17 GBytes  27.2 Gbits/sec    0   2.25 MBytes
[  5]   4.00-5.00   sec  3.31 GBytes  28.4 Gbits/sec    0   2.25 MBytes
[  5]   5.00-6.00   sec  3.20 GBytes  27.4 Gbits/sec    0   2.25 MBytes
[  5]   6.00-7.00   sec  3.15 GBytes  27.1 Gbits/sec    0   2.25 MBytes
[  5]   7.00-8.00   sec  3.10 GBytes  26.6 Gbits/sec    0   2.62 MBytes
[  5]   8.00-9.00   sec  2.80 GBytes  24.0 Gbits/sec    0   2.62 MBytes
[  5]   9.00-10.00  sec  2.90 GBytes  24.9 Gbits/sec    0   2.75 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  30.8 GBytes  26.5 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  30.8 GBytes  26.5 Gbits/sec                  receiver
That test was between a VM and the host OS, but it is the same for connections between 2 VMs on the same host too. 26.5 Gbps - not bad, right? The physical system has some Intel GigE NICs, but nothing close to 25 Gbps worth. Of course, the same test between a VM and a different physical system drops back to whatever the physical NIC can handle:
Code:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec  270             sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
That's typical GigE connectivity.
Re: Windows to Ubuntu migration (with windows VM)
Quote:
Originally Posted by
TheFu
That "simple" guide is about 10 pages of technical stuff - any wrong step and it won't work. Swapping out the supported kernel? Really? Also, who has multiple $250/ea GPUs in their system? I doubt that GT 710 will work, but I don't know.
OTOH, perhaps it is simple for some people.
The real hardware isn't seen under a VM, unless that specific component is passed through using IOMMU. You cannot pass the entire motherboard through - sorry.
It all works for a default Ubuntu install. I swapped the kernel of my own volition. However, if the hardware can't do it, then you are screwed. I also did it with an RX 570 passed to the VM and the built-in motherboard graphics for the host. Of that whole guide, all I had to do was make sure the CPU supported IOMMU, then enable it plus the vfio-pci.ids in /etc/default/grub. Didn't even bother to look at the other pages. Quite simple.
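For reference, the GRUB side of that amounts to something like the following. The vfio-pci IDs here are placeholders - substitute the vendor:device pairs that lspci -nn reports for the GPU (and its HDMI audio function) you want to hand to the VM:

Code:
```
# /etc/default/grub (AMD example; use intel_iommu=on on Intel CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt vfio-pci.ids=1002:67df,1002:aaf0"
```

Then run sudo update-grub and reboot so the GPU gets grabbed by vfio-pci instead of the normal driver.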
Re: Windows to Ubuntu migration (with windows VM)
Quote:
Originally Posted by
Tadaen_Sylvermane
... I also did it with an RX 570 passed to the VM and the built-in motherboard graphics for the host. Of that whole guide, all I had to do was make sure the CPU supported IOMMU, then enable it plus the vfio-pci.ids in /etc/default/grub. Didn't even bother to look at the other pages. Quite simple.
That's interesting. I have an AM4 board that has no IGP; however, I have a GT 710 and an AMD RX Vega 56. I'm guessing I'd reserve the Vega for the VM and the 710 for the host...but would that be permanent? Would I be able to use the Vega with Ubuntu when I'm not running the VM?
Re: Windows to Ubuntu migration (with windows VM)
GPU passthrough is a bit of a meh. Win 10 removed RemoteFX due to security issues, except maybe for Win 10 Pro. dGPU passthrough is something that may work. VirtualBox sort of supports GPU passthrough, but the support is better for Linux guests, and no surprise, Hyper-V GPU passthrough is better for Windows. You might just have to be a bit creative, or keep a physical box around for whenever you need that Windows GPU power.
Personally, I stay away from that side of things. There are plenty of companies running virtualized GPUs in servers for end users, so it must be possible.
For the migration, Macrium Reflect Free is a good option, as it will take an image of your drive, shrinking it to just the size of the installed data. You can then create a rescue disk to boot your VM with and pull the image in over a mounted drive or network share.
Re: Windows to Ubuntu migration (with windows VM)
I apologize if my last reply was a bit rude or condescending. Bad day. At any rate, it is simple for a person who has been rolling Linux / open source for a while. If not, then it can be a risk, yes. Make sure to dot your i's and cross your t's, so to speak. The guide has to be read as generalizations, not as steps for your specific hardware, as they suggest. If your hardware is capable, then it will most likely work.
I should say, though, that recently I tried again with the VM using a passed-through SSD and hugepages (which need to be enabled outside of the guide), and the performance was better; however, it did not match a bare-metal run. In my case, with World of Warcraft, I saw a good 20-30 fps lower performance in the latest expac's main hub vs bare-metal Windows. Still 40+, but when you are used to 70-90 on Windows, it's a big loss.
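For anyone following along, the hugepages part is roughly this. A sketch only - 2 MiB pages assumed, and you size the page count to the VM's RAM (the filename below is just an example):

Code:
```
# /etc/sysctl.d/80-hugepages.conf
# Reserve 4096 x 2 MiB pages = 8 GiB, enough for an 8 GiB guest
vm.nr_hugepages = 4096
```

Apply with sudo sysctl --system (or reboot), then tell libvirt to use them by adding <memoryBacking><hugepages/></memoryBacking> to the domain XML.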
My apologies again.