Hi

I'm trying to set up GPU passthrough from my Ubuntu 16.04 box to a Windows VM, which "technically" is working. I have used a combination of guides to do this, primarily this one.

Currently the Windows VM is outputting video via the passed-through GPU (ATI R9 290XO). My problem is within Windows: when I try to install the AMD drivers for the card, the VM crashes and reboots. I'm assuming Windows blue screens, but I can't actually see it because the screen goes black at the time of the crash (I'm assuming this is to do with Windows removing its default driver for the card at that point).

I know there isn't any issue with the drivers themselves, as I used them without problems when I was running Windows natively. I have also tried reinstalling different versions of Windows within the VM, just in case the Windows install itself was being flaky. This makes me think that the problem is actually the method I am using to pass the GPU through.

Below is a copy of the script I'm using to start the VM:

Code:
#!/bin/bash

configfile=/etc/gaming-vm-pci.cfg

# Unbind a PCI device from its current driver (if any) and register its
# vendor/device ID with vfio-pci so vfio-pci claims it.
vfiobind() {
    dev="$1"
    vendor=$(cat "/sys/bus/pci/devices/$dev/vendor")
    device=$(cat "/sys/bus/pci/devices/$dev/device")
    if [ -e "/sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi
    echo "$vendor" "$device" > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci

# Bind every non-commented PCI address listed in the config file.
while read -r line; do
    echo "$line" | grep -q '^#' && continue
    vfiobind "$line"
done < "$configfile"

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 6114 -cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/seabios/bios.bin \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \
-usb -usbdevice host:1b1c:1b09 \
-usb -usbdevice host:04d9:a067 \
-usb -usbdevice host:1e7d:3750 \
-drive file=/var/lib/libvirt/images/Gaming-os.img,id=disk-os,format=raw,if=none,cache=none -device ide-hd,bus=ide.0,drive=disk-os \
-drive file=/data/virt/Gaming-data.img,id=disk-data,format=raw,if=none,cache=none -device ide-hd,bus=ide.1,drive=disk-data \
-device e1000,netdev=tunnel -netdev tap,id=tunnel,ifname=vnet0 \
-vga none

exit 0
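For reference, /etc/gaming-vm-pci.cfg just lists the full PCI addresses to hand over to vfio-pci, one per line, along the lines of:

Code:
# devices to bind to vfio-pci (the GPU and its HDMI audio function)
0000:01:00.0
0000:01:00.1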
I have read about NVIDIA actually using VM detection within their drivers, causing Code 43 errors within Windows if the GPU is detected in a VM; however, I haven't been able to find much information about this problem with AMD cards.
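
From what I've gathered (and I'm not sure any of it applies to AMD cards), the usual suggestion for the NVIDIA case is to add kvm=off to the -cpu line, and on newer QEMU builds apparently also to spoof the Hyper-V vendor ID, something like:

Code:
-cpu host,kvm=off
# or, on newer QEMU (the vendor id value here is just an example):
-cpu host,kvm=off,hv_vendor_id=123456789ab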

Based on that, I have also tried modifying the qemu invocation in my start script to this (the rest of the script is unchanged):

Code:
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 6114 -cpu host,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-bios /usr/share/seabios/bios.bin \
-device vfio-pci,host=01:00.0,x-vga=on \
-device vfio-pci,host=01:00.1 \
-usb -usbdevice host:1b1c:1b09 \
-usb -usbdevice host:04d9:a067 \
-usb -usbdevice host:1e7d:3750 \
-drive file=/var/lib/libvirt/images/Gaming-os.img,id=disk-os,format=raw,if=none,cache=none -device ide-hd,bus=ide.0,drive=disk-os \
-drive file=/data/virt/Gaming-data.img,id=disk-data,format=raw,if=none,cache=none -device ide-hd,bus=ide.1,drive=disk-data \
-device e1000,netdev=tunnel -netdev tap,id=tunnel,ifname=vnet0 \
-vga none

This time when I attempt to install the AMD drivers Windows doesn't crash, but I lose all output from the card/Windows and am just presented with a black screen.

Can anyone see something I'm missing?
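
If it would help, I can post host-side info too, e.g. the output of:

Code:
# how the card and its audio function are currently bound on the host
lspci -nnk -s 01:00
# any vfio-related kernel messages
dmesg | grep -i vfio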

Also, what does the "kvm=off" option I added actually do?


Thanks
Dan