I have used Windows as my primary OS ever since I was a teenager because I am a gamer. Ironically, my gaming hobby also introduced me to my programming/Linux hobby (and now career) which are always at odds with one another. I can either game, which has an unfortunate requirement of needing Windows, or hack, which usually requires Linux. This meant I had to constantly dual boot between Windows and Linux or suffer with a virtual machine with poor video performance. Not ideal.

However, a few years ago the Linux Kernel Virtual Machine (KVM) project enabled near native Windows guest performance by passing through a host video card to a guest. I was extremely interested in trying this out but was deterred by a lack of proof that SLI or CrossFire worked since my other, other hobby is also being a hardware enthusiast.

All that changed when I discovered a post authored by Duelist detailing that he had successfully got his XDMA Radeons to run in CrossFire! I gave it a try myself and found that documentation was sparse and mostly geared towards Arch Linux. I prefer Ubuntu and couldn’t find anything modern that would help me. In the end I was successful and am extremely happy with the performance of my box. Hopefully this guide will help others who have multiple Radeons that they wish to CrossFire!


Note: Large parts of this guide have been pieced together from various tutorials and user posts from around the web. The Puget Systems Ubuntu 14.04 + KVM guide is a great Ubuntu starting point when paired with QEMU-KVM on Arch Linux Guide. If you get stuck the Arch Linux PCI passthrough via OVMF wiki, the Arch Linux forums, or the /r/VFIO wiki are other valuable references.

System Requirements

  • CPU VT-d support
  • Motherboard can enable VT-d support
  • 2+ discrete Radeon GPUs with XDMA CrossFire support and a UEFI-capable VBIOS
  • Intel iGPU or a third GPU for the host
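Before touching the BIOS, you can check whether your CPU advertises hardware virtualization at all. A minimal sketch using /proc/cpuinfo:

```shell
# A matching flag means the CPU supports hardware virtualization
# (Intel reports "vmx"; AMD reports "svm").
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag found -- check the CPU model and BIOS settings"
fi
```

Note this only checks VT-x/AMD-V; VT-d (the IOMMU) is a separate chipset feature that we verify after enabling it below.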

BIOS

Enable VT-d virtualization in your BIOS and enable your integrated GPU.

Ubuntu Setup

If you want virt-manager, install Ubuntu 16.10 or 17.04: they ship libvirt 2.1, which is considerably newer than what Ubuntu 16.04 LTS provides.

After installation you will have the stock kernel, but it won’t be virtualization aware. We need to enable the VFIO (Virtual Function I/O) kernel modules to allow us to pass full devices to the guest machine.

sudo gedit /etc/modules

Add the following to the bottom of the file:

# Modules required for VFIO
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel 

Next we will need to enable Intel IOMMU via the boot loader:

sudo gedit /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"
sudo update-grub
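After the next reboot, it’s worth confirming that the kernel actually picked up the new parameter. A small check, assuming a standard /proc layout:

```shell
# Verify the kernel booted with the IOMMU parameter we just added.
if grep -q 'intel_iommu=on' /proc/cmdline; then
    echo "intel_iommu is enabled on the kernel command line"
else
    echo "intel_iommu missing -- re-check /etc/default/grub and re-run update-grub"
fi

# The kernel log should also show DMAR/IOMMU initialization messages.
dmesg | grep -i -e DMAR -e IOMMU || echo "no IOMMU messages found"
```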

Now we will need to identify the devices that we wish to pass through. In this case I’m passing through my R9 290s and their HDMI audio devices, which have the vendor:device IDs 1002:67b1 and 1002:aac8. (The first column below is each function’s PCI bus address, which we’ll need later for the QEMU command.)

lspci -nn | grep AMD
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]
02:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1]
02:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]
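It’s also worth checking how the kernel has grouped these devices, since every device in an IOMMU group must be passed through together. A small sketch along the lines of the usual Arch wiki approach:

```shell
#!/bin/bash
# Print every IOMMU group and the devices inside it. Each GPU you intend
# to pass through should sit in its own group, or share one only with its
# companion HDMI audio function.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```

If the output is empty, the IOMMU isn’t enabled yet; finish the GRUB step above and reboot first.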

Add this to a special vfio.conf that modprobe will read when the vfio-pci module is loaded:

sudo gedit /etc/modprobe.d/vfio.conf

Add the following to the file, replacing the vendor:device IDs with your own as needed:

# Ensure that the vfio-pci module gets loaded before any video drivers
softdep radeon pre: vfio-pci
softdep amdgpu pre: vfio-pci
softdep nouveau pre: vfio-pci
softdep drm pre: vfio-pci

# vfio-pci will bind to devices with these vendor:device IDs
options vfio-pci ids=1002:67b1,1002:aac8

Blacklist the opensource radeon driver from starting:

sudo gedit /etc/modprobe.d/blacklist.conf

Add the following to the file

# Drivers for devices that are passed to VFIO
blacklist radeon

Rebuild the initramfs so the new module configuration is applied at early boot:

sudo update-initramfs -u

Now reboot your machine and verify that the video cards have been stubbed out via vfio-pci.

dmesg | grep vfio
[    7.206251] vfio_pci: add [1002:67b1[ffff:ffff]] class 0x000000/00000000
[    7.250264] vfio_pci: add [1002:aac8[ffff:ffff]] class 0x000000/00000000
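You can also confirm the binding per device through sysfs. A coreutils-only sketch (substitute your own PCI addresses from the lspci output above):

```shell
# Show which kernel driver owns each passed-through function. After the
# reboot, every one of these should report vfio-pci.
for dev in 0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:02:00.1; do
    link="/sys/bus/pci/devices/$dev/driver"
    if [ -e "$link" ]; then
        echo "$dev -> $(basename "$(readlink "$link")")"
    else
        echo "$dev -> no driver bound"
    fi
done
```

If a function still shows radeon here, re-check the softdep lines and the blacklist, then rebuild the initramfs again.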

QEMU Setup

Next install the Open Virtual Machine Firmware (OVMF) and QEMU so that we can run a virtual machine.

sudo apt-get install ovmf qemu-kvm

OVMF is a port of the TianoCore firmware that enables UEFI support for virtual machines. It’s the firmware needed to boot our Windows guest, and it requires a variable store that we’re going to create now:

sudo mkdir -p /var/lib/libvirt/qemu/nvram
sudo cp /usr/share/OVMF/OVMF_VARS.fd /var/lib/libvirt/qemu/nvram/windows_VARS.fd

Next up, create your Windows image for the guest to use. If you’re interested in why these settings are used this should help clear things up.

sudo qemu-img create -f qcow2 -o preallocation=falloc /var/lib/libvirt/images/windows10.qcow2 80G

QEMU Test Drive

Try running the VM with plain QEMU after altering the necessary fields to match your machine’s specification: the number of cores/threads, the amount of memory, and the paths to the installation media. I wouldn’t suggest changing much else. Also, please note that Windows 10 will BSOD unless QEMU emulates a core2duo-class processor.

In this example I have also mounted the Windows virtio drivers which you should download beforehand.

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 8192 -cpu core2duo,+nx,kvm=off \
-smp 4,sockets=1,cores=4,threads=1 \
-object iothread,id=iothread1 -object iothread,id=iothread2 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/var/lib/libvirt/qemu/nvram/windows_VARS.fd \
-drive if=virtio,format=qcow2,file=/var/lib/libvirt/images/windows10.qcow2,cache=writeback \
-device ioh3420,port=0xe0,chassis=1,id=pcie.1,bus=pcie.0,multifunction=on,addr=0x1c \
-device ioh3420,port=0x18,chassis=2,id=pcie.2,bus=pcie.0,multifunction=on,addr=0x3 \
-device vfio-pci,host=01:00.0,bus=pcie.1,addr=00.0,multifunction=on \
-device vfio-pci,host=01:00.1,bus=pcie.1,addr=00.1 \
-device vfio-pci,host=02:00.0,bus=pcie.2,addr=00.0,multifunction=on \
-device vfio-pci,host=02:00.1,bus=pcie.2,addr=00.1 \
-boot menu=on \
-parallel null \
-serial null \
-vga qxl \
-rtc base=localtime,clock=host \
-drive file=/path/to/virtio-win-0.1.126.iso,index=3,media=cdrom \
-cdrom /path/to/windows10.iso

The trick to getting CrossFire working is in the ioh3420 devices in the command above: each GPU gets its own emulated PCI-e root port.

If you were able to run this command, you should have seen a TianoCore UEFI boot screen on your Radeons. If so, you successfully completed the first step of GPU passthrough. Unfortunately this doesn’t mean things will work right away; you will need the QXL emulated video card as a substitute until everything works.

Install Windows on the Guest using QXL Video

I had to set up my guest at this stage because a bug prevents Windows 7 guests from booting QEMU KVMs under libvirt. I installed Windows 7 first (then upgraded to 8 and then 10) because I don’t own a Windows 10 retail copy and could still upgrade to 10 for free from a valid Windows 7 or 8 license. You might want to skip this stage and go directly on to the libvirt section to avoid having to re-activate your OS (it seems libvirt changes something that Windows’ activation system doesn’t like).

Fully install Windows 10 and get it working before you install the Radeon drivers. CrossFire just worked for me after this point. If you run into weird GPU issues, a reboot will usually fix the problem. Please note that I was unable to get CrossFire working in Windows 7, so it’s likely that only Windows 8/10 are supported.

(Optional: 16.10 and above only) libvirt and virt-manager Setup


Now we’re going to get this working with libvirt and virt-manager in order to allow our VM to autostart on boot.

We’re going to roughly follow the Ubuntu KVM guide. Additional information on virt can be found on the Ubuntu KVM Walkthrough.

sudo apt-get install virt-manager ubuntu-vm-builder bridge-utils
sudo adduser `id -un` libvirtd

Next up we’re going to create an XML file that defines the libvirt domain.

sudo gedit /etc/libvirt/qemu/windows10.xml

Alter the following XML to match what you did above for QEMU and paste it in. The necessary bits that load the unique PCI-e root port switches live in the <qemu:commandline> block at the end.

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
 <name>windows10</name>
 <uuid>67f3226e-8303-4437-8c20-b8b03d301a77</uuid>
 <memory unit='GiB'>8</memory>
 <currentMemory unit='GiB'>8</currentMemory>
 <vcpu placement='static'>4</vcpu>
 <iothreads>2</iothreads>
 <cputune>
   <vcpupin vcpu='0' cpuset='0'/>
   <vcpupin vcpu='1' cpuset='1'/>
   <vcpupin vcpu='2' cpuset='2'/>
   <vcpupin vcpu='3' cpuset='3'/>
   <emulatorpin cpuset='0-3'/>
   <iothreadpin iothread='1' cpuset='0-1'/>
 </cputune>
 <os>
   <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>
   <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
   <nvram>/var/lib/libvirt/qemu/nvram/windows_VARS.fd</nvram>
   <bootmenu enable='yes'/>
 </os>
 <features>
   <acpi/>
   <apic/>
  <!-- Delete the hyperv element if you're installing Windows 7 -->
   <hyperv>
     <relaxed state='on'/>
     <vapic state='on'/>
     <spinlocks state='on' retries='8191'/>
   </hyperv>
   <kvm>
     <hidden state='on'/>
   </kvm>
 </features>
 <cpu mode='custom' match='exact'>
   <model fallback='allow'>core2duo</model>
   <topology sockets='1' cores='4' threads='1'/>
   <feature policy='require' name='nx'/>
 </cpu>
 <clock offset='localtime'>
   <timer name='rtc' tickpolicy='catchup'/>
   <timer name='pit' tickpolicy='delay'/>
   <timer name='hpet' present='no'/>
   <timer name='hypervclock' present='yes'/>
 </clock>
 <on_poweroff>destroy</on_poweroff>
 <on_reboot>restart</on_reboot>
 <on_crash>restart</on_crash>
 <pm>
   <suspend-to-mem enabled='no'/>
   <suspend-to-disk enabled='no'/>
 </pm>
 <devices>
   <emulator>/usr/bin/qemu-system-x86_64</emulator>
   <disk type='file' device='disk'>
     <driver name='qemu' type='qcow2' cache='writeback'/>
     <source file='/var/lib/libvirt/images/windows10.qcow2'/>
     <target dev='vda' bus='virtio'/>
     <boot order='1'/>
     <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
   </disk>
   <disk type='file' device='cdrom'>
     <driver name='qemu' type='raw'/>
     <source file='/path/to/virtio-win-0.1.126.iso'/>
     <target dev='sda' bus='sata'/>
     <readonly/>
     <address type='drive' controller='0' bus='0' target='0' unit='0'/>
   </disk>
   <disk type='file' device='cdrom'>
     <driver name='qemu' type='raw'/>
     <source file='/path/to/windows10.iso'/>
     <target dev='sdb' bus='sata'/>
     <readonly/>
     <address type='drive' controller='0' bus='0' target='0' unit='1'/>
   </disk>
   <controller type='sata' index='0'>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
   </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:fc:e3:35'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </sound>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </memballoon>
 </devices>

 <qemu:commandline>
   <qemu:arg value='-device'/>
   <qemu:arg value='ioh3420,port=0xe0,chassis=1,id=pcie.2,bus=pcie.0,multifunction=on,addr=0x1c'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='ioh3420,port=0x18,chassis=2,id=pcie.3,bus=pcie.0,multifunction=on,addr=0x3'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='vfio-pci,host=01:00.0,bus=pcie.2,addr=00.0,multifunction=on'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='vfio-pci,host=01:00.1,bus=pcie.2,addr=00.1'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='vfio-pci,host=02:00.0,bus=pcie.3,addr=00.0,multifunction=on'/>
   <qemu:arg value='-device'/>
   <qemu:arg value='vfio-pci,host=02:00.1,bus=pcie.3,addr=00.1'/>
 </qemu:commandline>
</domain>

Now load up this domain by running:

virsh define /etc/libvirt/qemu/windows10.xml

Before starting the VM, we unfortunately need to allow libvirt to access our VFIO devices. Identify the paths to your devices:

ls /dev/vfio/
15  16  vfio
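Those numbers are IOMMU group IDs. If it isn’t obvious which is which, you can map each GPU back to its group through sysfs (addresses as reported by lspci earlier):

```shell
# Print the IOMMU group each GPU belongs to; these should match the
# numbered entries under /dev/vfio/.
for dev in 0000:01:00.0 0000:02:00.0; do
    link="/sys/bus/pci/devices/$dev/iommu_group"
    if [ -e "$link" ]; then
        echo "$dev -> group $(basename "$(readlink "$link")")"
    else
        echo "$dev -> no IOMMU group (check the device address)"
    fi
done
```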

We will now need to edit the qemu configuration for libvirt.

sudo gedit /etc/libvirt/qemu.conf

Change the user that the virtual machine runs as to root, and extend cgroup_device_acl to include the VFIO paths we found above.

user = "root"
group = "root"
cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
   "/dev/vfio/15", "/dev/vfio/16"
]

Next we’ll need to modify the AppArmor profile for libvirt to give it access to our VFIO devices. The KVM Manage Ubuntu guide was useful here.

sudo gedit /etc/apparmor.d/abstractions/libvirt-qemu

Add the following to the file, updating your VFIO paths as needed:

  # VFIO access
  /dev/vfio/* rw,

  # USB passthrough
  /dev/bus/usb/[0-9]*/[0-9]* rw,

  # Unknown 16.10
  /proc/[0-9]*/task/[0-9]*/comm rw,
  /run/udev/data/* r,
  /etc/host.conf r,
  /etc/nsswitch.conf r,
  capability wake_alarm,

Restart AppArmor and virtlogd:

sudo service apparmor restart
sudo service virtlogd restart

Now try to start the Windows 10 guest using virt-manager:

virt-manager

Connect to QEMU/KVM and hit the play button. It’s also somewhat convenient to do USB passthrough here.

If things aren’t working, keep an eye on dmesg and extend the AppArmor profile further to grant libvirt any additional access it needs.

Benchmark

My machine takes only a minor hit of about 2 frames per second versus bare metal in benchmarks. If you would like to analyze my benchmark results further, take a look here: http://www.3dmark.com/compare/fs/11182166/fs/8229002

Windows 10 Guest Autostart

If everything was successful you should now be able to autostart your virtual machine on boot with:

virsh autostart windows10
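You can then confirm the flag was recorded. A quick check (this assumes virsh is talking to the system libvirtd, so run it as root or as a member of the libvirt group):

```shell
# "Autostart: enable" in the domain info confirms the setting stuck.
virsh dominfo windows10 | grep -i autostart || echo "autostart not set (or domain not defined)"
```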

(Optional) Next Steps