Creating a Windows 10 VM on the AMD Ryzen 9 3900X using Qemu 4.0 and VGA Passthrough

Last updated: May 25, 2020

Introduction

I’ve already written a detailed tutorial on VGA passthrough based on QEMU 2.11. Time has passed and more modern distributions like Manjaro, Ubuntu 19.10 or Pop!_OS come with QEMU 4.0 or newer.

A lot has happened since version 2.11. QEMU 4.0 includes numerous changes and improvements such as trim support in the virtio-blk driver, pcie-root-port with PCIe 4.0 support (with Q35-4.0 machine type), as well as improved audio.

The downside is that with these improvements came changes in the QEMU syntax. Today most tutorials use the “Virtual Machine Manager” for configuration within a convenient GUI. Recent versions of virt-manager (that’s the name of the package) include an XML editor. Unfortunately Virtual Machine Manager, a front-end to “libvirt”, doesn’t have much documentation.

The tutorial below is in part inspired by Bryan’s excellent GPU passthrough tutorial. There are, however, some differences. I suggest having a look at both and deciding what’s best for you.

Disclaimer

All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, recentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.

You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.

Documentation

Here is an overview of the documentation for the tools we are going to use:

    • https://libvirt.org/ – libvirt is the toolkit used by Virtual Machine Manager. This is where you look for help when you need to edit the XML configuration (and YES you will have to!). Here are some specific links:
      • XML Format – objects such as domains, networks, storage, etc. are configured using XML documents which are described here
      • KVM/QEMU hypervisor driver – example qemu/kvm domain configurations, qemu command passthrough, converting from QEMU args to domain XML and vice versa from XML to QEMU (you may want that).
      • libvirt wiki – community contributed documentation. A good resource for solutions to specific tasks and problems. Take for example networking.
      • Releases – a list of libvirt releases, along with an overview of the changes.
      • virt tools blog planet – a place to check if you want to dig deeper into virtualization and what’s cooking. This blog is maintained by the developers. Under “Subscriptions” is a list of individual blogs by developers that provides further updates and information.
    • QEMU – this is the documentation for QEMU version 4.2 (as of this writing). It should be relevant for 4.0 too, as there haven’t been many changes. See also below under QEMU 4.2.50 Documentation.
    • https://www.qemu.org/ – the main QEMU homepage contains a list of releases and documents the changes.
    • QEMU 4.2.50 Documentation – this is the latest QEMU documentation (QEMU 4.2.50 as of this writing).
    • The VFIO and GPU Passthrough Beginner’s Resource – a list of resources for VGA passthrough.

Hardware Configuration

The following tutorial is based on the hardware listed below. It will likely work with other AMD Ryzen processors, perhaps with other AMD families of processors, and it may even work with Intel processors.

Here is the hardware configuration:

    • AMD Ryzen 9 3900X CPU
    • Gigabyte X570 Aorus Pro motherboard, upgraded to latest BIOS F12e
    • 64 GB RAM
    • Samsung SSD 970 EVO Plus 1TB NVMe drive for guests, set up as LVM drive
    • Samsung SSD 970 EVO Plus 500 GB NVMe drive for the host
    • Gigabyte Nvidia Geforce GTX 970 GPU for the guest
    • PNY Nvidia Quadro 2000 card for host, updated to support UEFI using this BIOS (see also here)
    • A bunch of HDD drives using LVM – around 11 TB internal storage
    • Asus Xonar Essence STX PCIe sound card.

Software

For the host:

    • Pop!_OS Linux based on Ubuntu 19.10
    • QEMU 4.0.0
    • libvirt 5.4.0
    • Virtual Machine Manager (virt-manager) 2.2.1
    • Linux kernel 5.3.0

The guest OS:

    • Windows 10

VM Resources

Before we start to set up a virtual machine, we need to plan the resources we want to allocate to our Windows VM. My specific use case is photo processing using Adobe Lightroom, Photoshop and other tools. This is why I decided to give the Windows VM all the resources I can while leaving enough RAM for the host to avoid memory swapping. Here is a breakdown of my VM resources:

    • Windows uses an LVM volume on a 1 TB NVMe drive (the other option would be to install Windows directly on the drive and pass it through to the VM)
    • 48 GB RAM (out of a total of 64GB) backed by hugepages
    • 12 cores / 24 threads = 24 vCPUs – yes, I’m giving everything to the VM.

Most of you will NOT have the same requirements. For this tutorial I am going to assign the following resources to the Windows VM:

    • A QCOW2 file, or whatever you prefer as storage, to install the Windows VM on
    • 16 GB RAM backed by hugepages
    • 6 cores / 12 threads = 12 vCPUs.

Of course, you can and should adjust these resources to your requirements.

Tutorial

Setting up the Host for VGA Passthrough

We need to add or modify some settings for the host before we can start with the actual VM installation.

Software Packages and virtio Drivers

Install the required packages on your Linux host:

sudo apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients  virt-manager ovmf
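
As a quick sanity check, you can verify that your CPU exposes its hardware virtualization extension (SVM on AMD, VMX on Intel) – a count greater than zero means you are good to go:

egrep -c '(vmx|svm)' /proc/cpuinfo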

Download the Windows 10 ISO (you’ll need a valid license to install):

https://www.microsoft.com/en-us/software-download/windows10ISO

Download the virtio driver ISO to be used with the Windows installation from https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html. Below are the direct links to the ISO images:

Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso

Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

I chose the latest driver ISO.

Make your user a member of the kvm and libvirt groups (not sure this is necessary):

sudo usermod -a -G kvm myusername

sudo usermod -a -G libvirt myusername
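
The group changes take effect at your next login. You can verify the membership with:

groups myusername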

BIOS Settings

Reboot the PC and enter the BIOS – usually via DEL, F2, F12 or whatever your motherboard manual tells you. You must NOT skip this step. The screenshots below show the BIOS setup procedure for the Gigabyte X570 Aorus Pro motherboard. YMMV.

Go to the “Advanced Mode” screen. Select the “Tweaker” tab and enter “Advanced CPU Settings”:

Gigabyte X570 Aorus Pro “Advanced Mode” BIOS screen

Enable “SVM Mode”:

Select the “Settings” tab and go to “Miscellaneous”:

Enable “IOMMU”. (At this point you may want to enter the “Trusted Computing” sub-menu and disable that nonsense.):

Important: You also need to enable “ACS” and “AER” under Settings -> AMD CBS.

Select the “System Info.” tab and check your “BIOS Version” – it should be F11 or the newer F12e. Older versions, especially those prior to F10, are broken or no good for VFIO passthrough. If you do have an older BIOS version, read the instructions on how to flash it (by using a FAT16 or FAT32 USB drive with the BIOS file on it – beware of naming restrictions):

When done with the adjustments, save & exit and reboot.

We need to tell Linux about IOMMU. Pop!_OS 19.10 – the Linux host OS – uses the systemd bootloader. There is a tool called “kernelstub” that allows us to add/modify kernel parameters for systemd. In a terminal window, enter:

sudo kernelstub -o "amd_iommu=on iommu=pt hugepages=8192"

Note: If your system uses grub2, edit the /etc/default/grub file and add the following to the “GRUB_CMDLINE_LINUX_DEFAULT” line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt hugepages=8192"

Then run sudo update-grub.

Here is a short explanation of these parameters:

amd_iommu=on is the kernel parameter that enables IOMMU on AMD CPUs.

The equivalent for Intel CPUs is intel_iommu=on.

iommu=pt puts the IOMMU into pass-through mode for devices used by the host, bypassing DMA translation, which may improve performance.

hugepages=8192 tells the kernel to set aside 8192 hugepages. Important: setting hugepages is optional! On this system, each hugepage is the equivalent of 2 Megabytes; 8192 hugepages therefore correspond to 16 Gigabytes of RAM. Once these static hugepages are configured, they cannot be claimed by the host. You should adjust these numbers to fit the memory you want to assign to your VM. Tip: Use multiples of 1024!

Note: Different platforms can have different hugepage sizes. On this system you can define 2 Megabyte or 1 Gigabyte hugepages, or a mix of both. For more on that, see Configure Hugepages (background information is available here).

Now reboot again!
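
After the reboot, you can confirm that the kernel picked up the new parameters – the output should contain the amd_iommu, iommu and hugepages entries from above:

cat /proc/cmdline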

Let’s see if it worked. Open a terminal and enter:

dmesg | grep AMD-Vi

user@mypc:~$ dmesg | grep AMD-Vi
[ 3.697093] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 3.702890] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 3.702890] pci 0000:00:00.2: AMD-Vi: Extended features (0x58f77ef22294ade):
[ 3.702892] AMD-Vi: Interrupt remapping enabled
[ 3.702893] AMD-Vi: Virtual APIC enabled
[ 3.702893] AMD-Vi: X2APIC enabled
[ 3.702983] AMD-Vi: Lazy IO/TLB flushing enabled

AMD-Vi and IOMMU are now enabled and supported.
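
If you reserved hugepages, you can verify that reservation as well – HugePages_Total should show the 8192 pages configured above:

grep Huge /proc/meminfo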

Bind Passthrough GPU to VFIO Driver

In this tutorial I use 2 separate GPUs: one for the host; a second one for the guest.

Hardware tip: The Gigabyte X570 Aorus Pro motherboard lets you select the “Initial Display Output”, i.e. the GPU used by the host. Another nice feature is “PCIeX16 Bifurcation” to determine how the bandwidth of the PCIeX16 slot is divided. (Some motherboards require the host GPU to be in slot 1.)

Since I’m using 3 PCIe devices (2 GPUs and a sound card), I’ve divided the PCIe bandwidth into 8x4x4. PCIe slot 1 with x8 bandwidth holds the passthrough GPU, and PCIe slot 2 with x4 bandwidth holds the host GPU.

We need to determine the PCI bus IDs for our graphics cards. In a terminal window, enter:

lspci | grep VGA

user@mypc:~$ lspci | grep VGA
0b:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
0c:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)

The GPU to pass through (the GeForce GTX 970) is on bus 0b:00.0. Most GPUs have additional devices onboard, such as an audio device, USB, etc. We must pass all of these devices through to the guest. To determine the devices associated with our passthrough GPU, use the following command:

lspci -nn | grep 0b:00.

user@mypc:~$ lspci -nn | grep 0b:00.
0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
0b:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

In this example, the GPU has only one additional device, an audio device on 0b:00.1.

All devices within the same IOMMU group must be passed to the VM! You’ll find more information on that – as well as exceptions – in my IOMMU Groups – What You Need to Consider post.

Let’s have a look at our IOMMU groups and how PCI devices are split into these groups:

for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

Here is the relevant part of the output:

/sys/kernel/iommu_groups/26/devices/0000:06:04.0
/sys/kernel/iommu_groups/27/devices/0000:07:00.0
/sys/kernel/iommu_groups/28/devices/0000:0b:00.0
/sys/kernel/iommu_groups/28/devices/0000:0b:00.1
/sys/kernel/iommu_groups/29/devices/0000:0c:00.0
/sys/kernel/iommu_groups/29/devices/0000:0c:00.1

The graphics card and its two devices (VGA and audio) are within the same IOMMU group 28, and the group contains no additional devices. Perfect!

Tip: Copy the PCI bus IDs for your graphics card – 0000:0b:00.0 and 0000:0b:00.1 in the example above – into a .txt file since we need them in the next step!
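
If you prefer a listing that also shows the device names, the following variant of the loop above (a common snippet – adjust to taste) resolves each entry via lspci:

for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/}; n=${n%%/*}; printf 'IOMMU group %s: ' "$n"; lspci -nns "${d##*/}"; done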

We want to make sure that the passthrough GPU binds to the VFIO driver when the PC boots. For more explanations and solutions for common issues, see “Explaining CSM, efifb=off, and Setting the Boot GPU Manually”.

It’s time to install the script that will bind the passthrough GPU to the vfio-pci dummy driver:

sudo nano /etc/initramfs-tools/scripts/init-top/vfio-override.sh

and copy/paste the following script into the file:

#!/bin/sh
# Bind the passthrough GPU functions to vfio-pci before the regular drivers load.
# Replace these PCI bus IDs with the ones for YOUR passthrough GPU!
DEVS="0000:0b:00.0 0000:0b:00.1"

# Tell the kernel to use vfio-pci for these devices instead of the default driver
for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

Important: Don’t forget to replace the 0000:0b:00.0 0000:0b:00.1 PCI bus IDs with the ones you determined for your passthrough GPU. The leading “0000” is the PCI domain. On some systems it may differ (e.g. 0001).

Make the vfio-override.sh file executable:

sudo chmod u+x /etc/initramfs-tools/scripts/init-top/vfio-override.sh

Windows 10 releases 1803 and newer require the following option:

echo 'options kvm ignore_msrs=1' | sudo tee -a /etc/modprobe.d/kvm.conf
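
After the next reboot (or after reloading the kvm module) you can verify the setting – it should print Y:

cat /sys/module/kvm/parameters/ignore_msrs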

To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:

sudo nano /etc/initramfs-tools/modules

Copy the following to the end of the modules file (it’s important to keep the order):

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net

Save and close the file.

For the above changes to take effect, enter:

sudo update-initramfs -u

followed by:

sudo kernelstub

to update the boot entries in the EFI folder (this extra step works around a Pop!_OS bug).

Reboot once again.
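
After the reboot, verify that the vfio-pci driver has claimed the passthrough GPU (bus ID from the example above – substitute yours). Both functions should report “Kernel driver in use: vfio-pci”:

lspci -nnk -s 0b:00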

Create Windows 10 VM Configuration using the Virtual Machine Manager GUI

Open the Virtual Machine Manager GUI.

Connect to QEMU/KVM. Then create a new Virtual Machine by pressing the top left icon (screen with a triangular start button and a star):

Virtual Machine Manager main screen

Select “Local install media…” and click [Forward]:

Create a new virtual machine

Browse to the location of your Windows 10 ISO and select it. Tick “Automatically detect from the installation media…”. Then press [Forward]:

Select Windows 10 ISO for installation

Specify the amount of memory to assign to the VM, in Megabytes. 8 GB should be the minimum for a Windows 10 VM. If you reserved 16 Gigabytes of hugepages memory earlier in this tutorial (that is 8192 hugepages), change the “Memory” value to 16384:

Memory and vCPUs

The following screen allows you to choose from different options.

    • If you want to pass through a disk (more specifically, a disk controller) to Windows, untick “Enable storage for this virtual machine”. At a later step you will be able to configure your passthrough storage device. This option comes in handy when you already have Windows installed on a disk, and your motherboard/CPU combination allows you to pass through the disk controller / storage device. The disk must be connected to a controller that has its own IOMMU group.
      In my case, the Gigabyte X570 Aorus Pro / Ryzen 3900X combo has two NVMe slots, each with a controller having its own IOMMU group.
      Note: IOMMU groupings in general are often determined by the motherboard BIOS. Sometimes a new BIOS can bring improvements, but don’t bet on it. There are examples where BIOS upgrades broke passthrough entirely.
    • In all other cases, tick “Enable storage for this virtual machine“. You now have the following choices:
      • “Create a disk image for the virtual machine”, which will be located at the default location /var/lib/libvirt/images/. This freaks me out every time I think of it.
      • “Select or create custom storage” to choose a more sane location. Use this option to select or create a storage pool and/or storage volume (for example Qcow2, RAW, or LVM).

Unless you pass through a disk/storage controller, select as shown below and press [Manage…]:

Enable / specify the storage for the virtual machine

If you chose “Select or create custom storage”, you will be presented with the following screen:

Choose storage volume

You can select an existing storage pool (the “Default” pool is at the already mentioned /var/lib/libvirt/images), or create a new pool at the location of your choice. To create a new storage pool, press the [+] button at the bottom left. In the following window, type a name for the new pool. Then select the “Type” from the drop-down menu. There is a long list of choices, but home users will most likely choose one of the following:

    1. dir: Filesystem Directory – select a directory in your Linux file system, for example /home/user/vm-storage. This choice of storage pool predetermines the options for your storage volume – either Qcow2 or RAW. Both create a file in your file system that holds the entire VM image. This is the easiest way to get started with virtualization and has more benefits than drawbacks. Performance is pretty good too.
    2. disk: Physical Disk Device – if you have a disk to spare for your VMs, you could choose this option. BUT: this should not be confused with passing through a disk or storage controller, where Windows uses its own driver to directly access the storage device.
    3. fs: Pre-Formatted Block Device – like the first option, but the Qcow2 and RAW images are stored on a pre-formatted block device.
    4. logical: LVM Volume Group – this is my favorite, but it’s not everyone’s cup of tea. In terms of performance, it’s considered second only to passing through the disk controller. Volume groups and individual LV volumes can span multiple disks, have snapshot capability and much more. LVM has a learning curve, and unfortunately there is no longer a GUI to manage logical volumes (the Gnome tool system-config-lvm was perfect for the job, but it’s discontinued 😥). Unless you are familiar with LVM, I cannot recommend it, despite its many benefits.
    5. zfs: ZFS Pool – integrates the file system with LVM capabilities, provides redundancy and includes protection against data corruption. In a way, ZFS replaces LVM and improves on it. However, it’s not for the faint-hearted. If you are familiar with ZFS, by all means go for it.

For most of us the choice will be option 1 – dir: Filesystem Directory. Make sure to select the Target Path of YOUR choice:

Configure storage pool

Once you have the storage pool set up, you need to configure the storage volume for your Windows 10 VM. 50 GB of storage is probably the minimum; if you plan to install software and games, consider 300 GB or more:

Create a storage volume for the Windows VM

After specifying the name and capacity of your VM storage volume, click [Finish].
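
If you prefer the command line, the same Qcow2 volume can be created with qemu-img (path and size below are just examples – use your own):

qemu-img create -f qcow2 /home/user/vm-storage/win10.qcow2 300G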

In the window below, tick the “Customize configuration before install” box. This step is crucial!

Specify the name of the VM. In my case I selected a preconfigured bridge under “Network selection”. Bridging is the preferred network setup for wired connections:

Customize configuration before install

Click [Finish] when done.

In the following “Overview” window, make sure that “Chipset: Q35” is selected.

Under “Firmware”, select “UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd” as shown below. Click [Apply]:

Select UEFI firmware

Select the “CPUs” configurator in the left column to configure the number of CPUs the VM can have. I’ve selected 12 vCPUs out of a total of 24. Make sure both “Current allocation” and “Maximum allocation” have the same number of CPUs.

Untick “Copy host CPU configuration”. In the “Model:” field below, type “host-passthrough”. Do NOT select from the drop-down menu – this option isn’t available (yet). There can be a substantial performance difference between “copy host CPU configuration” and “host-passthrough” – you want the latter.

Note for AMD Ryzen and EPYC users: There are predefined EPYC and EPYC-IBPB options in the drop-down menu that work well with AMD EPYC and Ryzen CPUs. You may want to try them out and see if they improve performance. What I’ve seen so far is that they can influence memory and CPU cache performance.

Select “Topology” and specify “Sockets: 1”, “Cores: 6”, and “Threads: 2”. This gives our Windows VM 12 virtual CPUs (vCPUs). Each vCPU represents 1 thread. The AMD Ryzen 3900X has 12 cores and 24 threads in total, so I am assigning half of the CPU resources to the VM. Make sure you select the right numbers for your CPU:

CPU configuration with host-passthrough

Select “SATA Disk 1” and open “Advanced options”. Select “Disk bus: VirtIO”. Under “Performance options”, select “Cache mode: none” and “IO mode: native” for best performance in most cases (later on you may want to experiment with “IO mode: threads” too). If the drive is an SSD or NVMe drive, select “Discard mode: unmap”, otherwise leave the default setting:

SATA drive with VirtIO driver and unmap (trim) for SSD

After clicking [Apply], the left column reads VirtIO Disk 1 to reflect the change:

SATA disk with VirtIO support for improved performance
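
For reference, these GUI settings translate into a disk <driver> line in the domain XML similar to the following (the type attribute here assumes a Qcow2 volume):

<driver name="qemu" type="qcow2" cache="none" io="native" discard="unmap"/>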

Select “NIC:…” and choose “Device model: virtio” for improved network speed:

VirtIO support for network

Select the “Sound ich9” device and make sure HDA (ICH9) is specified. Recent Windows 10 releases do not work with ICH6:

Configure HDA (ICH9) for sound device

Click [Add Hardware] and select “Storage” at the top of the list. Choose “Select or create custom storage”, select the path to the virtio-win-…iso file and select “Device type: CDROM device”:

Configure VirtIO driver ISO as CD-ROM

Now comes the graphics card. Click [Add Hardware] and select “PCI Host Device”. Select the first entry of the GPU you wish to pass through, in our case 0000:0B:00.0 (the GTX 970), and click [Finish]:

Add passthrough GPU as PCI host device

Repeat the last step for all devices associated with this GPU (that is, all devices in the same IOMMU group that must be passed through). In my case it’s only one more device – the audio device of the GPU – 0000:0B:00.1. Click [Finish] when done:

GPU audio part needs to be configured for passthrough, too

After you have configured your passthrough GPU as PCI host devices (modern GPUs often consist of 4 devices – graphics, audio, USB and UCSI), you may need to add additional PCI devices to pass through. For example, the disk controller of your Windows drive (see above “Create a new virtual machine – Step 4 of 5”), a USB controller, or a sound card.

Note regarding the Gigabyte X570 Aorus Pro motherboard: I tried to pass through the USB host controller at 0e:00.3 (IOMMU group 33), but that didn’t work.
Good news: The second USB host controller in IOMMU group 22 works!!! You need to pass through the following PCI host devices: 0000:08:00.0, 0000:08:00.1 and 0000:08:00.3.
For a nice little bash script that lists the USB bus and IOMMU group associations, see the post by Level1Techs forum member “two2”.
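
If you just want a quick overview of the USB controllers your board exposes – and their PCI bus IDs to match against the IOMMU listing above – the following is enough:

lspci -nn | grep -i usb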

For the GPU passthrough device you’ll get a screen like below, with the ROM BAR option ticked. Leave as is:

Leave the ROM BAR option ticked

It’s time to configure the “Boot Options”. If you are going to install Windows onto a new storage device, select the boot order as shown in the screenshot below. If you have Windows installed on a drive and pass through that drive / controller to the VM, change the order so that the PCI device is the first in the list. In any case, make sure to tick “Enable boot menu”:

Configure the boot options – in the screenshot above the VM will boot from the Windows 10 ISO

Unless you are passing through a USB host controller via PCI passthrough, you need to pass through your keyboard and mouse using the USB host device option.

Click [Add Hardware] and select “USB Host Device”. Select your mouse to pass through to the VM and click [Finish]. Repeat this step for your keyboard:

Specify the USB mouse and keyboard to be used with the Windows VM

Perhaps you noticed that I removed many predefined devices that aren’t needed. You may want to keep the Graphics, Video, Tablet and some other devices to have an emulated display in a window. This can be useful for debugging, since otherwise you would not be able to use your mouse and keyboard on the host while the guest is running.

Note: Unlike the setup described in this tutorial, I use a multi-device wireless mouse and keyboard that connect to two different USB receivers. I also pass through one of the two USB controllers as a PCI device. This allows me to switch the mouse and keyboard between host and guest at the press of a button. Unless the host freezes, I’m always in control.

I chose to get rid of all the useless parallel, serial, tablet, etc. devices that clutter the configuration screen. Not only that, they also demand CPU and other resources from the guest, so why have them there in the first place?

Here is how my Windows 10 configuration screen looks after the cleanup:

List of devices that have been configured for the VM – the unifying receiver is connected to both mouse and keyboard

The Unifying Receiver shown in the screenshot receives input from both the mouse and the keyboard.

This is how far the GUI support goes. There are still a number of steps to perform before you can start the Windows VM!

Additional XML Configurations

The configuration capabilities of Virtual Machine Manager are limited. Luckily, recent versions include an integrated XML editor. In order to use it, tick “Enable XML editing” under “Edit->Preferences”.

In the configuration window, select XML:

Virtual Machine Manager XML configuration screen

If you pass through an Nvidia graphics card, you need to add:

<vendor_id state="on" value="0123456789ab"/>

as well as:

<kvm>
   <hidden state="on"/>
</kvm>

(Note: Professional Nvidia cards from the Quadro 2000 upwards do not require the “vendor_id” and “hidden state” entries to fool the Nvidia driver – Nvidia qualifies them for use in virtual environments.)

For better performance, enable the Hyper-V Enlightenments:

<vpindex state='on'/>
<synic state='on'/>
<stimer state='on'/>
<frequencies state='on'/>

The aforementioned options go into the following locations:

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
    <vendor_id state="on" value="0123456789ab"/>
    <vpindex state="on"/>
    <synic state="on"/>
    <stimer state="on"/>
    <frequencies state="on"/>
  </hyperv>
  <kvm>
    <hidden state="on"/>
  </kvm>
  <vmport state="off"/>
</features>

Click [Apply].

In order to use our predefined hugepages memory, insert:

<memoryBacking>
  <hugepages/>
</memoryBacking>

as shown below:

<memory>16777216</memory>
<currentMemory>16777216</currentMemory>
<memoryBacking>
  <hugepages/>
</memoryBacking>

Verify that “memory” and “currentMemory” have the same values and are multiples of 1024 (16777216/1024 = 16384). The values are given in KiB – 16777216 KiB equal the 16 GiB we reserved as hugepages.

Click [Apply].

Let’s look once more at the CPU topology options. The Ryzen 3900X is a 12-core/24-thread CPU. In my own system I give the Windows VM all CPU cores/threads. That seems to work well with running Adobe Lightroom and Photoshop under Windows. However, if you run multiple VMs simultaneously, or if you use the VM for gaming, there are better strategies.

As mentioned before, in this tutorial we are going to assign 1 socket (the system has only 1 CPU), 6 cores/12 threads to the VM. In addition we specify the “topoext” option to let the guest know about the CPU architecture. “cache passthrough” passes the actual CPU cache information to the virtual machine.

<cpu mode="host-passthrough" check="none">
  <topology sockets="1" cores="6" threads="2"/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
</cpu>
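
Note that sockets × cores × threads must equal the vCPU count defined near the top of the XML; in our example that line should read:

<vcpu placement="static">12</vcpu>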

Click [Apply].

There are additional performance tweaks such as iothreads, CPU pinning etc. But for now I’d like to focus on getting the VM to work. That requires some tricks described in the next chapter.

Bugs and Regressions

Unfortunately QEMU 3.1 and 4.0 introduced some regressions or bugs. For more information, see Windows 10 client issues.

Let’s tackle them one by one:

Qemu 4.0.0 hangs the host and Windows 10 client

QEMU 4.0.0 hangs the host and Windows 10 client, for example when passing through an Nvidia card. For an under-the-hood explanation see here.

Solution for virt-manager: Add

<ioapic driver="kvm"/>

to the configuration as shown below:

  </kvm>
  <vmport state="off"/>
  <ioapic driver="kvm"/>
</features>

Note: When using a QEMU script instead of libvirt, add the following to the -machine options of the qemu-system-x86_64 command:

kernel_irqchip=on

This workaround disables the split IRQ chip that QEMU 4.0.0 introduced as the default. It should not affect performance.

Note: The issue has been resolved in kernel 5.6, where this option is no longer required (please check to make sure).

vhost_region_add_section: Overlapping but not coherent sections

This bug can lead to network disconnections and a drop in performance. You may or may not notice the issue, but check your win10.log file under:

/var/log/libvirt/qemu

Here is what I found in my win10.log file:

2020-03-20T09:58:01.415434Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 108000
2020-03-20T09:58:01.415435Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 109000
2020-03-20T09:58:01.415436Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 10a000

As a workaround to the problem, we need to disable vhost. Add the following to the <interface> section of your XML configuration:

<driver name="qemu"/>

like this:

<interface type="bridge">
  <mac address="52:54:00:e1:49:c3"/>
  <source bridge="br0"/>
  <model type="virtio"/>
  <driver name="qemu"/>
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>

Note 1: Disabling vhost does impede network performance, but it is far better than any of the other choices. In most cases you won’t notice a difference (we are talking around 2.5 Gbit/s versus 10 Gbit/s – but let’s face it, can you use the full 10 Gbit/s bandwidth?). Another workaround is to turn off the hypervisor enlightenment “stimer” (see above). Feel free to experiment.

Note 2: QEMU version 5.0 fixes this issue! (Manjaro and Arch Linux users can get QEMU 5.0…RC via AUR as “qemu-git”. Ubuntu and Pop!_OS users will have to build it from source, or wait.)
Important: QEMU 5.0.0-5, the version Manjaro shipped at the time of writing, contains a bug – use qemu-git from AUR instead. QEMU 5.0.0-6 fixes the problem.

No sound – pulseaudio fails

QEMU 4.0 brings improved audio support, but not entirely without hiccups. First, virt-manager 2.2.1 doesn’t yet support the new syntax, so we need to configure the audio support manually using QEMU commands.

Inside the Virtual Machine Manager GUI, at the very top of the XML configuration, change:

<domain type="kvm">

to:

<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">

This is how it looks then:

<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <name>win10</name>
  <uuid>d23aeb98-etc-pp-and-moreofit</uuid>
  <metadata>

After the above declaration, we can insert the QEMU option into the XML configuration. But first we need to identify our pulseaudio sound server. Enter in a terminal window:

pax11publish -d

Server: {a14b_lotsofit_884c}unix:/run/user/1000/pulse/native
Cookie: 3417_even_more_numbers_letters_isnt_it_fun_1accb9…

Note the location of the sound server /run/user/1000/pulse/native – we need it in the following step.

Inside Virtual Machine Manager, place the following lines at the bottom of the XML configuration, just above the last line </domain>:

<qemu:commandline>
  <qemu:arg value="-audiodev"/>
  <qemu:arg value="pa,id=pa1,server=/run/user/1000/pulse/native"/>
</qemu:commandline>

We want to run the VMs under our own user name. Edit the following file with your editor of choice:

sudo nano /etc/libvirt/qemu.conf

and search for the “user =” entry. Specify your user name and remove the leading hash (#):

user = "myusername"

Save and close the file. Now restart libvirtd:

sudo systemctl restart libvirtd

Unfortunately, that may not be enough. First, test the new setting by starting the Windows VM. If you still get a “sound disabled” icon in Windows and can’t find the hda sound device in the Windows Sound Troubleshooter, shut down the VM.

Check the following log file:

cat /var/log/libvirt/qemu/win10.log

with “win10” being the name of your Windows VM as specified in Virtual Machine Manager. At or towards the end of the log you should see the following:

pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA context
audio: Could not init `pa' audio driver
audio: warning: Using timer based audio emulation

Some suggest granting the root user access to the sound server. Assuming you run virt-manager / libvirt as root, that might work.

Before you go any further on that, have a look at syslog:

cat /var/log/syslog | grep DENIED

Mar 21 00:16:11 mypc kernel: [45172.799423] audit: type=1400 audit(1584742571.201:57): apparmor="DENIED" operation="open" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/etc/pulse/client.conf.d/" pid=14959 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Mar 21 00:16:11 mypc kernel: [45172.799431] audit: type=1400 audit(1584742571.201:58): apparmor="DENIED" operation="open" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/dev/shm/" pid=14959 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Mar 21 00:16:11 mypc kernel: [45172.799554] audit: type=1400 audit(1584742571.201:59): apparmor="DENIED" operation="connect" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/run/user/1000/pulse/native" pid=14959 comm="qemu-system-x86" requested_mask="wr" denied_mask="wr" fsuid=1000 ouid=1000

As you can see, AppArmor is the culprit. To test this theory, we only need to disable AppArmor for libvirt/QEMU and restart the libvirt daemon. Use your favorite editor and once again edit the following file:

sudo nano /etc/libvirt/qemu.conf

Look for “security_driver” and change it to read as follows (without the leading hash!):

security_driver = "none"

Save the file and restart libvirtd:

sudo systemctl restart libvirtd

Now start your Windows VM and see if it worked. By now you should have sound output. If not, check the log files again.

Assuming sound is working, we need to modify the default AppArmor configuration for new and existing VMs. Edit:

sudo nano /etc/apparmor.d/abstractions/libvirt-qemu

and insert the following access rules below the line reading “/var/lib/dbus/machine-id r,”:

/etc/pulse/client.conf.d/** r,
/dev/shm/ r,
owner /run/user/1000/pulse/native rw,
/etc/machine-id r,

Note: The “owner” statement is optional – if you keep having trouble with sound, remove “owner” and see if it helps.

Save and quit.

Re-enable AppArmor:

sudo nano /etc/libvirt/qemu.conf

and comment the security_driver line out again by adding the hash:

#security_driver = "none"

Save and quit. Then restart libvirtd:

sudo systemctl restart libvirtd

and start the Windows VM. This should work.

Performance Tuning

The Windows VM you just created should already perform very well. But there are definitely ways to further improve performance. Instead of repeating what I or others have already written, see the links in the Credits section below – both tutorials referenced there include performance tuning and optimization sections.

In case the Virtual Machine Manager didn’t give you the choice to select a bridge as the network interface, you may want to configure one. Note that a bridge only works with wired connections, not Wi-Fi.

To create a network bridge to be used by virt-manager, follow these steps:

    1. sudo apt install network-manager-gnome
    2. In a terminal window, type nm-connection-editor
    3. Set up the network as described here (or see the nmcli sketch below this list).
    4. Within Virtual Machine Manager, select the bridged network connection.
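
As a rough command-line sketch of step 3, the bridge can also be created with nmcli – the interface name enp4s0 is an assumption, substitute your own wired interface:

nmcli connection add type bridge con-name br0 ifname br0
nmcli connection add type bridge-slave con-name br0-port ifname enp4s0 master br0
nmcli connection up br0

By default the bridge obtains its IP address via DHCP; adjust the settings in nm-connection-editor if you need a static address.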

Benchmarks

Have a look at my first benchmarks here.

Or see my latest Passmark 10 benchmark.

Credits

I used literally hundreds of online sources to research this tutorial, in addition to my own notes. However, among the many sources, there are some that stick out.

First and foremost is Bryan Steiner’s comprehensive “gpu-passthrough-tutorial“. He describes a number of new concepts in his refreshing and very well written tutorial. It also includes a chapter about performance tuning that you definitely should look into.

Another great source is Mathias Hueber’s “Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04; or how to play competitive games in a virtual machine“. It contains a wealth of information as well as an optimization and troubleshooting section.

For a more comprehensive list of resources, see the References section in my “Running Windows 10 on Linux using KVM with VGA Passthrough” tutorial.
