Running Windows 10 on Linux using KVM with VGA Passthrough

The Need

You want to use Linux as your main operating system, but still need Windows for certain applications unavailable under Linux. You need top notch (3D) graphics performance under Windows for computer games, photo or video editing, etc. And you do not want to dual-boot into Linux or Windows. In that case read on.

Many modern CPUs have built-in features that improve the performance of virtual machines (VM), up to the point where virtualised systems are indistinguishable from non-virtualised systems. This allows us to create virtual machines on a Linux host platform without compromising performance of the (Windows) guest system.

For some benchmarks of my current system, see Windows 10 Virtual Machine Benchmarks

The Solution

In the tutorial below I describe how to install and run Windows 10 as a KVM virtual machine on a Linux Mint or Ubuntu host. The tutorial uses a technology called VGA passthrough (also referred to as “GPU passthrough” or “vfio” for the vfio driver used) which provides near-native graphics performance in the VM. I’ve been doing VGA passthrough since summer 2012, first running Windows 7 on a Xen hypervisor, switching to KVM and Windows 10 in December 2015. The performance – both graphics and computing – under Xen and KVM has been nothing less than stellar!

The tutorial below will only work with suitable hardware! If your computer does not fulfill the basic hardware requirements outlined below, you won’t be able to make it work.

The tutorial is not written for the beginner! I assume that you do have some Linux background, at least enough to be able to restore your system when things go wrong.

I am also providing links to other, similar tutorials that might help. Last but not least, you will find links to different forums and communities where you can find further information and help.

Note: The tutorial was originally posted on the Linux Mint forum.

Disclaimer

All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.

You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.

For a glossary of terms used in the tutorial, see Glossary of Virtualization Terms.

Tutorial

Note for Ubuntu users: My tutorial uses the “xed” command found in Linux Mint Mate to edit documents. You will have to replace it with “gedit” or whatever editor you use in Ubuntu/Xubuntu/Lubuntu…

Important Note: This tutorial has been edited to reflect the latest Linux Mint 19 and Ubuntu 18.04 syntax. If you use an older release, make sure to use the appropriate commands.

Part 1 – Hardware Requirements

For this tutorial to succeed, your computer hardware must fulfill all of the following requirements:

IOMMU support

In Intel jargon it's called VT-d. AMD calls it variously AMD Virtualisation, AMD-Vi, or Secure Virtual Machine (SVM); sometimes it is simply labelled IOMMU. If you plan to purchase a new PC/CPU, check the following websites for more information:

Unfortunately IOMMU support, specifically ACS (Access Control Services) support, varies greatly between different Intel CPUs and CPU generations. Generally speaking, Intel provides better ACS or device isolation capabilities for its Xeon and LGA 2011 / LGA 2066 high-end desktop CPUs than for other VT-d enabled CPUs.

The first link above provides a non-comprehensive list of CPU/motherboard/GPU configurations where users were successful with GPU passthrough. When building a new PC, make sure you purchase components that support GPU passthrough.

Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable it in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:

  1. Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).
  2. Search for IOMMU, VT-d, SVM, or “virtualisation technology for directed IO” or whatever it may be called on your system. Turn on VT-d / IOMMU.
  3. Save and Exit BIOS and boot into Linux.
  4. Edit the /etc/default/grub file (you need root permission to do so). Open a terminal window (Ctrl+Alt+T) and enter (copy/paste):
    xed admin:///etc/default/grub
    (use gksudo gedit /etc/default/grub for older Linux Mint/Ubuntu releases)
    Here is my /etc/default/grub file before the edit:
    GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=10
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT_STYLE=countdown
    GRUB_TIMEOUT=0
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    GRUB_CMDLINE_LINUX=""

    Look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT="…". You need to add one of the following options to this line, depending on your hardware:
    Intel CPU:
    intel_iommu=on
    AMD CPU:
    amd_iommu=on
    Save the file and exit. Then type:
    sudo update-grub
  5. Now check that IOMMU is actually supported. Reboot the PC. Open a terminal window.
    On AMD machines use:
    dmesg | grep AMD-Vi
    The output should be similar to this:

    AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
    AMD-Vi: Lazy IO/TLB flushing enabled
    AMD-Vi: Initialized for Passthrough Mode

    Or use:
    cat /proc/cpuinfo | grep svm

On Intel machines use:
dmesg | grep "Virtualization Technology for Directed I/O"

The output should be this:
[ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O

If you do not get this output, then VT-d or AMD-Vi is not working – you need to fix that before you continue! Most likely your hardware (CPU) doesn’t support IOMMU, in which case there is no point in continuing this tutorial 😥. Check again to make sure your CPU supports IOMMU. If it does, the cause may be a faulty motherboard BIOS. See if you can find a newer version and update your motherboard BIOS (be careful, flashing the BIOS can potentially brick your motherboard).
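Whichever vendor you have, you can also quickly confirm that the CPU advertises hardware virtualisation at all (vmx = Intel VT-x, svm = AMD-V). This is a small convenience sketch; note it checks CPU virtualisation, not the IOMMU (VT-d/AMD-Vi) support covered by the dmesg commands above:

```shell
# Look for the hardware virtualisation flag in /proc/cpuinfo:
# vmx = Intel VT-x, svm = AMD-V.
flag=$(grep -E -o -m1 'vmx|svm' /proc/cpuinfo || true)
echo "${flag:-no virtualisation flag found}"
```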

Two graphics processors

In addition to a CPU and motherboard that support IOMMU, you need two graphics processors (GPU):

1. One GPU for your Linux host (the OS you are currently running, I hope);

2. One GPU (graphics card) for your Windows guest.

We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest, as needed. Unfortunately the GPU cannot be switched or shared between the two OSes, at least not in an easy way. (There are ways to reset the graphics card as well as the X server in Linux so you could get away with one graphics card, but I personally believe it’s not ideal. See for example here and here for more on that.)

If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, you’ll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP). (You can also create a Linux VM with GPU passthru if you need Linux for gaming or graphics intensive applications.)

UEFI support in the GPU used with Windows

In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI – most newer cards do. You can check here if your video card and BIOS support UEFI. If you run Windows, download and run GPU-Z and see if there is a check mark next to UEFI. (For more information, see here.)

[Image: GPU-Z showing the details of an Nvidia GTX 970 graphics card]

There are several advantages to UEFI, namely that it starts faster and avoids some issues associated with legacy (SeaBIOS) boot.

If you plan to use the Intel IGD (integrated graphics device) for your Linux host, UEFI boot is the way to go. UEFI overcomes the VGA arbitration problem associated with the IGD and the use of the legacy Seabios.
If, for some reason, you cannot boot the VM using UEFI and you want to use the Intel IGD for the host, you need to compile the i915 VGA arbiter patch into the kernel. Before you do, check the note below. For more on VGA arbitration, see here. For the i915 VGA arbiter patch, look here or under Part 15 – References.

Note: If your GPU does NOT support UEFI, there is still hope. You might be able to find a UEFI BIOS for your card at TechPowerUp Video BIOS Collection. A Youtube blogger calling himself Spaceinvader has produced a very helpful video on using a VBIOS.

If there is no UEFI video BIOS for your Windows graphics card, you will have to look for a tutorial using the SeaBIOS method. It’s not much different from this one, but there are some things to consider.

Laptop users with Nvidia Optimus technology: Misairu_G (username) published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology – see GUIDE to VGA passthrough on Nvidia Optimus laptops. (For reference, here are some older posts on the subject: https://forums.linuxmint.com/viewtopic.php?f=231&t=212692&p=1300764#p1300634.)

Part 2 – Installing Qemu / KVM

The Qemu release shipped with Linux Mint 19 is version 2.11 and supports the latest KVM features.

In order to have Linux Mint “remember” the installed packages, use the Software Manager to install the following packages:

qemu-kvm
qemu-utils
seabios
ovmf
hugepages
cpu-checker

[Image: Linux Mint Software Manager]

For AMD Ryzen, see also here (note that Linux Mint 19/Ubuntu 18.04 only require the BIOS update).

Alternatively, use
sudo apt install qemu-kvm qemu-utils seabios ovmf hugepages cpu-checker
to install the required packages.

Part 3 – Determining the Devices to Pass Through to Windows

We need to find the PCI ID(s) of the graphics card and perhaps other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use 2 graphics cards. Here is my hardware setup:

  • GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
  • GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.

To determine the PCI bus number and PCI IDs, enter:
lspci | grep VGA

Here is the output on my system:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)

The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.

Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:
lspci -nn | grep 02:00.

Substitute "02:00." with the bus number of the graphics card you wish to pass to Windows, without the trailing "0". Here is the output on my computer:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)

Write down the bus numbers (02:00.0 and 02:00.1 above), as well as the PCI IDs (10de:13c2 and 10de:0fbb in the example above). We need them in the next part.
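To avoid copying the IDs by hand, the [vendor:device] pairs can also be pulled out of the lspci -nn output with a small filter. This is just a convenience sketch – extract_ids is a made-up helper name, and on a real system you would pipe live lspci -nn output through it:

```shell
# Extract the [vendor:device] IDs for all functions of a given PCI bus prefix.
extract_ids() {
  grep "^$1" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -s -d, -
}

# Example using the sample lspci -nn output from above:
printf '%s\n' \
  '02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)' \
  '02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)' \
  | extract_ids "02:00."
# → 10de:13c2,10de:0fbb
```

The comma-separated result is exactly the format needed later for the options vfio-pci ids= line in Part 4.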

Now check to see that the graphics card resides within its own IOMMU group:
find /sys/kernel/iommu_groups/ -type l

For a sorted list, use:
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

Look for the bus number of the graphics card you want to pass through. Here is the (shortened) output on my system:

/sys/kernel/iommu_groups/19/devices/0000:00:1f.3
/sys/kernel/iommu_groups/20/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:01:00.1
/sys/kernel/iommu_groups/21/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:02:00.1
/sys/kernel/iommu_groups/22/devices/0000:05:00.0
/sys/kernel/iommu_groups/22/devices/0000:06:04.0

Make sure the GPU and perhaps other PCI devices you wish to pass through reside within their own IOMMU group. In my case the graphics card and its audio controller designated for passthrough both reside in IOMMU group 21. No other PCI devices reside in this group, so all is well.

If your VGA card shares an IOMMU group with other PCI devices, see IOMMU GROUP CONTAINS ADDITIONAL DEVICES for a solution!

The next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run 2 independent operating systems side by side, and we control them via mouse and keyboard.


About keyboard and mouse

Depending whether and how much control you want to have over each system, there are different approaches:

1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse, as well as a VGA or (more expensive) DVI or HDMI graphics output. In addition, the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is a viable solution. See also my Virtualization Hardware Accessories post.
Advantages:
– Works without special software in the OS, just the usual mouse and keyboard drivers;
– Best in performance – no software overhead.
Disadvantages:
– Requires extra (though inexpensive) hardware;
– More cable clutter and another box with cables on your desk;
– Requires you to press a button to switch between host and guest and vice versa;
– Need to pass through a USB port or controller – see below on IOMMU groups.

2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
– Easy to implement;
– No money to invest;
– Good solution for setting up Windows.

There are at least two ways to accomplish this task. I will describe both options.

3. Synergy (http://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
– Most versatile solution, especially with dual screens;
– Software only, easy to configure;
– No hardware purchase required.
Disadvantages:
– Requires the installation of software on both the host and the guest;
– Doesn’t work during Windows installation (see option 2);
– Costs $10 for a Basic, lifetime license;
– May produce lag, although I doubt you’ll notice unless there is something wrong with the bridge configuration.

4. “Multi-device” bluetooth keyboard and mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
– Most convenient solution;
– Same performance as option 1.
Disadvantages:
– Price.
– Make sure the device supports Linux, or that you can return it if it doesn’t!

I first went with option 1 for robustness and universality, but have replaced it with option 4. I’m now using a Logitech MX master BT mouse and a Logitech K780 BT keyboard. See here for how to pair these devices to the USB dongles.

Both options 1 and 4 usually require you to pass through a USB PCI device to the Windows guest. I had a need for both USB2 and USB3 ports in my Windows VM, and I was able to pass through two USB controllers to my Windows guest using PCI passthrough.


For the VM installation we choose option 2 (see above), that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:
lsusb

Here is my system output (truncated):

Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600

Note down the IDs: 045e:076c and 045e:0750 in my case.
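If you prefer not to pick the IDs out by eye, a one-line sed filter can pull them from the lsusb output. get_usb_ids is a hypothetical helper name; on your machine, pipe something like `lsusb | grep -i mouse` through it:

```shell
# Pull just the vendor:product ID out of each lsusb line.
get_usb_ids() { sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p'; }

# Example with the sample lsusb lines from above:
printf '%s\n' \
  'Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500' \
  'Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600' \
  | get_usb_ids
# → 045e:076c and 045e:0750, each on its own line
```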

Part 4 – Prepare for Passthrough

In order to make the graphics card available to the Windows VM, we will assign a “dummy” driver as a placeholder: vfio-pci. To do that, we first have to prevent the default driver from binding to the graphics card.

(One way to accomplish that is by blacklisting driver modules, or by using Kernel Mode Settings. For more on Kernel Mode Setting, see https://wiki.archlinux.org/index.php/kernel_mode_setting.)

Note: If you have two identical graphics cards for both the host and the VM, the method below won’t work. In that case see Using the driver_override feature.

The best method to ensure that the graphics card (or any other PCI device) uses the vfio-pci driver is to create a module alias (thanks to this post).

Run the following command:
cat /sys/bus/pci/devices/0000:02:00.0/modalias
where 0000:02:00.0 is the PCI bus number of your graphics card obtained in Part 3 above. The output will look something like:
pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00

Repeat above command with the PCI bus number of the audio part:
cat /sys/bus/pci/devices/0000:02:00.1/modalias
where 0000:02:00.1 is the PCI bus number of your graphics card’s audio device.

In the terminal window, enter the following:
sudo -i
followed by your password to have a root terminal.

Open or create /etc/modprobe.d/local.conf:
xed admin:///etc/modprobe.d/local.conf
and copy and paste the results from the two cat /sys/… commands above. Then prefix each line with “alias” and append “vfio-pci” to it, as shown below:
alias pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00 vfio-pci
alias pci:v000010DEd00000FBBsv00001458sd00003679bc04sc03i00 vfio-pci

At the end of that file, add the following line:
options vfio-pci ids=10de:13c2,10de:0fbb
where 10de:13c2 and 10de:0fbb are the PCI IDs for your graphics card’s VGA and audio part, as determined in the previous paragraph.

You can also add the following option below the options vfio-pci entry:
options vfio-pci disable_vga=1
(The above entry is only valid for 4.1 and newer kernels and UEFI guests. It helps prevent VGA arbitration from interfering with host devices.)

Save the file and exit the editor.
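Incidentally, the modalias strings already encode the PCI IDs needed for the options line, so both parts of local.conf can be derived from the same data. Below is a sketch; modalias_to_id is a made-up helper, and on a real system you would feed it the output of `cat /sys/bus/pci/devices/0000:02:00.0/modalias`:

```shell
# Derive the vendor:device ID (e.g. 10de:13c2) from a PCI modalias string.
modalias_to_id() {
  echo "$1" | sed -n 's/^pci:v0000\([0-9A-Fa-f]\{4\}\)d0000\([0-9A-Fa-f]\{4\}\).*/\1:\2/p' \
    | tr 'A-F' 'a-f'
}

m='pci:v000010DEd000013C2sv00001458sd00003679bc03sc00i00'
echo "alias $m vfio-pci"   # the alias line for local.conf
modalias_to_id "$m"        # → 10de:13c2, for the options vfio-pci ids= line
```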

Some applications like Passmark and Windows 10 releases 1803 and newer require the following option:
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf

To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:
xed admin:///etc/initramfs-tools/modules

At the end of the file add in the order listed below:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net

Save and close the file.

Any changes in /etc/modprobe.d require you to update the initramfs. Enter at the command line:
update-initramfs -u

Part 5 – Network Settings

For performance reasons it is best to create a virtual network bridge that connects the VM with the host. In a separate post I have written a detailed tutorial on how to set up a bridge using Network Manager.

Note: Bridging only works for wired networks. If your PC is connected to a router via a wireless link (Wifi), you won’t be able to use a bridge. The easiest way to get networking inside the Windows VM is then not to configure any network at all – delete the network configuration from the qemu command (script), and qemu will fall back to its default user-mode networking. If you still want to use a bridged network, there are workarounds such as routing or ebtables (see https://wiki.debian.org/BridgeNetworkConnections#Bridging_with_a_wireless_NIC).

Once you’ve setup the network, reboot the computer and test your network configuration – open your browser and see if you have Internet access.

Part 6 – Setting up Hugepages

Moved to Part 18 – Performance Tuning. This is a performance tuning measure and not required to run Windows on Linux. See Configure Hugepages under Part 18 – Performance Tuning.

Part 7 – Download the VFIO drivers

Download the VFIO driver ISO to be used with the Windows installation from https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html. Below are the direct links to the ISO images:

Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso

Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

I chose the latest driver ISO.

Part 8 – Prepare Windows VM Storage Space

We need some storage space on which to install the Windows VM. There are several choices:

  1. Create a raw image file.
    Advantages:
    – Easy to implement;
    – Flexible – the file can grow with your requirements;
    – Snapshots;
    – Easy migration;
    – Good performance.
    Disadvantages:
    – Takes up the entire space you specify.
  2. Create a dedicated LVM volume.
    Advantages:
    – Familiar technology (at least to me);
    – Excellent performance, like bare-metal;
    – Flexible – you can add physical drives to increase the volume size;
    – Snapshots;
    – Mountable within Linux host using kpartx.
    Disadvantages:
    – Takes up the entire space specified;
    – Migration isn’t that easy.
  3. Pass through a PCI SATA controller / disk.
    Advantages:
    – Excellent performance, using original Windows disk drivers;
    – Allows the use of Windows virtual drive features;
    – Can use an existing bare-metal installation of Windows in a VM;
    – Possibility to boot Windows directly, i.e. not as VM;
    – Possible to add more drives.
    Disadvantages:
    – The PC needs at least two discrete SATA controllers;
    – Host has no access to disk while VM is running;
    – Requires a dedicated SATA controller and drive(s);
    – SATA controller must have its own IOMMU group;
    – Possible conflicts in Windows between bare-metal and VM operation.

For further information on these and other image options, see here: https://en.wikibooks.org/wiki/QEMU/Images

Although I’m using an LVM volume, I suggest you start with the raw image. Let’s create a raw disk image:

qemu-img create -f raw -o preallocation=full /media/user/win.img 100G

for best performance, or simply:

fallocate -l 100G /media/user/win.img

Note: Adjust the size (100G) and path to match your needs and resources.
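As a quick sanity check after creating the image, verify its reported size. The sketch below uses a small throwaway sparse file created with truncate so it is cheap to run; for the real image use the fallocate or qemu-img commands above with your actual path and 100G size:

```shell
# Create a throwaway sparse file and verify its size in bytes.
# /tmp/win-example.img and the 100M size are example values only.
img="/tmp/win-example.img"
truncate -s 100M "$img"
stat -c %s "$img"    # 100 MiB = 104857600 bytes
rm "$img"
```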

See also my post on Tuning VM disk performance.

Part 9 – Check Configuration

It’s best to check that we got everything:

KVM: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

KVM module: lsmod | grep kvm
kvm_intel 200704 0
kvm 593920 1 kvm_intel
irqbypass 16384 2 kvm,vfio_pci

Above is the output for the Intel module.

VFIO: lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 kvm,vfio_pci
vfio_iommu_type1 24576 0
vfio 32768 2 vfio_iommu_type1,vfio_pci

QEMU: qemu-system-x86_64 --version
You need QEMU emulator version 2.5.0 or newer. On Linux Mint 19 / Ubuntu 18.04 the QEMU version is 2.11.
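If you want to check the version requirement in a script rather than by eye, GNU `sort -V` can compare version strings. A sketch (version_ge is a made-up helper name):

```shell
# Succeed if version $1 is at least version $2 (relies on GNU sort -V).
version_ge() { [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

version_ge 2.11.1 2.5.0 && echo "QEMU is new enough"
# On a real system, extract the installed version first, e.g.:
# version_ge "$(qemu-system-x86_64 --version | grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+' | head -n1)" 2.5.0
```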

Did vfio load and bind to the graphics card?
lspci -kn | grep -A 2 02:00
where 02:00 is the bus number of the graphics card to pass to Windows. Here the output on my PC:
02:00.0 0300: 10de:13c2 (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
02:00.1 0403: 10de:0fbb (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci

Kernel driver in use is vfio-pci. It worked!

Interrupt remapping: dmesg | grep VFIO
[ 3.288843] VFIO - User Level meta-driver version: 0.3
All good!

If you get this message:
vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
followed by:
update-initramfs -u
In this case you need to reboot once more.

Part 10 – Create Script to Start Windows

To create and start the Windows VM, copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):


#!/bin/bash

vmname="windows10vm"

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
echo "A passthrough VM is already running."
exit 1

else

# use pulseaudio
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SAMPLES=8192
export QEMU_AUDIO_TIMER_PERIOD=99
export QEMU_PA_SERVER=/run/user/1000/pulse/native

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

qemu-system-x86_64 \
-name $vmname,process=$vmname \
-machine type=q35,accel=kvm \
-cpu host,kvm=off \
-smp 4,sockets=1,cores=2,threads=2 \
-m 8G \
-balloon none \
-rtc clock=host,base=localtime \
-vga none \
-nographic \
-serial none \
-parallel none \
-soundhw hda \
-usb \
-device usb-host,vendorid=0x045e,productid=0x076c \
-device usb-host,vendorid=0x045e,productid=0x0750 \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=dc \
-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0
fi


Make the file executable:
sudo chmod +x windows10vm.sh

You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations on the qemu-system-x86 options:

-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and used in the script to determine if the VM is already running. Don’t use win10 as process name, for some inexplicable reason it doesn’t work!

-machine type=q35,accel=kvm
This specifies a machine to emulate. The accel=kvm option tells qemu to use the KVM acceleration – without it the Windows guest will run in qemu emulation mode, that is it’ll run real slow.
I have chosen the type=q35 option, as it improved my SSD read and write speeds. See https://wiki.archlinux.org/index.php/QEMU#Virtual_machine_runs_too_slowly. In some cases type=q35 will prevent you from installing Windows, instead you may need to use type=pc,accel=kvm. See the post here. To see all options for type=…, enter the following command:
qemu-system-x86_64 -machine help
Important: Several users passing through Radeon RX 480 and Radeon RX 470 cards have reported reboot loops after updating and installing the Radeon drivers. If you pass through a Radeon graphics card, it is better to replace the -machine line in the startup script with the following line:
-machine type=pc,accel=kvm
to use the default i440fx emulation.
Note for IGD users: If you have an Intel CPU with internal graphics (IGD), and want to use the Intel IGD for Windows, there is a new option to enable passthrough:
igd-passthru=on|off controls IGD GFX passthrough support (default=off).
In most cases you will want to use a discrete graphics card with Windows.

-cpu host,kvm=off
This tells qemu to emulate the host’s exact CPU. There are more options, but it’s best to stay with host.
The kvm=off option is only needed for Nvidia graphics cards – if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify -cpu host.

-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. -smp 4 tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. It’s probably best not to assign all CPU resources to the Windows VM – the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources in the host). In the above example I gave Windows 4 virtual processors. sockets=1 specifies the number of actual CPU sockets qemu should assign, cores=2 tells qemu to assign 2 processor cores to the VM, and threads=2 specifies 2 threads per core. It may be enough to simply specify -smp 4, but I’m not sure about the performance consequences (if any).
If you have a 4-core Intel CPU with hyper-threading, you can specify -smp 6,sockets=1,cores=3,threads=2 to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games and applications.
Note: If your CPU doesn’t support hyper-threading, specify threads=1.

-m 8G
The -m option assigns memory (RAM) to the VM, in this case 8 GByte. Same as -m 8192. You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn’t make sense to give it less than 4G, unless you are really stretched with RAM. If you use hugepages, make sure your hugepage size matches this!

-mem-path /dev/hugepages
This tells qemu where to find the hugepages we reserved. If you haven’t configured hugepages, you need to remove this option.

-mem-prealloc
Preallocates the memory we assigned to the VM.

-balloon none
We don’t want memory ballooning.

-rtc clock=host,base=localtime
-rtc clock=host tells qemu to use the host clock for synchronization. base=localtime allows the Windows guest to use the local time from the host system. Another option is utc.

-vga none
Disables the built-in graphics card emulation. You can remove this option for debugging.

-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don’t get to the Tiano Core screen.

-serial none
-parallel none
Disable serial and parallel interfaces. Who needs them anyway?

-soundhw hda
Together with the export QEMU_AUDIO_DRV=pa shell command, this option enables sound through PulseAudio.
If you want to pass through a physical audio card or audio device and stream audio from your Linux host to your Windows guest, see here: Streaming Audio from Linux to Windows.

-usb
-device usb-host,vendorid=0x045e,productid=0x076c
-device usb-host,vendorid=0x045e,productid=0x0750
-usb enables USB support and -device usb-host… assigns the USB host devices mouse (045e:076c) and keyboard (045e:0750) to the guest. Replace the device IDs with the ones you found using the lsusb command in Part 3 above!
Note the new syntax. There are also many more options that you can find here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.
There are three options to assign host devices to guests. Here is the syntax:
-usb \
-device usb-kbd \
-device usb-mouse \
passes through the keyboard and mouse to the VM. When using this option, remove the -vga none and -nographic options from the script to enable switching back and forth between Windows VM and Linux host using CTRL+ALT.
-usb \
-device usb-host,hostbus=bus,hostaddr=addr \

passes through the host device identified by bus and addr.
-usb \
-device usb-host,vendorid=vendor,productid=product \

passes through the host device identified by vendor and product ID.

-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
Here we specify the graphics card to pass through to the guest, using vfio-pci. Fill in the PCI IDs you found under Part 3 above. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.0 and 02:00.1 in my case).

-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn’t contain the variables, which are loaded separately (see right below).

-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.

-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the “d” to boot straight from disk.

-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. With the options above it will be accessed as a paravirtualized (if=virtio) drive in raw format (format=raw).
Important: file=/… enter the path to your previously created win.img file.
Other possible drive options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for entire disks or partitions.
For some basic -drive options, see my post here. For the new Qemu syntax and drive performance tuning, see Tuning VM Disk Performance.

-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
This attaches the Windows win10.iso as CD or DVD. The driver used is the ide-cd driver.
Important: file=/… enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
This attaches the virtio ISO image as CD. Note the different index.
Important: file=/… enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note 1: There are many ways to attach ISO images or drives and invoke drivers. My system didn’t want to take a second scsi-cd device, so this option did the job. Unless it doesn’t work for you, leave it as is.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network interface and network driver. It’s best to define a MAC address, here 00:16:3e:00:01:01. The MAC is specified in Hex and you can change the last :01:01 to your liking. Make sure no two MAC addresses are the same!
vhost=on is optional – some people reported problems with this option. It is for network performance improvement.
For more information: https://wiki.archlinux.org/index.php/QEMU#Network and https://wiki.archlinux.org/index.php/QEMU#Networking.
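If you run several VMs, each needs its own MAC address. One way to generate one in the same 00:16:3e (Xen OUI) prefix is the following sketch (it uses bash's $RANDOM; under a plain POSIX shell the suffix simply falls back to zeros):

```shell
# Generate a random MAC address in the Xen OUI range 00:16:3e:xx:xx:xx.
# Note: $RANDOM is a bash feature, assumed here for illustration.
printf '00:16:3e:%02x:%02x:%02x\n' \
  $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```

Paste the result into the mac= option of -device virtio-net-pci, and make sure no two VMs share an address.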

Important: Documentation on the installed QEMU can be found here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.

For syntax changes in newer versions, see https://wiki.qemu.org/Features/RemovedFeatures.

Linux Mint 19.1 and Ubuntu 18.04 come with QEMU 2.11, Ubuntu 18.10 with 2.12. Ubuntu 19.04 uses QEMU 3.1, which is also the latest stable version. For additional documentation on QEMU, see https://www.qemu.org/documentation/. Some configuration examples can be found in the following directory:
/usr/share/doc/qemu-system-common/config

Part 11 – Install Windows

Start the VM by running the script as root:
sudo ./windows10vm.sh
(Make sure you specify the correct path.)

You should get a Tiano Core splash screen with the memory test result.

You might land in an EFI shell. Type exit and you should get a menu. Enter the “Boot Manager” menu, select your boot disk and hit Enter. (See below.)

UEFI shell
UEFI shell (OVMF)
UEFI menu
UEFI menu (OVMF)
UEFI boot manager menu
UEFI boot manager menu (OVMF)

Now the Windows ISO boots and asks you to:
Press any key to start the CD / DVD…

Press a key!

Windows will then ask you to:
Select the driver to install

Click “Browse”, then select your virtio ISO image and go to “viostor“, open and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64-bit systems, click OK. (Note: Instead of the viostor driver, you can also install the vioscsi driver. See the QEMU documentation for the proper syntax in the qemu command.)

Windows will ask for the license key, and you need to specify how to install – choose “Custom”. Then select your drive (there should be only disk0) and install.

Windows may reboot several times. When done rebooting, open Device Manager and select the network interface. Right-click it and select “Update driver”. Then browse to the virtio driver CD and install NetKVM.

Windows should be looking for a display driver by itself. If not, install it manually.

Note: In my case, Windows did not correctly detect that my drives are SSDs. Not only will Windows 10 perform unnecessary disk optimization tasks, these “optimizations” can actually reduce SSD life and cause performance issues. To make Windows 10 detect the correct disk drive type, do the following:

1. Inside Windows 10, right-click the Start menu.
2. Select “Command prompt (admin)”.
3. At the command prompt, run:
winsat formal
4. It will run a while and then print the Windows Experience Index (WEI).
5. Please share your WEI in a comment below!

To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click “This PC” in the left tab.
3. Right-click your drive (e.g. C:) and select “Properties”.
4. Select the “Tools” tab.
5. Click “Optimize”
You should see something similar to this:

SSD optimization
Use Optimize Drives to optimize for SSD

In my case, I have drive C: (my Windows 10 system partition) and a “Recovery” partition located on an SSD, the other two partitions (“photos” and “raw_photos”) are using regular hard drives (HDD). Notice the “Optimization not available” 😀 .

Turn off hibernation and suspend! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn them off, follow the instructions for hibernation and suspend.

Turn off fast startup! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you’re screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.

By now you should have a working Windows VM with VGA passthrough.

Part 12 – Troubleshooting

Below are a number of common issues when trying to install/run Windows in a VGA passthrough environment.

System Thread Exception Not Handled

When you receive a Windows blue screen with the error:

System Thread Exception Not Handled

run the following command as root (a plain sudo is not enough here, because the redirection is performed by the shell):

sudo sh -c 'echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf'

See also Part 4 above.

No Video Output after GPU Passthrough

A number of people report that they can’t get video output on their passed-through GPU. This is a common issue on low-end and mid-range AMD Ryzen platforms (unfortunately I don’t own a Ryzen platform, so I can’t verify).

When booting these host platforms, the host UEFI initializes the GPU and makes a somewhat modified “shadow copy” of the GPU’s vBIOS. Later, when you start the VM, Linux exposes this crippled shadow BIOS to the guest’s UEFI loader. The same happens when you try to pass through your primary (and only) GPU to the guest. A telltale sign is the following error when running the VM start script:

qemu-system-x86_64: -device vfio-pci,host=02:00.0,multifunction=on: Failed to mmap 0000:02:00.0 BAR 3. Performance may be slow

There are several possible solutions all described in Explaining CSM, efifb=off, and Setting the Boot GPU Manually. The first solution to try is to enable CSM (Compatibility Support Module) in the motherboard BIOS. This may set another GPU as the primary GPU, leaving the passthrough GPU untouched. Run lspci | grep VGA to see that the PCI bus assignments haven’t changed.

If the above CSM solution doesn’t work, probably the best solution is to:

  1. Put the passthrough GPU in a secondary PCI slot;
  2. Temporarily install a graphics card into the primary PCI slot;
  3. Create a BIOS dump file off the passthrough GPU;
  4. Remove the temporary primary GPU, install the passthrough GPU and boot using the romfile=/…/GPU_BIOS.dump file we created.

Turn off the PC and unplug it from the mains. Remove the passthrough GPU from its slot, place it in another slot, and install another GPU in the primary GPU slot.

Now turn on the PC, open a terminal window, and enter:
lspci -v | grep VGA

You should see your two VGA cards, for example:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1) (prog-if 00 [VGA controller])
02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1) (prog-if 00 [VGA controller])

Note the PCI bus of the GPU you want to pass through (in the above example: 02:00.0). To unbind this card from the vfio-pci driver (in case it is bound), use these commands:
sudo -i
echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
Should you get an error, it just means the card wasn’t bound to the vfio-pci driver.

Now enter:
cd /sys/bus/pci/devices/0000:02:00.0/
(replace 02:00.0 with your PCI bus number) followed by:
echo 1 > rom

cat rom > /path/to/GPU_BIOS.dump

echo 0 > rom

In case your graphics card was bound to vfio-pci (no error message when unbinding), enter the following:
echo "0000:02:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
(replace 02:00.0 with your PCI bus).

Turn off the PC, disconnect from mains, and replace the temporary GPU with the passthrough GPU.

(Note: Instead of the above procedure, you can download your video BIOS from TechPowerUp and modify the vBIOS file as shown here.)

After booting the host once again, edit the VM start script and add the romfile option to the qemu -device command for your GPU:
-device vfio-pci,host=02:00.0,multifunction=on,romfile=/path/to/GPU_BIOS.dump \

AMD GPU doesn’t reset after VM shutdown

Some AMD graphics cards don’t reset properly after the VM is shut down. This requires a host reboot before the Windows VM can be run again. The following post describes a workaround – use at your own risk!

GPU passthrough reinitialization fix uses a Windows command line utility called devcon64.exe that essentially enables or disables hardware. Two Windows batch files are created, one that is run at Windows startup, the other when Windows shuts down. At the VM startup the graphics card video and audio are enabled, at shutdown they are disabled.

Audio – Crackling Sound

If you experience crackling sound in your VM, you should enable MSI in your Windows 10 VM. See Turn on MSI Message Signaled Interrupts in your VM.

If the above step does not solve the crackling sound issue, have a look at Mathias Hueber’s Virtual machine audio setup – or how to get pulse audio working.

Reddit user spheenik wrote a patch to get rid of the audio crackling. This patch is, to my best knowledge, incorporated in QEMU 3.0 and newer. If the above steps don’t solve the issue, and you are running a QEMU version prior to 3.0, you may want to compile QEMU 3.0 as shown here (please note that I haven’t tried it).

VM not starting – graphics driver

A common issue is that a driver binds to the graphics card we want to pass through. While writing this how-to and making changes to my (previously working) system, I suddenly couldn’t start the VM anymore. If you don’t get a black Tianocore screen, the first thing to check is whether the graphics card you want to pass through is bound to the vfio-pci driver:
dmesg | grep -i vfio
The output should be similar to this:
[ 2.735931] VFIO - User Level meta-driver version: 0.3
[ 2.757208] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.773223] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.437128] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)

The above example shows that the graphics card is bound to the vfio-pci driver (see last line), which is what we want. If the command doesn’t produce any output, or a very different one from above, something is wrong. To check further, enter:
lspci -k | grep -i -A 3 vga
Here is what I got when my VM wouldn’t want to start anymore:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
Subsystem: NVIDIA Corporation GF106GL [Quadro 2000]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

Graphics card 01:00.0 (Quadro 2000) uses the Nvidia driver – just what I want.

Graphics card 02:00.0 (GTX 970) also uses the Nvidia driver – that is NOT what I was hoping for. This card should be bound to the vfio-pci driver. So what do we do?

In Linux Mint, click the menu button, click “Control Center”, then click “Driver Manager” in the Administration section. Enter your password. You will then see the drivers associated with the graphics cards. Change the driver of the graphics card so it will use the open-source driver (in this example “Nouveau”) and press “Apply Changes”. After the change, it should look similar to the photo below:

Driver Manager
Driver Manager in Linux Mint

If the above doesn’t help, or if you can’t get rid of the nouveau driver, see if the nvidia-fallback.service is running. If yes, it will load the open-source nouveau driver whenever it can’t find the Nvidia proprietary driver. You need to disable it by running the following command with sudo:

systemctl disable nvidia-fallback.service

BSOD when installing AMD Crimson drivers under Windows

Several users on the Redhat VFIO mailing list have reported problems with the installation of AMD Crimson drivers under Windows. This seems to affect a number of AMD graphics cards, as well as a number of different AMD Crimson driver releases. A workaround is described here: https://www.redhat.com/archives/vfio-users/2016-April/msg00153.html

In this workaround the following line is added to the startup script, right above the definition of the graphics device:
-device ioh3420,bus=pci,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Should the above configuration give you a “Bus ‘pci’ not found” error, change the line as follows:
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Then change the graphics card passthrough options as follows:
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \

Identical graphics cards for host and guest

If you use two identical graphics cards for both the Linux host and the Windows guest, follow these instructions:

Modify the /etc/modprobe.d/local.conf file as follows:

install vfio-pci /sbin/vfio-pci-override-vga.sh

Create a /sbin/vfio-pci-override-vga.sh file with the following content:

#!/bin/sh

DEVS="0000:02:00.0 0000:02:00.1"

for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

Make the vfio-pci-override-vga.sh file executable:

chmod u+x /sbin/vfio-pci-override-vga.sh

Windows ISO won’t boot – 1

If you can’t boot the Windows ISO, it may be necessary to run a more recent version of QEMU to get features or workarounds that solve problems. If you require a newer version of QEMU (version 2.12 as of this update), add the following PPA (warning: this is not an official repository – use at your own risk). At the terminal prompt, enter:
sudo add-apt-repository ppa:jacob/virtualisation

The latest stable QEMU version as of this update is QEMU 3.1. See also above under “Audio – Crackling Sound“.

Windows ISO won’t boot – 2

Sometimes the OVMF BIOS files from the official Ubuntu repository don’t work with your hardware and the VM won’t boot. In that case you can download alternative OVMF files from here: http://www.ubuntuupdates.org/package/core/wily/multiverse/base/ovmf, or get the most updated version from here:
https://www.kraxel.org/repos/jenkins/edk2/
Download the latest edk2.git-ovmf-x64 file, as of this update it is “edk2.git-ovmf-x64-0-20180807.221.g1aa9314e3a.noarch.rpm” for a 64bit installation. Open the downloaded .rpm file with root privileges and unpack to /.
Copy the following files:
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd /usr/share/OVMF/OVMF_CODE.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /usr/share/OVMF/OVMF_VARS.fd

Check to see if the /usr/share/ovmf/OVMF.fd link exists, if not, create it:
sudo ln -s '/usr/share/ovmf/OVMF.fd' '/usr/share/qemu/OVMF.fd'

Windows ISO won’t boot – 3

Sometimes the Windows ISO image is corrupted or simply an old version that doesn’t work with passthrough. Go to https://www.microsoft.com/en-us/software-download/windows10ISO and download the ISO you need (see your software license). Then try again.

Motherboard BIOS bugs

Some motherboard BIOSes have bugs and prevent passthrough. Use “dmesg” and look for entries like these:
[ 0.297481] [Firmware Bug]: AMD-Vi: IOAPIC[7] not in IVRS table
[ 0.297485] [Firmware Bug]: AMD-Vi: IOAPIC[8] not in IVRS table
[ 0.297487] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found in IVRS table
[ 0.297490] AMD-Vi: Disabling interrupt remapping due to BIOS Bug(s)
If you find entries that point to a faulty BIOS or problems with interrupt remapping, go to Easy solution to get IOMMU working on mobos with broken BIOSes. (All credits go to leonmaxx on the Ubuntu forum!)

Intel IGD and arbitration bug

For users of Intel CPUs with IGD (Intel graphics device): The Intel i915 driver has a bug, which has necessitated a kernel patch named i915 vga arbiter patch. According to developer Alex Williamson, this patch is needed any time you have host Intel graphics and make use of the x-vga=on option. This tutorial, however, does NOT use the x-vga option; the tutorial is based on UEFI boot and doesn’t use VGA. That means you do NOT need the i915 vga arbiter patch! See http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html.

In some cases you may need to stop the i915 driver from loading by adding nomodeset to the following line in /etc/default/grub as shown:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset intel_iommu=on"

Then run:
sudo update-grub

nomodeset prevents the kernel from loading video drivers and tells it to use BIOS modes instead, until X is loaded.

IOMMU group contains additional devices

When checking the IOMMU groups, your graphics card’s video and audio parts should be the only two entries in their IOMMU group. The same goes for any other PCI device you want to pass through: you must pass through all devices within an IOMMU group, or none. If – aside from the PCI device(s) you wish to pass to the guest – there are other devices within the same IOMMU group, see What if there are other devices in my IOMMU group for a solution.

Dual-graphics laptops (e.g. Optimus technology)

Misairu_G (username on forum) published a Guide to VGA passthrough on Optimus laptops. You may want to consult that guide if you use a laptop with Nvidia graphics.

User bash64 on the Linux Mint forum has reported success with only minor modifications to this tutorial. The main deviations were:

  1. The nomodeset option in the /etc/default/grub file (see “Intel IGD and arbitration bug” above)
  2. Seabios instead of UEFI / ovmf
  3. Minor modifications to the qemu script file

Issues with Skylake CPUs

Another issue has come up with Intel Skylake CPUs. This problem has likely been solved by now. Update to a recent kernel (e.g. 4.18 or newer), as described above.

In case the kernel upgrade doesn’t solve the issue, see https://lkml.org/lkml/2016/3/31/1112 for an available patch. Another possible solution can be found here: https://teksyndicate.com/2015/09/13/wendells-skylake-pc-build-i7-6700k/.

Issues with AMD Threadripper CPUs

For some time the AMD Threadripper family of CPUs had been plagued with a bug that prevented proper GPU passthrough. The issue is described in this Reddit thread. Motherboard manufacturers have recently issued BIOS updates that solve the problem. Install the latest motherboard BIOS update.

AMD Ryzen freeze

Aside from poor IOMMU groupings which may be solved by upgrading the BIOS, AMD Ryzen CPUs have also been reported to freeze occasionally. This video shows how to fix it with a simple grub startup option:
rcu_nocbs=0-7

where 0-7 covers all threads of an 8-thread Ryzen CPU; adjust the range to match your CPU’s thread count (e.g. 0-15 for a 16-thread CPU). See also What the Linux rcu_nocbs kernel argument does (and my Ryzen issues again) and Fix Ryzen lockups related to low system usage for more information.
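Rather than counting threads by hand, the range can be derived from nproc (a sketch; run it on the host):

```shell
# Build the rcu_nocbs range from the host's hardware thread count:
threads=$(nproc)
echo "rcu_nocbs=0-$((threads - 1))"
```

Add the printed option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run sudo update-grub.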

qemu: hardware error: vfio: DMA mapping failed, unable to continue

When running the start script for the VM, the VM crashes with the error above. You may have to unplug and replug the mouse and keyboard USB cables to regain control over the PC. This happens when the user’s locked memory limit is too small.

Open a terminal and enter the following:
ulimit -a | grep locked

If you get
max locked memory (kbytes, -l) 16384
your locked memory limit is too low. See Run Windows VM in user mode (non-root) below for how to increase the locked memory.
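The check above can be automated. The sketch below assumes a 16 GiB guest (an assumption; substitute your VM's RAM size) and compares it against the current limit:

```shell
# Compare the current locked-memory limit against the guest's RAM:
limit=$(ulimit -l)                 # in KiB, or "unlimited"
needed=$((16 * 1024 * 1024))       # hypothetical 16 GiB guest, expressed in KiB

if [ "$limit" = "unlimited" ]; then
    echo "locked memory limit OK: unlimited"
elif [ "$limit" -ge "$needed" ]; then
    echo "locked memory limit OK: $limit KiB"
else
    echo "locked memory limit too low: $limit KiB (need $needed KiB)"
fi
```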

No Solution Found

If you haven’t found a solution to your problem, check the References. You are also welcome to leave a comment and I or someone else will try to help.

Part 13 – Run Windows VM in user mode (non-root)

Running your Windows VM in user mode (non-root) has become easy:

  1. Add your user to the kvm group:
    sudo usermod -a -G kvm myusername
    Note: Always replace “myusername” with your user name.
  2. Reboot (or logout and login) to see your user in the kvm group.
  3. If you use hugepages, make sure they are properly configured. Open your /etc/fstab file and compare your hugetlbfs configuration with the following:
    hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0
    Note that the gid=129 might be different for your system. The rest should be identical!
    Now enter the following command:
    getent group kvm
    This should return something like:
    kvm:x:129:myusername
    The group ID (gid) number matches. If not, edit the fstab file to have the gid= entry match the gid number you got using getent.
  4. Edit the file /etc/security/limits.conf and add the following lines:
    @kvm soft memlock unlimited
    @kvm hard memlock unlimited
  5. Edit the file /etc/security/limits.d/hugepages.conf and add the following lines (if not present):
    @hugepages hard memlock unlimited
    @hugepages soft memlock unlimited
  6. Edit the Windows VM start script and add the following line below the entry “cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd“:
    chown myusername:kvm /tmp/my_vars.fd
    Next add the following entry right under the qemu-system-x86_64 \ entry, on a separate line:
    -runas myusername \
    Save the file and start your Windows VM. You will still need sudo to run the script, since it performs some privileged tasks, but the guest will run in user mode with your user privileges.
  7. After booting into Windows, switch to the Linux host and run in a terminal:
    top
    Your output should be similar to this:

    kvm Windows VM
    top shows Windows 10 VM running with user privileges

Notice the win10vm entry associated with my user name “heiko” instead of “root”.
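The gid comparison from step 3 can be scripted as well. This is only a sketch: the fstab line is given as a sample string here; in practice read the real one from /etc/fstab:

```shell
# gid used in the hugetlbfs mount options (sample line, for illustration):
fstab_line="hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0"
fstab_gid=$(printf '%s' "$fstab_line" | sed -n 's/.*gid=\([0-9]*\).*/\1/p')
echo "fstab gid: $fstab_gid"

# gid of the kvm group (empty if the group doesn't exist on this system):
kvm_gid=$(getent group kvm | cut -d: -f3)
if [ "$fstab_gid" = "$kvm_gid" ]; then
    echo "gids match"
else
    echo "gids differ - edit the gid= entry in /etc/fstab"
fi
```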

Please report if you encounter problems (use comment section below).

For other references, see the following tutorial: https://www.evonide.com/non-root-gpu-passthrough-setup/.

Part 14 – Passing more PCI devices to guest

If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. Moreover, DO NOT PASS root devices to your guest. To check which PCI devices reside under the same group, use the following command:
find /sys/kernel/iommu_groups/ -type l
The output on my system is:
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.2
/sys/kernel/iommu_groups/4/devices/0000:00:05.4
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/7/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.1
/sys/kernel/iommu_groups/11/devices/0000:00:1c.2
/sys/kernel/iommu_groups/12/devices/0000:00:1c.3
/sys/kernel/iommu_groups/13/devices/0000:00:1c.4
/sys/kernel/iommu_groups/14/devices/0000:00:1c.7
/sys/kernel/iommu_groups/15/devices/0000:00:1d.0
/sys/kernel/iommu_groups/16/devices/0000:00:1e.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.2
/sys/kernel/iommu_groups/17/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:01:00.0
/sys/kernel/iommu_groups/18/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/19/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:05:00.0
/sys/kernel/iommu_groups/20/devices/0000:06:04.0

As you can see in the above list, some IOMMU groups contain multiple devices on the PCI bus. I wanted to see which devices are in IOMMU group 17 and used the PCI bus ID:
lspci -nn | grep 00:1f.
Here is what I got:
00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)

All of the listed devices are used by my Linux host:
– The ISA bridge is a standard device used by the host. You do not pass it through to a guest!
– All my drives are controlled by the host, so passing through a SATA controller would be a very bad idea!
– Do NOT pass through a host controller, such as the C600/X79 series chipset SMBus Host Controller!
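To make the group-to-device relationships easier to read, the find output can be summarized with awk. The sketch below pipes in three sample paths instead of the live find output:

```shell
# Summarize devices per IOMMU group (sample paths shown; in practice pipe in
# the output of: find /sys/kernel/iommu_groups/ -type l):
printf '%s\n' \
  /sys/kernel/iommu_groups/19/devices/0000:02:00.0 \
  /sys/kernel/iommu_groups/19/devices/0000:02:00.1 \
  /sys/kernel/iommu_groups/20/devices/0000:05:00.0 |
awk -F/ '{g[$5] = g[$5] " " $7} END {for (i in g) print "group " i ":" g[i]}'
```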

In order to pass through individual PCI devices, edit the VM startup script and insert the following code underneath the vmname=… line:
configfile=/etc/vfio-pci.cfg

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

Underneath the line containing “else”, insert:

cat $configfile | while read line; do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done

You need to create a vfio-pci.cfg file in /etc containing the PCI bus numbers as follows:
0000:00:1a.0
0000:08:00.0
Make sure the file does NOT contain any blank line(s). Replace the PCI IDs with the ones you found!
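A quick way to validate the file format is a grep over the expected PCI address pattern. This sketch writes a hypothetical sample to /tmp instead of reading /etc/vfio-pci.cfg:

```shell
# Sample content; in practice check /etc/vfio-pci.cfg instead:
printf '0000:00:1a.0\n0000:08:00.0\n' > /tmp/vfio-pci.cfg.sample

# Every line must be a full PCI address (dddd:bb:dd.f); blank lines fail too:
if grep -Evq '^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$' /tmp/vfio-pci.cfg.sample; then
    echo "malformed or blank line found:"
    grep -Evn '^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$' /tmp/vfio-pci.cfg.sample
else
    echo "config OK"
fi
```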

Part 15 – References

For documentation on qemu/kvm, see the following directory on your Linux machine: /usr/share/doc/qemu-system-common

https://passthroughpo.st/ – a new “online news publication with a razor focus on virtualization and linux gaming, as well as developments in open source technology”

https://www.reddit.com/r/VFIO/ – the Reddit r/VFIO subreddit to discuss all things related to VFIO and gaming on virtual machines in general

https://davidyat.es/2016/09/08/gpu-passthrough/ – a well written tutorial offering qemu script and virt-manager as options

https://blog.zerosector.io/2018/07/28/kvm-qemu-windows-10-gpu-passthrough/ – a straightforward tutorial for Ubuntu 18.04 using virt-manager

http://mathiashueber.com/amd-ryzen-based-passthrough-setup-between-xubuntu-16-04-and-windows-10/ – nice and detailed tutorial for a Ryzen based system using the Virtual Machine Manager GUI

https://ycnrg.org/vga-passthrough-with-ovmf-vfio/ – a Ubuntu 16.04 tutorial using virt-manager.

https://qemu.weilnetz.de/doc/qemu-doc.html – QEMU user manual

https://bbs.archlinux.org/viewtopic.php?id=162768 – this gave me the inspiration – used to be the best thread on kvm, now somewhat outdated! See the VFIO Reddit group above.

http://ubuntuforums.org/showthread.php?t=2266916 – Ubuntu tutorial.

https://wiki.archlinux.org/index.php/QEMU – Arch Linux documentation on QEMU – by far the best.

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF – PCI passthrough via OVMF tutorial for Arch Linux – provides excellent information.

https://aur.archlinux.org/cgit/aur.git/tree/?h=linux-vfio – a source for the ACS and i915 arbiter patches.

http://vfio.blogspot.com/2014/08/vfiovga-faq.html – one of the developers, Alex provides invaluable information and advice.

http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html

http://www.linux-kvm.org/page/Tuning_KVM – Redhat is the key developer of kvm, their website has lots of information, but is often a little outdated.

https://wiki.archlinux.org/index.php/KVM – Arch Linux KVM page.

https://www.suse.com/documentation/sles11/book_kvm/data/part_2_book_book_kvm.html – Suse Linux documentation on KVM – good reference.

https://www.evonide.com/non-root-gpu-passthrough-setup/ – haven’t tried it, but looks like a good tutorial.

https://forum.level1techs.com/t/gta-v-on-linux-skylake-build-hardware-vm-passthrough/87440 – tutorial with Youtube video to go along, very useful and up-to-date, including how to apply ACS override patch.

https://gitlab.com/YuriAlek/vfio – single GPU passthrough with QEMU and VFIO.

https://libvirt.org/format.html and https://libvirt.org/formatdomain.html – if you want to play with virt-manager, you’ll need to dabble in libvirt.

Below is the VM startup script I use, for reference only.
Note: The script is specific for my hardware. Don’t use it without modifying it!

#!/bin/bash

configfile=/etc/vfio-pci.cfg
vmname="win10vm"

vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id

}

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
zenity --info --window-icon=info --timeout=15 --text="A VM is already running." &
exit 1

else

#modprobe vfio-pci

cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

# use pulseaudio
#export QEMU_AUDIO_DRV=pa
#export QEMU_PA_SAMPLES=8192
#export QEMU_AUDIO_TIMER_PERIOD=100
#export QEMU_PA_SERVER=/run/user/1000/pulse/native
#export QEMU_PA_SINK=alsa_output.pci-0000_06_04.0.analog-stereo
#export QEMU_PA_SOURCE=input

#use ALSA
export QEMU_AUDIO_DRV=alsa
export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
export QEMU_AUDIO_TIMER_PERIOD=50

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
chown heiko:kvm /tmp/my_vars.fd

#taskset -c 0-9
qemu-system-x86_64 \
-runas heiko \
-monitor stdio \
-serial none \
-parallel none \
-nodefaults \
-nodefconfig \
-name $vmname,process=$vmname \
-machine q35,accel=kvm,kernel_irqchip=on \
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
-smp 12,sockets=1,cores=6,threads=2 \
-m 16G \
-mem-path /dev/hugepages \
-mem-prealloc \
-balloon none \
-rtc base=localtime,clock=host \
-soundhw hda \
-vga none \
-nographic \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=00:1a.0 \
-device vfio-pci,host=08:00.0 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=c \
-object iothread,id=io1 \
-device virtio-blk-pci,drive=disk0,iothread=io1 \
-drive if=none,id=disk0,cache=none,format=raw,aio=threads,file=/dev/mapper/lm13-win10 \
-device virtio-blk-pci,drive=disk1,iothread=io1 \
-drive if=none,id=disk1,cache=none,format=raw,aio=native,file=/dev/mapper/photos-photo_stripe \
-device virtio-blk-pci,drive=disk2,iothread=io1 \
-drive if=none,id=disk2,cache=none,format=raw,aio=native,file=/dev/mapper/media-photo_raw \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

#EOF

#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio-win-0.1.140.iso,index=3,media=cdrom \
#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/Win10_1803_English_x64.iso,index=4,media=cdrom \

#-global ide-drive.physical_block_size=4096 \
#-global ide-drive.logical_block_size=4096 \
#-global virtio-blk-pci.physical_block_size=512 \
#-global virtio-blk-pci.logical_block_size=512 \

exit 0
fi


The command
taskset -c 0-9 qemu-system-x86_64...

pins the vCPUs of the guest to processor threads 0-9 (I have a 6-core CPU with 2 threads per core, i.e. 12 threads). Here I assign 10 of the 12 threads to the guest. While the guest is running, the host has to make do with only 1 core (2 threads). CPU pinning may improve performance of the guest.

Note: I am currently passing through all cores and threads, without CPU pinning. This seems to give me the best results in the benchmarks, as well as real-life performance.

Part 16 – Related Posts

Here is a list of related posts:

Developments in Virtualization

Virtual Machines on Userbenchmark

qemu-system-x86_64 Drive Options

Part 17 – Benchmarks

I have a separate post showing Passmark benchmarks of my system.

Here are the UserBenchmark results for my configuration:

UserBenchmarks: Game 60%, Desk 71%, Work 64%
CPU: Intel Core i7-3930K – 79.7%
GPU: Nvidia GTX 970 – 60.4%
SSD: Red Hat VirtIO 140GB – 74.6%
HDD: Red Hat VirtIO 2TB – 64.6%
HDD: Red Hat VirtIO 2TB – 66.1%
RAM: QEMU 20GB – 98.2%
MBD: QEMU Standard PC (Q35 + ICH9, 2009)

Part 18 – Performance Tuning

I keep updating this chapter, so expect more tips to be added here in the future.

Enable Hyper-V Enlightenments

As funny as this sounds, this is another way to improve Windows performance under kvm. Hyper-V enlightenments are easy to implement: In the script that starts the VM, change the following line:
-cpu host,kvm=off \

to:
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \

The above is one line! To check that it actually works, start your Windows VM and switch to Linux. Open a terminal window and enter (in one line):
ps -aux | grep qemu | grep -Eo 'hv_relaxed|hv_spinlocks=0x1fff|hv_vapic|hv_time'

You should get the following output:
hv_vapic
hv_time
hv_relaxed
hv_spinlocks=0x1fff

For more on Hyper-V enlightenments, see here.

Configure hugepages

This step is not required to run the Windows VM, but helps improve performance. First we need to decide how much memory we want to give to Windows. Here are my suggestions:

  1. No less than 4GB. Use 8GB or more for a Windows gaming VM.
  2. If you have 16GB total and aren't running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.

For this tutorial I use 8GB. Hugepages are enabled by default in the latest releases of Linux Mint (since 18) and Ubuntu (since 16.04). For more information or if you are running an older release, see KVM – Using Hugepages.

Let’s see what we got:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 0 0 0 *
1073741824 0 0 0

As you can see, hugepages are mounted to /dev/hugepages, and the default hugepage size is 2097152 Bytes/(1024*1024)=2MB.

Another way to get information about hugepages:
grep "Huge" /proc/meminfo

AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Here is the math:
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB

Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages
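The same arithmetic can be done in a quick shell one-off; a minimal sketch, assuming the 2MB default hugepage size reported by hugeadm above:

```shell
# Hugepages needed = VM memory / hugepage size.
# Assumes the 2 MB default page size shown earlier; adjust if yours differs.
vm_mem_mb=8192     # 8 GB for the Windows VM
hugepage_mb=2      # default hugepage size on x86_64
echo $(( vm_mem_mb / hugepage_mb ))   # → 4096
```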

We configure the hugepage pool during boot. Open the file /etc/default/grub as root:
xed admin:///etc/default/grub

Look for the GRUB_CMDLINE_LINUX_DEFAULT=”…” line we edited before and add:
hugepages=4096

This is what I have:
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on hugepages=4096"

Save and close. Then run:
sudo update-grub

Now reboot for our hugepages configuration to take effect.

After the reboot, run in a terminal:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 4096 4096 4096 *
1073741824 0 0 0

Huge page sizes with configured pools:
2097152

The /proc/sys/vm/min_free_kbytes of 67584 is too small. To maximise efficiency of fragmentation avoidance, there should be at least one huge page free per zone in the system which minimally requires a min_free_kbytes value of 112640

A /proc/sys/kernel/shmmax value of 17179869184 bytes may be sub-optimal. To maximise shared memory usage, this should be set to the size of the largest shared memory segment size you want to be able to use. Alternatively, set it to a size matching the maximum possible allocation size of all huge pages. This can be done automatically, using the --set-recommended-shmmax option.

The recommended shmmax for your currently allocated huge pages is 8589934592 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
kernel.shmmax = 8589934592

To make your hugetlb_shm_group settings persistent, add the following line to /etc/sysctl.conf:
vm.hugetlb_shm_group = 129

Note: Permanent swap space should be preferred when dynamic huge page pools are used.
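The recommended shmmax value is simply the size of the hugepage pool in bytes; a quick sanity check of the arithmetic, assuming the 4096 x 2MB pool configured above:

```shell
# shmmax recommendation = number of hugepages * hugepage size in bytes
pages=4096
page_bytes=$(( 2 * 1024 * 1024 ))   # 2 MB per hugepage
echo $(( pages * page_bytes ))      # → 8589934592
```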

Note the sub-optimal shmmax value. We fix it permanently by editing /etc/sysctl.conf:
xed admin:///etc/sysctl.conf

and adding the following lines:
kernel.shmmax = 8589934592
vm.hugetlb_shm_group = 129
vm.min_free_kbytes = 112640

Note 1: Use the values recommended by hugeadm --explain!

Regarding vm.hugetlb_shm_group = 129: “129” is the GID of the group kvm. Check with:
getent group kvm
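getent prints colon-separated fields (name:password:GID:members); the GID is the third field. A small sketch using a sample line (the GID on your system may well differ from 129):

```shell
# Sample getent output; on a real system use: line=$(getent group kvm)
line="kvm:x:129:"
kvm_gid=$(echo "$line" | cut -d: -f3)
# This is the value to use for vm.hugetlb_shm_group and the fstab gid= option
echo "$kvm_gid"   # → 129
```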

Run sudo sysctl -p to put the new settings into effect. Then edit the /etc/fstab file to configure the hugepages mount point with permissions  and group ID (GID):
xed admin:///etc/fstab

Add the following line to the end of the file and save:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0

It’s best to add your user to the kvm group (GID 129), so you’ll have permission to access the hugepages:
sudo usermod -a -G kvm user
where “user” is your user name. Log out and back in for the group change to take effect.

Check the results with hugeadm --explain.

Now we need to edit the windows10vm.sh script file that contains the qemu command and add the following lines under -m 8G \:
-mem-path /dev/hugepages \
-mem-prealloc \

Reboot the PC for the fstab changes to take effect.
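After the reboot, you can watch the pool being consumed: HugePages_Total should show the configured 4096, and HugePages_Free should drop once the VM is running. For example:

```shell
# Show the hugepage counters; with the VM running, Free drops while Total stays at 4096
grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
```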

Turn on MSI Message Signaled Interrupts in your VM

Developer Alex Williamson argues that MSI Message Signaled Interrupts may provide a more efficient way to handle interrupts. A detailed description on how to turn on MSI in a Windows VM can be found here: Line-Based vs. Message Signaled-Based Interrupts.

Make sure to backup your entire Windows installation, or at least define a restore point for Windows.

In my case it improved sound quality (no more crackle), others have reported similar results – see these comments.

Important: With every major Windows 10 update (e.g. 1803 to 1809), Microsoft’s gifted software engineers manage to reverse your MSI settings. So after an update, you have to do this step all over again. 😡

Tuning VM Disk Performance

I’ve written a separate post on tuning VM disk performance. Under kvm, disk performance tuning can offer a dramatic read/write speed boost. My post describes different scenarios and which configuration might work best. As with every tuning step, take benchmarks to verify it actually works for you!

Low 2D Graphics Performance

Recent Windows updates have added protection against the Spectre vulnerability, by means of an Intel microcode update. This update has caused a significant drop in 2D graphics performance under Windows.

The reason for the performance drop might be a bug in kvm/qemu, and the work-around described in my separate post should only be used in emergencies, if and when 2D graphics performance is essential to the applications you run.

SR-IOV and IOMMU Pass Through

Some devices support a feature called SR-IOV, or Single Root Input/Output Virtualisation. It allows a single PCIe device to expose multiple virtual functions (VFs) that can be assigned to different virtual machines, thus improving performance. See Understanding the iommu Linux grub File Configuration.

The SR-IOV feature needs to be enabled in the BIOS, as well as the drivers. See here for an example.

In some cases performance can be further improved by adding the “pass through” option iommu=pt to the /etc/default/grub file:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Followed by sudo update-grub

CPU pinning

Many modern CPUs offer hardware multitasking, known as “hyper-threading” in Intel jargon or “SMT” in AMD talk. Those CPUs run two threads on each core, switching between the threads in an efficient way. It’s almost like having twice the number of cores, but not quite so. Each core can only process one thread at a time. Hyper-threading still helps CPU performance because our PC needs to run multiple tasks simultaneously, and task-switching goes faster with hardware support.

Some tasks, however, can be negatively affected by this switching back and forth. One example is high-speed input/output (IO). Linux enables us to dedicate (“pin”) a core to such tasks, so that the task won’t have to share CPU resources with other tasks.

To discover the CPU topology on your PC, use the following command:

lscpu -e

The output on my Intel i7 3930K CPU is this:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    5700.0000 1200.0000
1   0    0      1    1:1:1:0       yes    5700.0000 1200.0000
2   0    0      2    2:2:2:0       yes    5700.0000 1200.0000
3   0    0      3    3:3:3:0       yes    5700.0000 1200.0000
4   0    0      4    4:4:4:0       yes    5700.0000 1200.0000
5   0    0      5    5:5:5:0       yes    5700.0000 1200.0000
6   0    0      0    0:0:0:0       yes    5700.0000 1200.0000
7   0    0      1    1:1:1:0       yes    5700.0000 1200.0000
8   0    0      2    2:2:2:0       yes    5700.0000 1200.0000
9   0    0      3    3:3:3:0       yes    5700.0000 1200.0000
10  0    0      4    4:4:4:0       yes    5700.0000 1200.0000
11  0    0      5    5:5:5:0       yes    5700.0000 1200.0000

Note: My CPU is overclocked, hence the high MAXMHZ value.

Note the CORE column (column 4): it shows which physical core each logical CPU (column 1, CPU) actually runs on. With this Intel processor, CPUs 0 and 6 share core 0.
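The sibling pairs can be derived mechanically by grouping CPUs on the CORE column; a small awk sketch, fed here with sample "CPU CORE" pairs taken from the table above (on a real system you would pipe in `lscpu -e=CPU,CORE | tail -n +2` instead):

```shell
# Group logical CPUs by the physical core they share.
printf '%s\n' '0 0' '6 0' '1 1' '7 1' |
awk '{ cpus[$2] = cpus[$2] " " $1 } END { for (c in cpus) print "core " c ":" cpus[c] }' |
sort
# prints:
#   core 0: 0 6
#   core 1: 1 7
```

This is handy when deciding which thread pairs to hand to the guest and which to leave to the host.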

The performance gain (or loss) of CPU pinning depends on your hardware and on what you are doing with the Windows VM. A good benchmark of different tasks can be found here: CPU Pinning Benchmarks.

Some users report that CPU pinning helped improve latency, but sometimes at the cost of performance. Here is another useful post: Best pinning strategy for latency / performance trade-off.

A good explanation on CPU pinning and other performance improvements can be found here: Performance tuning.

On my PC I do NOT use CPU pinning. It is tricky at best, and whatever I tried it did not improve but rather reduce performance. Important: The effects of CPU pinning are highly individual and depend on what you want to achieve. For a gaming VM, it might help improve performance (see CPU Pinning Benchmarks).

Kernel 4.17 and Qemu 3.0 Improvements

If you encounter frame drops, latency issues, or have issues with your VR, check your interrupts:

watch -n1 "cat /proc/interrupts"

If you see high RES numbers (like in the millions), see the Interrupt tuning and issue with high rescheduling interrupt counts post. If you follow it, you’ll see that upgrading the kernel to 4.17 and using Qemu 3.0 may help. How to set up QEMU 3.0 on Ubuntu 18.04 provides further instructions.

Help Support this Website

If you find this information helpful, consider a contribution.


83 thoughts on “Running Windows 10 on Linux using KVM with VGA Passthrough”

  1. Hello Mathias,
    I have been traveling a lot lately, so forgive my late reply. In answer to your question: I don’t use a virsh windows10.xml file, just the script and qemu.
    Most other tutorials use libvirt and virt-manager to create and manage the VM, but I had not much luck with it.

    Heiko

  2. Thanks for the write-up! Still struggling through it on a Metabox (Clevo) P950ER

    I have Ubuntu 18.04 and have spent far too much time trying to disable the nouveau driver from loading.

    – kernel parameters tried:
    — modprobe.blacklist=nouveau
    — nouveau.blacklist=1
    – modprobe.d/* lines:
    — blacklist nouveau
    — options nouveau modeset=0
    — install nouveau /bin/true
    — alias nouveau off

    After all that I found there was a “nvidia-fallback.service” which tries to load nouveau if nvidia isn’t.
    One-off command to disable it:

    # systemctl disable nvidia-fallback.service

    cheers,
    Woody

  3. Hi guys,

    I just attempted this cool project with:
    CPU: intel core i5-6600k
    MEM: 16GB DDR4
    MB: AsRock Z170 Pro4
    Vid Primary: intel IGP
    Vid VM: EVGA Geforce GTX 1060
    HD’s: A couple spinning drives kicking around
    OS: Linux Mint 19

    I was able to get the script fully functional because I want to use an entire drive instead of an image, but for some reason using /dev/sdb1 wouldn’t work no matter how it was formatted, so I adapted your tutorial to libvirt and virt-manager and was able to get Windows installed with networking and a separate keyboard/mouse transferring over. The only issue I am having is that my graphics card is throwing up a Code 43 error in windows, but I’m not sure what to look for to solve the issue and get the Nvidia drivers to actually work. Any help you can give would be greatly appreciated!

    Thanks,

    Clayton

  4. Ahh this seems the right place, I think my previous post was in the wrong place. So I got windows installed in my VM. The issue I am currently having is I can get EITHER my Keyboard OR Mouse to pass through, but not both. Both devices are on Bus 002 and every time I issue the -device command it seems to overwrite the previous -device command. How can I get them both to work at the same time? As always ANY response is greatly appreciated!

  5. @Brian:
    Make sure to follow part 3 and 4 by the letter. First identify the USB IDs of the keyboard and the mouse. They should be something similar to 045e:076c (the first part 045e is the vendor code, the second 076c is the device code). For two devices you need two such entries.
    You need to edit or create /etc/modprobe.d/local.conf and add something like:
    options vfio-pci ids=10de:13c2,10de:0fbb

    Change the IDs to reflect your own vendor and product IDs.

    Another way to pass through mouse and keyboard is by passing through a USB host controller. This is a PCI passthrough using the -device option in the qemu command. First you need to determine the USB controller:
    lspci | grep USB
    My output is:
    00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 05)
    00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 05)
    07:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    09:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    0f:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    To use the USB controller at PCI bus 08:00.0, you need to do the following:
    1. Edit the start script and add underneath the vmname entry:
    configfile=/etc/vfio-pci.cfg

    vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
    echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    }

    if ps -A | grep -q $vmname; then
    echo "$vmname is already running."
    exit 1
    else
    cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
    done

    In addition, you need to add the following line to the qemu command in the script (just underneath the graphics card device):
    -device vfio-pci,host=08:00.0 \

    Save the script file.
    Then you need to create the /etc/vfio-pci.cfg and enter the following:
    0000:08:00.0

    There must not be any blank line, or any comment, just one single line specifying the PCI bus of the USB controller.

    The USB controller must reside within its own IOMMU group. To check, use the following command:
    for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

    Hope this helps.

  6. I have been able to get Win10 guest working well on LM19 host, but not without some struggle…particularly with keyboard and mouse passthrough. After spending some time with the qemu 2.11.1 manpages (and Google), I came up with a solution which works well, doesn’t require explicit bus mapping, and allows me to switch back and forth between guest and host using normal KVM window release (CTRL+ALT).

    My setup is single monitor – I use CPU graphics for the host, and pass through my Nvidia card to the guest – both are wired to my monitor, and once I have launched my VM and clicked inside the window to initiate mouse and kbd capture, I can then switch inputs on my monitor to run Win10 fullscreen from the discrete card output. This would of course work equally well on a multiple-monitor setup. Here is how I got it working –

    As you noted in your post, QEMU syntax for USB device passthrough has changed since your original posting. The -device parameter is now used to define USB device passthrough, and I use the following syntax in my script file:

    -usb \
    -device usb-kbd \
    -device usb-mouse \

    I believe that code was added to QEMU which allows for keyboard and mouse detection without having to explicitly map the bus and device strings.

    Using these keyboard and mouse passthrough options and by removing the -vga none and -nographic parameters, I am able to switch back and forth between VM and host without a KVM/USB switch…just have to change the input on my monitor.

  7. I’m still struggling with my laptop (Metabox (Clevo) P950ER) to pass through the nvidia GPU. I think the bios is in the system bios so I’ll need to do custom ACPI table things to make it work which I’m not sure anybody has succeeded with a windows guest.
    There is another direction possible for those in my position: most of the intel CPUs with built in GPUs have virtualization features which can time-slice the GPU (this is called GVT-g), and the parts necessary are in kernel 4.16 and qemu 2.12 https://www.phoronix.com/scan.php?page=news_item&px=Intel-vGPU-Linux-4.16-QEMU-2.12 which is what I’ll be trying next…

  8. @Chris:
    -usb \
    -device usb-kbd \
    -device usb-mouse \

    Thanks for your comment. I’ve actually tried your approach and it failed for me (using a Logitech wireless multi-device mouse and keyboard). However, with a regular mouse and keyboard your script should work just fine.

  9. @Pipandwoody:

    You most likely have a laptop using Nvidia “Optimus” technology. This can be tricky, though some people actually succeeded! Follow the links found here:
    https://heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#Dual-graphics_laptops_eg_Optimus_technology

    EDIT: If I remember correctly, you need to use the Seabios method, not the UEFI boot method I describe in this tutorial. There are instructions on how to do that in the links I provided.

    The vGPU support is indeed good news. HOWEVER, why would you choose the low-performance Intel IGD (integrated graphics device) inside the CPU over the more powerful discrete (Nvidia) GPU? Note that this vGPU technology only works with Intel now, not Nvidia. Nvidia has its own vGPU technology for its professional line of multi-user graphics cards (see https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html). But that has nothing to do with your laptop GPUs.

    Moreover, you would need to upgrade both your kernel (the current Linux Mint 19 / Ubuntu 18.04 kernel is 4.15) and qemu (2.11 as of now).

    Unless you have a good reason to try vGPU, I would try the other options I mentioned.

    1. I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…

    2. Pipandwoody: “I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…”

      That’s a bummer.

  10. Benchmark results – Win10/8GB/SSD/GTX1070/6770K – no overclock

    UserBenchmarks:
    CPU: Intel Core i7-6700K – 98.6%
    GPU: Nvidia GTX 1070 – 103.4%
    SSD: Red Hat VirtIO 161GB – 147.8%
    RAM: QEMU 1x8GB – 97.8%
    MBD: QEMU Standard PC (Q35 + ICH9, 2009)

    Passmark:
    Rating -5360.2
    CPU-12140.1
    G2D-705.0
    G3D-12496.3
    Mem-2994.6
    Disk-10499.7

    Overall I am pleased with the performance…just need to eliminate the sound crackling issue and can call it done.

    1. That’s great news! Performance looks good, too.

      Regarding sound crackling: I use the following in my script:

      export QEMU_AUDIO_DRV=alsa
      export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
      export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
      export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
      export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
      export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
      export QEMU_AUDIO_TIMER_PERIOD=50

      Then in the qemu section:
      -soundhw hda

      See also my tuning section https://heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#Turn_on_MSI_Message_Signaled_Interrupts_in_your_VM
      and turn on MSI in the Windows guest. I think that should do it.

  11. Please note I updated my start script to improve the VM detection. The new code is:
    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "VM running"
    else
    echo "Start the VM"

    fi

    I wanted to see if ANY passthrough VM is running, as I have now two different passthrough VMs (Windows 10 and Linux Mint 19) that obviously cannot run at the same time.

  12. I followed instructions for creating a network bridge by adding br0 to the /network/interfaces file, if you know what I mean. When I get to creating the VM with your script, should I be changing net0 to br0? I wasn’t quite sure how to set up that line in your script for the network.

    1. @David A:

      My command line is:
      -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

      Here is the output of brctl show, with no VM running:
      brctl show
      bridge name bridge id STP enabled interfaces
      bridge0 8000.c860003147df yes eno1
      virbr0 8000.52540004e1c8 yes virbr0-nic

      As you can see, you don’t use the name of the bridge you created in the qemu script.

  13. Also extremely extremely important problem in the guide!!!!

    Windows will then ask you to:
    Select the driver to install

    Click “Browse”, then select your VFIO ISO image and go to “vioscsi”, open and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64 bit systems, click OK.

    Vioscsi should be viostor….

    When navigating the iso the correct driver is in the viostor folder

    1. I fixed the tutorial.

      In practice, I haven’t found any significant performance difference between the two drivers. And I still need to get the hang of the new qemu syntax, but haven’t got the time to play around with it.

  14. Hey there,

    Sadly I was not yet able to get a vm to work. I was able to follow along until part 10, but there the trouble started.

    The “-cpu host,kvm=off \” part gets me “QEMU 2.11.1 monitor – type ‘help’ for more information” in the Terminal.
    Without the -cpu part, a QEMU windows opens; with “Boot failed: could not read from CDROM (code 0003)” as an error. This happens with booth a Windows 10 and ubuntu 18.04 iso file.
    After closing the QEMU window the following messages appear in the terminal:
    “-smp: command not found”
    “-device: command not found” which is my GTX 970.

    I hope you can point me in the right direction. I really want to make this work.

    Regards SaBe3.1415

    The script:
    #!/bin/bash

    vmname="windows10vm"

    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "A passthrough VM is already running." &
    exit 1

    else

    # use pulseaudio
    export QEMU_AUDIO_DRV=pa
    export QEMU_PA_SAMPLES=8192
    export QEMU_AUDIO_TIMER_PERIOD=99
    export QEMU_PA_SERVER=/run/user/1000/pulse/native

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    qemu-system-x86_64 \
    -name $vmname,process=$vmname \
    -machine type=q35,accel=kvm \
    -cpu host,kvm=off \
    -smp 6,sockets=1,cores=3,threads=2 \
    -m 8G \
    -balloon none \
    -rtc clock=host,base=localtime \
    -vga none \
    -nographic \
    -serial none \
    -parallel none \
    -soundhw hda \
    #-usb-host,vendorid=1b1c,productid=1b2d \
    #-usb-host,hostbus=001,hostaddr=003 \
    #-usb-host,vendorid=214e,productid=0005 \
    #-usb-host,hostbus=001,hostaddr=005 \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
    -boot order=dc \
    -drive id=disk0,if=virtio,cache=none,format=raw,file=/home/toru/Downloads/Win.img \
    -drive file=/home/toru/Downloads/ubuntu-18.04.1-desktop-amd64.iso,index=1,media=cdrom \
    -drive file=/home/toru/Downloads/virtio-win-0.1.160.iso,index=2,media=cdrom \
    -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

    exit 0
    fi

    Hardware:
    Motherboard: ASUS ROG MAXIMUS VIII HERO
    OS: Linux Mint 19 Cinnamon version 3.8.9
    Linux Kernel: 4.15.0-34-generic
    CPU: Intel i7 6700k
    RAM: 16GB
    GPU for the vm: GeForce GTX 970
    GPU for Mint : GeForce 210

    1. Remove the #-usb-host,vendorid=1b1c,productid=1b2d lines from your qemu command. If I remember correctly the qemu command doesn’t allow comments within the command string.

      You cannot use the virtio Windows drivers for Ubuntu.

      It happened to me too that I got the Qemu monitor and needed to continue by selecting the image to boot from. I’m traveling and cannot check the commands or sequence, but I hope this gets you further. Make sure your GTX970 has a UEFI BIOS and that it is within its own IOMMU group. There is a separate post on that.

  15. Heiko, thank your for an amazing guide.
    I have some troubles with the VGA configuration, an on-board Intel VGA chip and a PCI MSI Geforce adapter. I don’t have the exact specs here right now, so I wonder if you maybe can help me with troubleshooting once I am back home with the computer in front of me?
    Best regards
    Anders/Sweden

    1. Yeah, I would need some more info. As advice, start by checking that your graphics card has a UEFI BIOS (if it’s an older one, it might not). Then you need to make sure that your PC boots using the internal graphics – check your PC BIOS settings.

    1. Your Nvidia GPU is in iommu group 1:
      /sys/kernel/iommu_groups/1/devices/0000:01:00.1
      /sys/kernel/iommu_groups/1/devices/0000:00:01.0
      /sys/kernel/iommu_groups/1/devices/0000:01:00.0

      You need to pass through ALL devices except the PCI bridge (00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller), that is you need to attach both 0000:01:00.0 and 0000:01:00.1 to vfio-pci. Your output shows that you passed only the graphics part 01:00.0 through, not the audio part of your graphics card (01:00.1):
      root@ws-mint:~# lspci -kn|grep -A 2 01:00

      01:00.0 0300: 10de:1184 (rev a1)
      Subsystem: 1462:2825
      Kernel driver in use: vfio-pci

      01:00.1 0403: 10de:0e0a (rev a1)
      Subsystem: 1462:2825
      Kernel driver in use: snd_hda_intel

      The Kernel driver in use should be vfio-pci. See step 4 in my tutorial:

      Open or create /etc/modprobe.d/local.conf:
      xed admin:///etc/modprobe.d/local.conf
      and insert the following:
      options vfio-pci ids=10de:1184,10de:0e0a
      These are the PCI IDs for your graphics card’s VGA and audio parts.

      Reboot and check with with: lspci -kn | grep -A 2 01:00

      In your start script, replace:
      -usb \
      -device usb-host,hostbus=1,hostaddr=1 \

      with:
      -usb -usb-host,vendorid=062a,productid=4101 \
      and remove the network configuration (-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01)! Qemu will provide a routed network interface and you should have Internet access from the VM.

      In the script, replace the graphics card passthrough part with the following:
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
      -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \

      Please share your /etc/default/grub configuration file!

      Your /etc/default/grub file should contain a line similar to this:
      GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau intel_iommu=on iommu=pt"
      iommu=pt is optional.

      If you change /etc/default/grub you need to run sudo update-grub

      Your complete qemu command should be:
      qemu-system-x86_64 \
      -name $vmname,process=$vmname \
      -machine type=q35,accel=kvm \
      -cpu host,kvm=off \
      -smp 1,sockets=1,cores=1,threads=1 \
      -m 8G \
      -balloon none \
      -rtc clock=host,base=localtime \
      -vga none \
      -nographic \
      -serial none \
      -parallel none \
      -soundhw hda \
      -usb -usb-host,vendorid=062a,productid=4101 \
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
      -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -drive id=disk0,if=virtio,cache=none,format=raw,file=/data/VM/win10direct.img \
      -drive file=/data/ISO/Windows-10-Pro-x64.iso,index=1,media=cdrom \
      -drive file=/data/VM/virtio-win-0.1.160.iso,index=2,media=cdrom

      Important: Check my qemu command – I do make mistakes sometimes!

      Note: The -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 command creates a virtual PCIe bridge to which the graphics cards video and audio part are attached. Sometimes this solves issues with black screen or AMD graphics cards / drivers. Hope this helps.

      Note 2: Giving only 1 thread to the VM is a little low. Try -smp 2,sockets=1,cores=1,threads=2

  16. First, the USB parameters seem to be a problem

    anders@ws-mint:/data/VM$ sudo ./runwin10direct.sh
    [sudo] password for anders:
    qemu-system-x86_64: -usb-host,vendorid=062a,productid=4101: invalid option

    Extract from the run script
    -parallel none \
    -soundhw hda \
    -usb -usb-host,vendorid=062a,productid=4101 \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \

    So after some googling, I changed the runscript to

    -usb \
    -device usb-host,vendorid=0x062a,productid=0x4101 \

    the script start to run, and it seemed to grab both the keyboard and mouse but the second screen remained blank. Nothing displayed at all.

    The grub file looks like this now

    GRUB_DEFAULT=0
    GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT=”splash intel_iommu=on modprobe.blacklist=nouveau iommu=pt”
    GRUB_CMDLINE_LINUX=””

    anders@ws-mint:~$ lspci -kn | grep -A 2 01.00
    01:00.0 0300: 10de:1184 (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    01:00.1 0403: 10de:0e0a (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    So, we are getting closer, but there is obviously something more to tune. Thank you for your patience, you are a rock!

  17. Heiko, here is the latest status

    First, the USB parameters seem to be a problem:

    anders@ws-mint:/data/VM$ sudo ./runwin10direct.sh
    [sudo] password for anders:
    qemu-system-x86_64: -usb-host,vendorid=062a,productid=4101: invalid option

    Extract from the run script
    -parallel none \
    -soundhw hda \
    -usb -usb-host,vendorid=062a,productid=4101 \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \

    So after some googling, I changed the runscript to

    -usb \
    -device usb-host,vendorid=0x062a,productid=0x4101 \

    The script started to run, and it seemed to grab both the keyboard and mouse, but the second screen remained blank. Nothing was displayed at all.
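
    For reference, the 0x-prefixed IDs qemu expects can be derived from lsusb output; a minimal sketch (the sample line mimics lsusb’s format for the 062a:4101 device from this thread; the description text is illustrative):

    ```shell
    # Sketch: turn an lsusb line into the usb-host options qemu expects.
    # The line format matches lsusb output; the device description is made up.
    line='Bus 003 Device 002: ID 062a:4101 Wireless Keyboard/Mouse'
    ids=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*\):\([0-9a-f]*\).*/vendorid=0x\1,productid=0x\2/p')
    printf -- '-device usb-host,%s\n' "$ids"
    ```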

    The grub file looks like this now

    GRUB_DEFAULT=0
    GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="splash intel_iommu=on modprobe.blacklist=nouveau iommu=pt"
    GRUB_CMDLINE_LINUX=""

    anders@ws-mint:~$ lspci -kn | grep -A 2 01.00
    01:00.0 0300: 10de:1184 (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    01:00.1 0403: 10de:0e0a (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    So, we are getting closer, but there is obviously something more to tune. Thank you for your patience, you are a rock!

    1. OK, so the GPU (video and audio) is now bound to vfio-pci. The USB mouse/keyboard issue also seems fixed (thanks for the correction). The problem could be any of the following:
      1. Wrong path to the OVMF file or variables – check my tutorial and make sure you followed it to the letter.
      2. Check that the path to the Windows ISO is correct, as well as the path to the driver ISO.
      3. Make sure your second monitor is connected properly to the Nvidia card! If the monitor has different inputs, make sure to select the right input.
      4. Remove -vga none \ from the script. This can be helpful for debugging.

  18. Hi,
    OVMF files and paths checked. They exist and are accessible.
    Windows ISO checked (used it for other VMs with KVM without any issues).
    Same with the virtio ISO.

    The monitor is connected to the HDMI port. There are also DisplayPort outputs on the adapter, but they are not used. I did remove the “blacklist” option and rebooted. By doing so I had a dual-screen setup that worked fine, so the cable, monitor and port should be fine.

    The installation seems to run along (see screenshot in the Google Doc file), but without any output at all it may be hard to pin down the exact problem. Is there a logfile option that could be used? I can’t find anything in the official QEMU documentation.

  19. I added a second keyboard and changed the USB vendor and product IDs in order to keep the main keyboard attached to the host.
    I then ran strace -p pid on the running VM and got the following output (don’t know if this is of any use, but maybe there is something there that can help). I then used the main keyboard to abort with ctrl/ (chasing shadows 🙂 )

    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=997967}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=998196}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=998109}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=997856}, NULL, 8) = ? ERESTARTNOHAND (To be restarted if no handler)
    — SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} —
    write(7, “\1\0\0\0\0\0\0\0”, 8) = 8
    rt_sigreturn({mask=[BUS USR1 ALRM IO]}) = -1 EINTR (Interrupted system call)

    1. Sorry, the trace doesn’t help (me).

      In order to verify the qemu script I posted above, I installed a new Windows VM using the following script:
      #!/bin/bash

      configfile=/etc/vfio-pci.cfg
      vmname="win10-2vm"

      vfiobind() {
      dev="$1"
      vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
      device=$(cat /sys/bus/pci/devices/$dev/device)
      if [ -e /sys/bus/pci/devices/$dev/driver ]; then
      echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
      fi
      echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
      }

      if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
      zenity --info --window-icon=info --timeout=15 --text="A VM is already running." &
      exit 1

      else

      cat $configfile | while read line;do
      echo $line | grep ^# >/dev/null 2>&1 && continue
      vfiobind $line
      done

      export QEMU_AUDIO_DRV=alsa
      export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
      export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
      export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
      export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
      export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
      export QEMU_AUDIO_TIMER_PERIOD=50

      cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
      chown user:kvm /tmp/my_vars.fd

      qemu-system-x86_64 \
      -enable-kvm \
      -runas user \
      -monitor stdio \
      -serial none \
      -parallel none \
      -nodefaults \
      -nodefconfig \
      -name $vmname,process=$vmname \
      -machine q35,accel=kvm,kernel_irqchip=on \
      -cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
      -smp 6,sockets=1,cores=3,threads=2 \
      -m 8G \
      -balloon none \
      -rtc base=localtime,clock=host \
      -soundhw hda \
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -object iothread,id=io1 \
      -device virtio-scsi-pci,id=scsi0,ioeventfd=on,iothread=io1,num_queues=4,bus=pcie.0 \
      -drive id=disk0,file=/dev/lm13/win10-2,if=none,format=raw,aio=native,cache=none,cache.direct=on,discard=unmap,detect-zeroes=unmap \
      -device scsi-hd,drive=disk0 \
      -drive id=isocd,file=/media/heiko/tmp_stripe/OS-backup/ISOs/win10.iso,format=raw,if=none -device scsi-cd,drive=isocd \
      -drive id=virtiocd,file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio.iso,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
      -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
      -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
      -device vfio-pci,host=00:1a.0,bus=root.1,addr=00.2 \
      -device vfio-pci,host=08:00.0,bus=root.1,addr=00.3 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -netdev tap,vhost=on,ifname=vmtap1,id=net0 \
      -device virtio-net-pci,mac=00:16:3e:00:01:01,netdev=net0

      #EOF
      exit 0
      fi

      I got an error when using the x-vga option for the graphics card, so I removed it.

      When starting the script, a prompt to press Enter to continue CD/DVD installation appeared briefly, followed by a PXE… prompt. The first time around I missed the chance to press Enter, so I had to kill the VM. The second time I just hit Enter repeatedly and got the installation prompt.

  20. I removed the -vga none option; no difference. Removed the -nographic option too, and then a new window appeared on the primary screen with PXE boot and a BIOS shell. I could not work out what to do from there. I will continue to read up on other howtos during next week to get a better understanding of how QEMU works. If you can think of anything else, please let me know. If not, I guess that I have to capitulate :).
    Best regards /Anders

    1. I had the same!

      I don’t know why, but the CD/DVD prompt disappears so fast that you may not see it on screen; instead you get the PXE prompt I also got. I just kept hitting Enter a few times after starting the VM. If you have a second keyboard, pass it through via USB and hit Enter on that keyboard. I know it’s a pain, but it should work. Once Windows is installed, you remove the ISO images and all will be fine.

      Another possibility: Make sure you configured your ISO images properly.

  21. Just saw your new screenshots – you got to the boot manager. Select the first UEFI QEMU DVD-ROM, if that doesn’t work, restart the VM and select the other. Once you selected the correct Windows ISO, the Windows installer should start and prompt you to load a driver. Use the virtio-stor driver (unless you configured your disk as a SCSI drive) and select the amd64 for a 64 bit OS.

    1. If I remember correctly, you need to specify 0x… before the vendor and product ID. There is a comment above somewhere that points that out.
      I need to recheck my tutorial on that, haven’t had the time yet.

    2. Thank you very much!

      Unfortunately I get the following errors:
      “(qemu) qemu-system-x86_64: AMD CPU doesn’t support hyperthreading. Please configure -smp options properly”
      Thankfully that one doesn’t prevent me from entering the VM, however Windows greets me with this error:
      “SYSTEM THREAD EXCEPTION NOT HANDLED”

      I have heard it is possible to get Windows to boot if I change the “host” in “-cpu host” to my cpu model, but I cannot figure out how to get qemu to produce a list of cpu models. Anyways, thank you for your guide, it helped me out a lot, although I might have to switch to Arch for more recent packages.

  22. You need to change your qemu script to match your CPU topology, such as:
    -machine q35,accel=kvm \
    -cpu host,kvm=off \
    -smp 4,sockets=1,cores=4,threads=1 \

    Above is for an Nvidia GPU passthrough.
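
    As a sanity check, the first number in -smp must equal sockets × cores × threads; a quick sketch with example values (a 4-core CPU without SMT is assumed here; real values come from lscpu):

    ```shell
    # Sketch: build a consistent -smp line; vcpus must equal sockets*cores*threads.
    # On a real host, read the topology with: lscpu | grep -E 'Socket|Core|Thread'
    sockets=1 cores=4 threads=1          # example topology, no SMT
    vcpus=$((sockets * cores * threads))
    smp="-smp $vcpus,sockets=$sockets,cores=$cores,threads=$threads"
    echo "$smp"
    ```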

    If that doesn’t work, and you don’t mind reinstalling Windows, you can try changing the following line to:
    -machine pc,accel=kvm \

    If you prefer Arch Linux, there are guides out there and their documentation is great. If you prefer Ubuntu or similar, I’m sure there is a way to make it work.

    Regarding host, try in a terminal the following:
    qemu-system-x86_64 -cpu help
    to get the options.

    Can you share your hardware specs and the qemu command?

    1. Actually I haven’t been able to install Windows 10 due to the “system thread exception not handled” error.

      Anyways, my hardware:
      My cpu: Ryzen 5 1600X
      host gpu: GeForce GT 710
      guest gpu: Radeon RX 570
      Memory: 15.7 GiB

      And do you mean the window10vm.sh file? Right now, it looks like this:
      #!/bin/bash

      vmname="windows10vm"

      if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
      echo "A passthrough VM is already running." &
      exit 1

      else

      # use pulseaudio
      export QEMU_AUDIO_DRV=pa
      export QEMU_PA_SAMPLES=8192
      export QEMU_AUDIO_TIMER_PERIOD=99
      export QEMU_PA_SERVER=/run/user/1000/pulse/native

      cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

      qemu-system-x86_64 \
      -name $vmname,process=$vmname \
      -machine pc,accel=kvm \
      -cpu host \
      -smp 4,sockets=1,cores=4,threads=1 \
      -m 8G \
      -balloon none \
      -rtc clock=host,base=localtime \
      -vga none \
      -nographic \
      -serial none \
      -parallel none \
      -soundhw hda \
      -usb \
      -device usb-host,vendorid=0x093a,productid=0x2521 \
      -device usb-host,vendorid=0x04f2,productid=0x1112 \
      -device vfio-pci,host=09:00.0,multifunction=on \
      -device vfio-pci,host=09:00.1 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/burger/win.img \
      -drive file=/home/burger/Downloads/Win10_1809Oct_English_x64.iso,index=1,media=cdrom \
      -drive file=/home/burger/Downloads/virtio-win-0.1.160.iso,index=2,media=cdrom \
      #-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      #-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

      exit 0
      fi

    1. I was able to install Windows 10 by doing the following:
      sudo nano /etc/modprobe.d/kvm.conf
      add “options kvm ignore_msrs=1”
      reboot
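
      The steps above can be sketched as a small idempotent script; it writes to /tmp here so you can inspect the result before copying it to /etc/modprobe.d/kvm.conf as root and rebooting:

      ```shell
      # Sketch: persist the ignore_msrs workaround. Review the file, then copy
      # it to /etc/modprobe.d/kvm.conf as root and reboot.
      conf=/tmp/kvm.conf.example
      grep -qs '^options kvm ignore_msrs=1' "$conf" || \
          echo 'options kvm ignore_msrs=1' >> "$conf"
      ```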

      Later, changing “-machine q35” to “-machine pc” stopped it from randomly crashing.

    2. Glad you resolved the issue! Usually changing q35 to pc will prevent Windows from starting, but in your case I guess you hadn’t installed Windows yet. Please confirm.

  23. Hello Heiko,

    thanks for the great, working how-to for Linux Mint to get Windows 10 working with the hardware in the computer without any problems.

    I never thought that this could actually work, but it does. I made some mistakes with my first try, because I hadn’t kept in mind that Windows 7 uses an older kernel than Windows 10.

    So my problem was that Windows 7 got stuck on the bootup screen saying “Windows is starting” with the Windows 7 logo, but I recognized the desktop sound when Windows arrived at the login screen, so I don’t know if that’s just a driver issue within the graphics driver or another one.

    Leaving my specs for investigation, if you have the time.

    CPU Intel Core I5-8600k
    Nvidia Geforce GTX 1060 3gb Version
    8 Gig’s of Memory
    Toshiba 250 GB SSD
    2*Toshiba 1 TB HDD’s
    MSI Z370 Gaming Plus Motherboard.

    Greetings Mike.

    1. Sorry could not edit the last comment, so here are my results of Winsat formal command, you have asked for.

      > CPU – LZW compression 731.10 MB/s
      > CPU – AES256 encryption 3361.10 MB/s
      > CPU – Vista compression 1864.86 MB/s
      > CPU – SHA1 hash 2324.24 MB/s
      > Uniproc CPU LZW compression 186.79 MB/s
      > Uniproc CPU AES256 encryption 860.54 MB/s
      > Uniproc CPU Vista compression 485.79 MB/s
      > Uniproc CPU SHA1 hash 642.36 MB/s
      > Memory performance 27263.03 MB/s
      > Direct3D batch performance 42.00 F/s
      > Direct3D alpha blend performance 42.00 F/s
      > Direct3D ALU performance 42.00 F/s
      > Direct3D texture load performance 42.00 F/s
      > Direct3D batch performance 42.00 F/s
      > Direct3D alpha blend performance 42.00 F/s
      > Direct3D ALU performance 42.00 F/s
      > Direct3D texture load performance 42.00 F/s
      > Direct3D geometry performance 42.00 F/s
      > Direct3D geometry performance 42.00 F/s
      > Direct3D constant buffer performance 42.00 F/s
      > Video memory throughput 89172.60 MB/s
      > Dshow video encode time 0.00000 s
      > Dshow video decode time 0.00000 s
      > Media Foundation decode time 0.00000 s
      > Disk Sequential 64.0 Read 82.56 MB/s 6.2
      > Disk Random 16.0 Read 1.43 MB/s 3.9
      > Total run time 00:01:54.69

    2. The Windows 7 installer does not support UEFI. There is a way to upgrade the Win 7 ISO to a version that supports UEFI, but that is rather a pain in the neck.
      When you try to UEFI-boot the Win 7 ISO, it gets stuck.
      It’s rather easy to change the VM start script to support legacy (SeaBIOS) boot, but there is a rat’s tail of issues that can arise from using legacy boot. One issue is VGA arbitration, which afflicts Intel processors with an integrated GPU (like your CPU), but only when booting the VM in legacy mode.

      I trust from your post that installing the Windows 10 VM was smooth.

      Also thanks for the Windows 10 winsat benchmark. If you want, please run the Userbenchmark https://www.userbenchmark.com/ and post the results.

      Vielen Dank! Frohes Neues Jahr! (Many thanks! Happy New Year!)

  24. Thanks for your amazing tutorial !
    I finally managed to get Linux and Win 10 working without dual-boot.

    Quick question: at Linux Mint boot, my monitor connected to the GPU used for passthrough isn’t detected in Display Settings, and I have only one display, the one connected to my IGPU…
    But inxi give me this :
    $ inxi -Gxxx
    Graphics: Device-1: Intel HD Graphics 530 vendor: ASUSTeK driver: i915 v: kernel bus ID: 00:02.0 chip ID: 8086:1912
    Device-2: NVIDIA GP106 [GeForce GTX 1060 6GB] vendor: Gigabyte driver: vfio-pci v: 0.2 bus ID: 01:00.0
    chip ID: 10de:1c03
    Display: x11 server: X.Org 1.19.6 driver: modesetting unloaded: fbdev,vesa resolution: 1920×1080~60Hz
    OpenGL: renderer: Mesa DRI Intel HD Graphics 530 (Skylake GT2) v: 4.5 Mesa 18.0.5 compat-v: 3.0 direct render: Yes

    Is this normal?
    If yes, is there a way to still use my GPU while not using the VM?

    Thanks

    1. In my tutorial the passthrough GPU (GTX 1060 in your case) is bound to the vfio-pci driver when the host boots. This prevents it from being initialized and used by the Linux host. As a result, Display Settings will not detect your passthrough GTX 1060 card, nor the monitor connected to it. This is the way it should be.

      There are ways to make your passthrough GPU accessible to the host – it’s called single GPU passthrough. I have placed a link in my tutorial to Yuri’s instructions and his script that can handle that. It is somewhat experimental.

      BUT: If you need the GTX 1060 under Linux for games etc., there is another way – create a Linux Virtual Gaming Machine, see https://heiko-sieger.info/linux-virtual-gaming-machine/

      In a nutshell: Use your Linux host for day to day productivity tasks. Run either a Windows VM or Linux VM for graphics intensive applications like games.

      The beauty about running a VM is that you can easily backup and restore the VM. So when you experiment with new applications or settings, make a backup first and then do whatever you want – in the worst case you can restore the original VM.
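
      For a raw image like the ones used in this tutorial, backing up a shut-down VM is just a sparse-aware copy; a minimal sketch (a throwaway sparse file stands in for the real image path):

      ```shell
      # Sketch: back up a raw VM image while the guest is SHUT DOWN.
      # A 1 GiB sparse file stands in for e.g. /data/VM/win10direct.img.
      src=$(mktemp) && truncate -s 1G "$src"
      dst="${src}.bak"
      cp --sparse=always "$src" "$dst"   # sparse copy: backup stays small on disk
      ```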

      Does that answer your question?

      (On the matter of displays and GPUs, see also https://heiko-sieger.info/developments-in-virtualization/ and https://looking-glass.hostfission.com/.)

  25. Thanks for your reply.
    Found a solution: I bought a used 3rd monitor which I hooked to my GPU, so it’s dedicated to the Win10 VM.
    Since my IGPU supports multi-monitor, I can use the other 2 on Linux.

    Now I’m trying to pass one of my SATA disks to the Win10 VM, but didn’t succeed.
    It’s not used by Linux; it’s a hard drive in NTFS format.
    I tried these commands :
    -drive id=disk1,format=raw,file=/dev/disk/by-uuid/cb4e2f1d-537f-4aa5-a7bf-fb67c6fe99a3
    -drive file=/dev/disk/by-uuid/01D47BE92A88C5B0,media=disk

    Win10 did boot and start correctly, but the drive didn’t show up.

    I tried looking in the QEMU documentation but didn’t find a solution.

    Have any idea ?

    Thanks

    1. See my post here: https://heiko-sieger.info/tuning-vm-disk-performance/

      In your specific case, this would be:
      -object iothread,id=io1 \
      -device virtio-blk-pci,drive=disk1,iothread=io1 \
      -drive if=none,id=disk1,cache=none,format=raw,aio=native,file=/dev/disk/by-uuid/your-uuid \

      To check that you specified the drive correctly, try to mount it in Linux before you run the VM. Then unmount it before you start the VM.

      It’s best to have the disk not formatted and let Windows handle that.

  26. I did try that; Windows 10 sees the partition as an unformatted drive and I have to create one.
    After formatting and assigning a letter to the drive in the Win10 VM, I can use it and copy to it.
    Back in Linux, when mounting the same drive, there is nothing in it.

    Maybe a noob question, but is it a sort of image of my drive that I have to mount on Linux to see the content?

    1. You should be able to mount it with kpartx (you may need to install it).

      See if it finds the device map:
      kpartx -l /dev/your-drive

      The result would be something like:
      your-drive1 : 0 262144 /dev/your-drive 34
      your-drive2 : 0 3774605312 /dev/your-drive 264192

      You can then create a device map and r/w mount it like in a shell script (or run the commands individually in a terminal):
      #!/bin/sh

      kpartx -av /dev/your-drive
      sleep 1
      mount -t ntfs -o rw,nls=utf8,umask=000,dmask=027,fmask=137,uid=1000,gid=1000,windows_names /dev/your-drive2 /mnt/your-mount-point

      Notice the “2” at the end of the /dev/your-drive2, because you likely need to mount that partition.

      To unmount, use a script like:
      #!/bin/sh

      umount /mnt/your-mount-point
      sleep 2
      kpartx -dv /dev/your-drive

      Don’t forget to remove the device using kpartx -d …!

      For more on kpartx, see https://www.systutorials.com/docs/linux/man/8-kpartx/

      Note: DON’T READ-WRITE MOUNT the image while Windows is running!!! Use ro (read-only)!

  27. Hi,
    I have a problem that I find a little odd.
    I have been able to follow your guide and boot the VM to the point where Tiano shows up and then the shell pops up. When I exited the shell and checked the boot manager like you suggested in Part 11, I found only the “EFI internal shell” option available.
    Also, when the shell first shows up, there is a line saying “map: No mapping found”.
    Can you think of any reason I would have this issue?
    I will add that I had passthrough set up and working really well in LM 18.3, but then I decided that LM 19.1 seemed like a good idea, and I’m starting to think I was wrong.

    Below is my current script that I tried to modify to the best of my ability to even get as far as I did.

    ——————————————————————————–
    #!/bin/bash

    vmname="VMforWin10"

    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "VMforWin10 is already running." &
    exit 1

    else

    echo “Windows 10 virtual machine with gpu-passthrough.”

    export QEMU_AUDIO_DRV=alsa
    export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
    export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
    export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
    export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
    export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
    export QEMU_AUDIO_TIMER_PERIOD=50

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    chown divo:kvm /tmp/my_vars.fd
    qemu-system-x86_64 \
    -runas divo \
    -enable-kvm \
    -monitor stdio \
    -serial none \
    -parallel none \
    -nodefaults \
    -nodefconfig \
    -name $vmname,process=$vmname \
    -machine q35,accel=kvm,kernel_irqchip=on \
    -cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
    -smp 6,sockets=1,cores=3,threads=2 \
    -m 16G \
    -balloon none \
    -rtc clock=host,base=localtime \
    -vga none \
    -nographic \
    -serial none \
    -parallel none \
    -soundhw hda \
    -usb \
    -device usb-host,vendorid=0x1e7d,productid=0x3214 \
    -device usb-host,vendorid=0x046d,productid=0xc52b \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
    -boot order=dc
    -drive id=disk0,if=virtio,cache=none,format=raw,file=/home/divo/VMData/VMWin10/VMWin10.img \
    -drive id=disk1,if=virtio,cache=none,format=raw,file=/home/divo/VMData/VMWin10/UsersDrive.img \
    -drive id=disk2,if=virtio,cache=none,format=raw,file=/media/divo/TOSHIBA\ EXT/VMDrives/GameDrive.raw \
    -drive file=/home/divo/VMData/VMWin10/Windows.iso,index=1,media=cdrom \
    -drive file=/home/divo/VMData/VMWin10/virtio.iso,index=2,media=cdrom

    exit 0
    fi

    1. I apologize, please ignore me, I missed the “\” at the -boot order=dc line.
      Now I just need to make network work for me.

  28. Me again,
    So I was trying and googling for a few hours, and I can’t figure out why I can’t get networking to work in my VM.
    If I run my script above, the VM starts and everything seems to work fine, just without any network available in the guest OS.
    I set up a bridge like you explained, and all seems fine on the host OS, but when I add the two lines below
    -netdev tap,id=net0,ifname=vmtap1,vhost=on \
    -device virtio-net-pci,mac=00:16:3e:00:01:01,netdev=net0 \
    and run the script, I get the error below and the VM crashes.

    QEMU 2.11.1 monitor – type ‘help’ for more information
    (qemu) qemu-system-x86_64: VFIO_MAP_DMA: -12
    qemu-system-x86_64: vfio_dma_map(0x55abecf95c60, 0x91000000, 0x80000, 0x7f0802000000) = -12 (Cannot allocate memory)
    qemu: hardware error: vfio: DMA mapping failed, unable to continue
    CPU #0:
    RAX=0000000000000001 RBX=0000000080020004 RCX=0000000000020004 RDX=0000000080000000
    RSI=0000000000000002 RDI=0000000000000001 RBP=0000000000020004 RSP=000000007feba2e0
    R8 =0000000000000000 R9 =000000007feba3ff R10=0000000000000001 R11=0000000000000000
    … and so on.

    I’ve found some posts regarding memory limits, but I have no idea how it all works.
    If you could at least point me in the right direction, I would be grateful.
    Thanks.

    1. If it works as root, it’s a permissions problem. Check that your hugepages permissions are OK. Add your user name to the kvm group.

      In the /etc/fstab file check to see that you have an entry like:
      hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0

      where gid=129 is the group id of your kvm group.

      See also the other instructions regarding hugepages.
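
      The gid in that fstab line must match whatever id your system assigned to the kvm group; a sketch of the lookup (the /etc/group entry below is a hard-coded sample with gid 129; on a real host use getent group kvm):

      ```shell
      # Sketch: extract the group id for the gid= mount option.
      # Sample line in /etc/group format; really: entry=$(getent group kvm)
      entry='kvm:x:129:heiko'
      gid=$(printf '%s' "$entry" | cut -d: -f3)
      echo "hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=$gid 0 0"
      ```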

  29. This might be slightly out of context, but I have tried every place I know of, so I thought maybe someone in here could help.
    I have a Windows 10 VM running fine (with libvirt and virt-viewer, though). My problem happens when I add an extra screen: the mouse starts to act weird. Whenever I click, it jumps to the top left corner of the leftmost screen. Most of the time it’s also really laggy and makes small jumps just moving it around.
    Has anyone succeeded in running a Windows 10 guest with two monitors?

  30. Can I adapt this guide to Manjaro XFCE? Does anyone know of a guide specific to Manjaro? My system is a Ryzen 5 2600X on a B350 board that has IOMMU enabled in the BIOS (ASRock AB350 Pro4), 256GB Samsung 960 Pro, 16GB DDR4 and a Radeon R9 290. QEMU version is 3.1.

    1. @Rod: You should be able to use this guide for Manjaro, although you will have to adapt it a little to reflect the Arch Linux syntax. Any guide or instructions that are based on Arch Linux should help you accomplish the job. I have included links under references that refer to the excellent Arch Linux documentation, as well as to the forum.

      I myself run a Manjaro VM on Linux Mint. Time permitting, I hope to install Manjaro on an i3 and run a Windows VM with passthrough. Manjaro is an up-to-date, rolling distribution (some call it “bleeding edge”), whereas Linux Mint is very conservative, even more so than Ubuntu. If you are looking for the latest and greatest, try Manjaro. For me, Linux Mint has so far been a good companion.

  31. @Heiko Sieger,

    Thanks for the reply. I thought I had possibly offended you with my comment somehow, so I went looking for articles and found many on Arch and passthrough. I was on Linux Mint MATE 19.1, but I didn’t like being on the much older builds of QEMU, and even the PPA only brings it up to 2.12. I tried to build QEMU 3.1 myself, but something went wrong. Although I have been experimenting with Linux since its early days way back in the mid 90’s, I have always been a trained Windows IT guy and enthusiast and never was a guru with Linux. I can do anything on Windows naturally, but with Linux I am sadly still a copy/paste, follow-the-guide user, and I hate it. When something doesn’t work I almost instantly lose interest and move on; that’s my real issue right there. Thankfully I have always kept an up-to-date dual-boot setup for myself, but have learned to leave Linux alone as much as possible (which is why I like Manjaro, as it keeps things up to date by default, lol). I have run just about every Linux distro you can think of, but it has always been out of curiosity, trying to find that one BIG WOW moment I suppose. I don’t know why, but LM hasn’t always been the best distro to me on my older computer (X58 + Xeon X5650 + Radeon 5870). There would always be a crash or boot issue, or LM would flat out refuse to install. But in 2018 I finally, after nearly a decade, retired my X58 and got a Ryzen 5 2600X (I had been gifted an AM4 motherboard six months prior, so this helped push me). Although the X58 was a massive jump in computing power all those years ago, it was finally showing its age at many levels. I think I adopted that X58 in 2009, so that seems like a record to me, haha.

    Anyway, what brings me here? Well, a few weeks ago Linus Tech Tips released a video (on April 18 2019, titled “Apple won’t like this… – Run MacOS on ANY PC”) using GPU passthrough and Manjaro as the host with macOS High Sierra as the guest, with the end result being near bare-metal performance, something I had never heard of before, and it got me thinking about doing the same thing with Windows 10. So, here I am trying to figure out if this is worth it or not, and so far, from what I have read, it surely seems worth trying. Your intro above suggests you have had LOTS of experience with this sort of thing, and who better to follow than someone like you with lots of experience doing this very thing? I have now read everything I could about it and still have not had the opportunity to give this a go. I have even re-arranged my partitions, giving Windows less and Linux more, lol. But I have one serious roadblock in front of me: the reason I got this board in the first place is that the second PCIe slot was shattered (it’s an x16 physical slot but with only x4 pins present), so it was gifted to me (by my uncle) so I could use it for a home server, but I have instead been using it as my main system. I’m an Intel-trained electronics tech (1990’s), so fixing it was easy; well, snipping pins and cleaning the solder pads wasn’t a big deal at all. So now I have to wait for a new PCIe slot to come in from China and fix it for real, lol. I just don’t like the idea of doing this work and not knowing if I will succeed or if this board will even work with IOMMU. Being a B350, the reports of B350 not working have me greatly worried, even though I clearly have AMD-Vi and IOMMU settings in the BIOS. I have both enabled already, because I was using VMware Player on Windows a few months back, just for testing Linux distros of course. But this QEMU/KVM thing seems far more interesting to me.
    It has been a 20-year-long dream of mine to run two operating systems side by side on one powerful machine with both having bare-metal performance. But dual boot is just so incredibly annoying, haha. Anyway, in a few weeks’ time I should have this board repaired and be preparing to finally try this GPU passthrough experiment.

    One thing I want to ask, since you have so much experience with this: do you know if Windows will still have the same Ethernet performance (I have an Intel PCIe x1 Ethernet card waiting on the side, and a really good USB 3.1 card as well, both for PCIe passthrough use), and will RDP from my Windows 10 guest still reach my Windows Server 2016 machine? I assume so. My server is headless and in another room entirely; it doesn’t even have a GPU, so RDP performance is a must. Also, will I have any issues using a GTX 550 Ti for the host Linux distro in a PCIe 3.0 x4 slot? And one last thought: is there a way to “switch” displays on the fly? See, one of my displays is a superior 27″ 2560×1440 120 Hz IPS gaming monitor (with only one DVI input) and the other is an old Dell 24″ with many inputs, PIP and a built-in USB 2.0 hub. I would want to use the 1440p monitor for both Linux and Windows as I focus my work. So I assume that with passthrough there is no dragging of VM windows any more – am I correct in that assumption? One monitor is for Linux and the other for Windows ONLY? This outcome would be awful, as that IPS is far superior for browser reading (Linux use), AND also far superior for Windows gaming. The Dell obviously has inferior display tech. Since I plan on using Windows for Steam and standalone game installs, I have to use the IPS 120 Hz monitor for that, which means giving the Linux host the 12-year-old Dell crap monitor. Manjaro looks beautiful on that 27″ IPS though. Haha

    Anyway, I would buy a new X470 board to try this, but I have to wait because I promised myself that my next system upgrade would be the AMD X570 chipset (with PCIe 4.0) and a 16-core, 32-thread Ryzen 9, if it exists this year. Neither is released yet, so I have to wait. I figure if I play around with this Ryzen system now, I will at least gain knowledge on how to do this with AMD for when I get serious with it later this year or next, as I expect new hardware needs time to mature. This also affords me the opportunity to get more serious with Linux as my main OS. I think a system with 16 cores – maybe 10 for Windows and 6 for Linux (or 8 + 8) – would balance out both the host and guest and be the perfect VM passthrough setup (a dream machine IMO). No more dual booting and no more losing hardware resources to one or the other. Both Linux and Windows should still feel like full performance, I would think.

    Anyway, sorry for the long post but I thank you for the reply. I hope you have some thoughts on my plans above because this full performance VM idea has been a dream of mine for a VERY long time and the closest I have ever come has been having TWO machines side by side and using Synergy KVM software which works surprisingly well. Thanks again.

    Rod

  32. Hello Rod,

    First things first: I don’t have any hands-on experience with AMD / Ryzen. I did read about some issues when Ryzen first appeared, but I believe things have since been worked out. I can’t say anything about your B350 motherboard. The key questions regarding the motherboard are:
    1. Does it support IOMMU – AMD-Vi in AMD talk? In your case it seems to.
    2. How well does it group the PCI devices? This is a key question you will only really be able to answer once you have fixed the board and inserted your second GPU.

    As a first step (using Manjaro), enable IOMMU and check the IOMMU grouping as described in my tutorial. If you see many devices lumped into the same IOMMU group and very poor separation, that might rule out the board.
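    To make that check easier, here is a small bash sketch that lists every IOMMU group together with the devices it contains. It simply walks /sys/kernel/iommu_groups; the optional path argument exists only so the function can be exercised against a test directory.

```shell
#!/bin/bash
# List each IOMMU group with the PCI devices it contains.
# An alternate sysfs path can be passed as $1 (useful for dry runs);
# by default the real /sys/kernel/iommu_groups is used. That directory
# is empty unless IOMMU is enabled in the BIOS and on the kernel
# command line (e.g. amd_iommu=on for Ryzen).
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local group dev
    shopt -s nullglob
    for group in "$base"/*; do
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            # lspci adds a readable device name next to the PCI address
            echo "  ${dev##*/} $(lspci -nns "${dev##*/}" 2>/dev/null)"
        done
    done
}

list_iommu_groups "$@"
```

    If the GPU shares a group with unrelated devices (a SATA or Ethernet controller, say), passing through that group cleanly will be problematic on that board.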

    LAN performance: You should use static IPs for the host and your network devices. Then create a bridge to connect your Windows VM. If properly configured (the bridge, AND the qemu options, AND the virtio network driver installed under Windows), the virtual NIC runs at 10 GBit/s, which should be plenty. Communication with the other network devices goes through your host’s Ethernet link – there is no need for a second link.
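    For reference, a minimal sketch of the relevant qemu options – the bridge name br0, the tap name vmtap0 and the MAC address are example values, not taken from my script:

```shell
# Sketch: attach the VM to an existing host bridge via a tap interface.
# Assumes a bridge "br0" already exists on the host and that vmtap0
# is (or will be) enslaved to it. The virtio-net-pci model is what
# provides the fast paravirtualised link, but it requires the virtio
# network driver inside the Windows guest (from the virtio-win ISO).
qemu-system-x86_64 \
    ... \
    -netdev tap,id=net0,ifname=vmtap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```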

    Screens: Your 27″ display having only ONE input does make it a little harder if you want to use it for both Linux and Windows. However, there are DVI KVM (keyboard, video, mouse) switches that let you switch the display between the host and guest GPU outputs – see also Virtualization Hardware Accessories. Such a KVM switch would also make Synergy superfluous.

    I myself use only one screen, but have multiple inputs I can select via dedicated button on the screen.

    One of the real challenges with KVM GPU passthrough is audio latency and crackling noise. Manjaro (versus Linux Mint) uses a more up-to-date version of qemu / kvm, which should help with this issue, as some development has been done on the audio support.

    The potential downside of Manjaro is that, as a rolling distribution, it is constantly updated, increasing the risk of breaking things. Linux Mint, on the other hand, is very conservative, and its packages sometimes seem ancient. But for me stability has a much higher value than running the latest and greatest. I simply can’t afford to waste time fixing a broken system or packages. That is not to say that Manjaro isn’t stable; it just has a higher breakage potential.

    Hope this answers the questions.

    1. Thanks for the reply again.

      Yeah, I should look into a KVM switch, but I think those are just as expensive as getting a newer multi-port display of equal size and display quality without the gaming features. I’m sure I could find another 27″ display with PQ equal to this one. I will weigh my options though.

      Thanks for the rest of the ideas and tips; I will keep everything in mind and figure out what I need to do, as I may go back to LM anyway. I already tried some of the commands just to see what they spit out, and so far everything looks good with IOMMU. I just need to see what changes when I add another GPU to the system. I don’t even know yet if that PCIe slot will work after I solder one in, because PCIe slots are always powered, and when the port breaks you can get shorts that damage traces or pins on the CPU socket (and the pins I snipped off did look burnt), so I still worry about that. But I have a feeling it’s going to work fine. I have another option here too: Zotac built a GT 710 PCIe x1 card a while back (Linux won’t mind that) that I could buy, as I have 3 unused PCIe x1 slots, but I’d rather just try repairing the board first, as I have plenty of GPUs already lying around. Also, X570 is almost here, so I may not even be using this motherboard 4 to 6 months down the road. This will be the year I upgrade the main gaming GPU, CPU and motherboard, although I don’t game that much. We’ll see soon what I end up doing…

      Thanks again, and thanks for writing this guide and helping users out. This looks like it will be a fun project for me.

      Best Regards
      Rod

    1. @Rod: Thanks Rod. I hope to be able to try out Manjaro Linux and Qemu 4.0 soon. Sorry for having mistakenly deleted your account.

  33. I am using Virtual Machine Manager (VMM) and want to pass a host USB device through to my Windows guest VM. I added new USB hardware in VMM, but the USB device is still not recognised.
    I tried to edit the configuration file, but the format shown above is not the same as mine (see below).
    Let’s say the USB device I want to pass through is:

    Bus 002 Device 002: ID 0782:5581 SanDisk Corp.

    My original VM configuration:

    —————————-
    How should I edit my VM file? Can someone help?

    1. I’m using a bash script that starts qemu with the necessary command-line options. VMM uses an XML file that is translated into a qemu command, which is then passed to the shell. Some users swear by VMM; others, like me, find it more difficult.

      The syntax of the XML created by VMM is not the same as the command line syntax, so you can’t just take my script and copy/paste it into the XML file.

      However, if you go to my reference section, you should find enough links to other tutorials that do use VMM.

      Good luck!
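      That said, for anyone who does use a qemu script rather than VMM, a sketch of passing a USB stick like the SanDisk above through by its vendor:product ID from lsusb would look roughly like this (the xhci controller id is an arbitrary example, and the elided options stand in for the rest of the VM definition):

```shell
# Sketch: pass a USB device to the guest by vendor:product ID.
# 0782:5581 is the SanDisk stick from the lsusb output above.
# Requires permission to access the USB device node on the host
# (run as root or set up a udev rule).
qemu-system-x86_64 \
    ... \
    -device qemu-xhci,id=xhci \
    -device usb-host,bus=xhci.0,vendorid=0x0782,productid=0x5581
```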

  34. Hey Heiko,
    First of all, big thanks for creating such a wonderful tutorial for the masses – this is much appreciated.
    I’m trying to experiment with GPU passthrough.

    I get the result below for lspci -nn | grep 02:00.

    02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f08] (rev a1)
    02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f9] (rev a1)
    02:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ada] (rev a1)
    02:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1adb] (rev a1)

    All four items are in the same group.

    /sys/kernel/iommu_groups/12/devices/0000:02:00.0
    /sys/kernel/iommu_groups/12/devices/0000:02:00.1
    /sys/kernel/iommu_groups/12/devices/0000:02:00.2
    /sys/kernel/iommu_groups/12/devices/0000:02:00.3

    I managed to get the vfio-pci driver to work for 02:00.0 (VGA), 02:00.1 (audio) and also for 02:00.3 (serial bus), but not for the USB one.

    I only want to pass through audio and VGA, but when I tried starting the VM it said “cannot passthrough as iommu group has other devices”, so I attempted to pass through the other devices in the group as well, but failed.

    I tried moving the card to other slots, but I always see the same 4 devices from my card: VGA, audio, USB and serial bus.

    Can you provide some directions?

    1. This is the message I get:
      (qemu) qemu-system-x86_64: -device vfio-pci,host=02:00.0,multifunction=on: vfio error: 0000:02:00.0: group 12 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
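      The message means that every device in IOMMU group 12 – all four functions of the card – must be bound to vfio-pci before qemu starts. A sketch of doing that via the kernel's driver_override mechanism (device addresses taken from the lspci output above; on real hardware this requires root, and the SYSFS variable exists only so the function can be dry-run against a test directory):

```shell
#!/bin/bash
# Sketch: bind all functions of the GPU (IOMMU group 12 above) to vfio-pci.
# SYSFS is overridable purely for dry-run testing; the real path is
# /sys/bus/pci and writing to it requires root.
SYSFS="${SYSFS:-/sys/bus/pci}"

bind_to_vfio() {
    local dev
    for dev in "$@"; do
        # Release the device from whatever driver currently owns it
        if [ -e "$SYSFS/devices/$dev/driver/unbind" ]; then
            echo "$dev" > "$SYSFS/devices/$dev/driver/unbind"
        fi
        # driver_override makes the next probe attach vfio-pci
        echo vfio-pci > "$SYSFS/devices/$dev/driver_override"
        echo "$dev" > "$SYSFS/drivers_probe"
    done
}

# Example invocation (root, real hardware):
#   bind_to_vfio 0000:02:00.0 0000:02:00.1 0000:02:00.2 0000:02:00.3
```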
