Running Windows 10 on Linux using KVM with VGA Passthrough

The Need

You want to use Linux as your main operating system, but still need Windows for certain applications unavailable under Linux. You need top-notch (3D) graphics performance under Windows for computer games, photo or video editing, etc. And you do not want to dual-boot between Linux and Windows. In that case, read on.

Many modern CPUs have built-in features that improve the performance of virtual machines (VMs), up to the point where virtualised systems are indistinguishable from non-virtualised systems. This allows us to create virtual machines on a Linux host platform without compromising performance of the (Windows) guest system.

For some benchmarks of my current system, see Windows 10 Virtual Machine Benchmarks

The Solution

In the tutorial below I describe how to install and run Windows 10 as a KVM virtual machine on a Linux Mint or Ubuntu host. The tutorial uses a technology called VGA passthrough (also referred to as “GPU passthrough” or “vfio” for the vfio driver used) which provides near-native graphics performance in the VM. I’ve been doing VGA passthrough since summer 2012, first running Windows 7 on a Xen hypervisor, then switching to KVM and Windows 10 in December 2015. The performance – both graphics and computing – under Xen and KVM has been nothing less than stellar!

The tutorial below will only work with suitable hardware! If your computer does not fulfill the basic hardware requirements outlined below, you won’t be able to make it work.

The tutorial is not written for the beginner! I assume that you do have some Linux background, at least enough to be able to restore your system when things go wrong.

I am also providing links to other, similar tutorials that might help. Last but not least, you will find links to different forums and communities where you can find further information and help.

Note: The tutorial was originally posted on the Linux Mint forum.

Disclaimer

All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.

You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.

For a glossary of terms used in the tutorial, see Glossary of Virtualization Terms.

Tutorial

Note for Ubuntu users: My tutorial uses the “xed” command found in Linux Mint Mate to edit documents. You will have to replace it with “gedit” or whatever editor you use in Ubuntu/Xubuntu/Lubuntu…

Important Note: This tutorial has been edited to reflect the latest Linux Mint 19 and Ubuntu 18.04 syntax. If you use an older release, make sure to use the appropriate commands.

Part 1 – Hardware Requirements

For this tutorial to succeed, your computer hardware must fulfill all of the following requirements:

IOMMU support

In Intel jargon it’s called VT-d. AMD calls it variously AMD Virtualisation, AMD-Vi, or Secure Virtual Machine (SVM); the generic term IOMMU is also used. If you plan to purchase a new PC/CPU, check the following websites for more information:

Unfortunately IOMMU support, specifically ACS (Access Control Services) support, varies greatly between different Intel CPUs and CPU generations. Generally speaking, Intel provides better ACS or device isolation capabilities for its Xeon and LGA2011 high-end desktop CPUs than for other VT-d enabled CPUs.

The first link above provides a non-comprehensive list of CPU/motherboard/GPU configurations where users were successful with GPU passthrough. When building a new PC, make sure you purchase components that support GPU passthrough.

Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable it in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:

  1. Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).
  2. Search for IOMMU, VT-d, SVM, or “virtualisation technology for directed IO” or whatever it may be called on your system. Turn on VT-d / IOMMU.
  3. Save and Exit BIOS and boot into Linux.
  4. Edit the /etc/default/grub file (you need root permission to do so). Open a terminal window (Ctrl+Alt+T) and enter (copy/paste):
    xed admin:///etc/default/grub
    (use gksudo gedit /etc/default/grub for older Linux Mint/Ubuntu releases)
    Here is my /etc/default/grub file before the edit:
    GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=10
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT_STYLE=countdown
    GRUB_TIMEOUT=0
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    GRUB_CMDLINE_LINUX=""

    Look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT="…". You need to add one of the following options to this line, depending on your hardware:
    Intel CPU:
    intel_iommu=on
    AMD CPU:
    amd_iommu=on
    Save the file and exit. Then type:
    sudo update-grub
  5. Now check that IOMMU is actually supported. Reboot the PC. Open a terminal window.
    On AMD machines use:
    dmesg | grep AMD-Vi
    The output should be similar to this:

    AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
    AMD-Vi: Lazy IO/TLB flushing enabled
    AMD-Vi: Initialized for Passthrough Mode

    Or use:
    cat /proc/cpuinfo | grep svm

On Intel machines use:
dmesg | grep "Virtualization Technology for Directed I/O"

The output should be this:
[ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O

If you do not get this output, then VT-d or AMD-Vi is not working – you need to fix that before you continue! Most likely it means that your hardware (CPU) doesn’t support IOMMU, in which case there is no point continuing this tutorial 😥 . Check again to make sure your CPU supports IOMMU. If yes, the cause may be a faulty motherboard BIOS. See if you can find a newer version and update your motherboard BIOS (be careful, flashing the BIOS can potentially brick your motherboard).
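Since the checks differ between Intel and AMD, here is a quick vendor-neutral sanity check you can run after the reboot (both commands are standard and need no extra packages):

grep -E 'intel_iommu=on|amd_iommu=on' /proc/cmdline || echo "IOMMU option missing from kernel command line"
dmesg | grep -E 'DMAR|AMD-Vi' | head

The first command verifies that the GRUB edit actually made it onto the kernel command line; the second shows the IOMMU initialization messages (DMAR for Intel, AMD-Vi for AMD).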

Two graphics processors

In addition to a CPU and motherboard that support IOMMU, you need two graphics processors (GPUs):

1. One GPU for your Linux host (the OS you are currently running, I hope);

2. One GPU (graphics card) for your Windows guest.

We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest, as needed. Unfortunately the GPU cannot be switched or shared between the two OSes, at least not in an easy way. (There are ways to reset the graphics card as well as the X server in Linux so you could get away with one graphics card, but I personally believe it’s not ideal. See for example here and here for more on that.)

If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, you’ll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP). (You can also create a Linux VM with GPU passthrough if you need Linux for gaming or graphics intensive applications.)

UEFI support in the GPU used with Windows

In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI – most newer cards do. You can check here if your video card and BIOS support UEFI. If you run Windows, download and run GPU-Z and see if there is a check mark next to UEFI. (For more information, see here.)

Nvidia GTX 970
GPU-Z with graphics card details

There are several advantages to UEFI: it starts faster and overcomes some issues associated with legacy (Seabios) boot.

If you plan to use the Intel IGD (integrated graphics device) for your Linux host, UEFI boot is the way to go. UEFI overcomes the VGA arbitration problem associated with the IGD and the use of the legacy Seabios.
If, for some reason, you cannot boot the VM using UEFI, and you want to use the Intel IGD for the host, you need to compile the i915 VGA arbiter patch into the kernel. Before you do, check the note below. For more on VGA arbitration, see here. For the i915 VGA arbiter patch, look here or under Part 15 – References.

Note: If your GPU does NOT support UEFI, there is still hope. You might be able to find a UEFI BIOS for your card at TechPowerUp Video BIOS Collection. A Youtube blogger calling himself Spaceinvader has produced a very helpful video on using a VBIOS.

If there is no UEFI video BIOS for your Windows graphics card, you will have to look for a tutorial using the Seabios method. It’s not much different from this tutorial, but there are some things to consider.

Laptop users with Nvidia Optimus technology: Misairu_G (username) published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology – see GUIDE to VGA passthrough on Nvidia Optimus laptops. (For reference, here are some older posts on the subject: https://forums.linuxmint.com/viewtopic.php?f=231&t=212692&p=1300764#p1300634.)

Part 2 – Installing Qemu / KVM

The Qemu release shipped with Linux Mint 19 is version 2.11 and supports the latest KVM features.

In order to have Linux Mint “remember” the installed packages, use the Software Manager to install the following packages:

qemu-kvm
qemu-utils
seabios
ovmf
hugepages
cpu-checker

Linux Mint
Software Manager

For AMD Ryzen, see also here (note that Linux Mint 19/Ubuntu 18.04 only require the BIOS update).

Alternatively, use
sudo apt install qemu-kvm qemu-utils seabios ovmf hugepages cpu-checker
to install the required packages.
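To verify right away that KVM acceleration is available, run kvm-ok (it comes with the cpu-checker package installed above):

kvm-ok

The expected output is:
INFO: /dev/kvm exists
KVM acceleration can be used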

Part 3 – Determining the Devices to Pass Through to Windows

We need to find the PCI ID(s) of the graphics card and perhaps other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use 2 graphics cards. Here is my hardware setup:

  • GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
  • GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.

To determine the PCI bus number and PCI IDs, enter:
lspci | grep VGA

Here is the output on my system:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)

The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.

Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:
lspci -nn | grep 02:00.

Substitute “02:00.” with the bus number of the graphics card you wish to pass to Windows, dropping the trailing function digit (the “0” after the period). Here is the output on my computer:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)

Write down the bus numbers (02:00.0 and 02:00.1 above), as well as the PCI IDs (10de:13c2 and 10de:0fbb in the example above).

Now check to see that the graphics card resides within its own IOMMU group:
find /sys/kernel/iommu_groups/ -type l

For a sorted list, use:
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

Look for the bus number of the graphics card you want to pass through. Here is the (shortened) output on my system:

/sys/kernel/iommu_groups/19/devices/0000:00:1f.3
/sys/kernel/iommu_groups/20/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:01:00.1
/sys/kernel/iommu_groups/21/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:02:00.1
/sys/kernel/iommu_groups/22/devices/0000:05:00.0
/sys/kernel/iommu_groups/22/devices/0000:06:04.0

Make sure the GPU and perhaps other PCI devices you wish to pass through reside within their own IOMMU group. In my case the graphics card and its audio controller designated for passthrough both reside in IOMMU group 21. No other PCI devices reside in this group, so all is well.

If your VGA card shares an IOMMU group with other PCI devices, see IOMMU GROUP CONTAINS ADDITIONAL DEVICES for a solution!
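If you prefer a listing that shows the device names next to the group numbers, the following helper script (a convenience only, not required for the tutorial) combines the IOMMU group listing with lspci output:

#!/bin/bash
# print each IOMMU group together with a description of its devices
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
echo "IOMMU group ${g##*/}:"
for d in "$g"/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done
done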

The next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run 2 independent operating systems side by side, and we control them via mouse and keyboard.


About keyboard and mouse

Depending whether and how much control you want to have over each system, there are different approaches:

1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse as well as a VGA or (the more expensive) DVI or HDMI graphics outputs. In addition the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is a viable solution. See also my Virtualization Hardware Accessories post.
Advantages:
– Works without special software in the OS, just the usual mouse and keyboard drivers;
– Best in performance – no software overhead.
Disadvantages:
– Requires extra (though inexpensive) hardware;
– More cable clutter and another box with cables on your desk;
– Requires you to press a button to switch between host and guest and vice versa;
– Need to pass through a USB port or controller – see below on IOMMU groups.

2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
– Easy to implement;
– No money to invest;
– Good solution for setting up Windows.
Disadvantages:
– Once the guest starts, your mouse and keyboard only control that guest, not the host. You will have to plug them into another USB port to gain access to the host.

3. Synergy (http://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
– Most versatile solution, especially with dual screens;
– Software only, easy to configure;
– No hardware purchase required.
Disadvantages:
– Requires the installation of software on both the host and the guest;
– Doesn’t work during Windows installation (see option 2);
– Costs $10 for a Basic, lifetime license;
– May produce lag, although I doubt you’ll notice unless there is something wrong with the bridge configuration.

4. “Multi-device” bluetooth keyboard and mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
– Most convenient solution;
– Same performance as option 1.
Disadvantages:
– Price.
– Make sure the device supports Linux, or that you can return it if it doesn’t!

I first went with option 1 for robustness and universality, but have replaced it with option 4. I’m now using a Logitech MX master BT mouse and a Logitech K780 BT keyboard. See here for how to pair these devices to the USB dongles.

Both options 1 and 4 usually require passing a USB PCI device through to the Windows guest. I needed both USB 2 and USB 3 ports in my Windows VM, and I was able to pass through two USB controllers to my Windows guest using PCI passthrough.


For the VM installation we choose option 2 (see above), that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:
lsusb

Here is my system output (truncated):

Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600

Note down the IDs: 045e:076c and 045e:0750 in my case.

Part 4 – Prepare for Passthrough

We are assigning the dummy driver vfio-pci to the graphics card we want to use under Windows. To do that, we first have to prevent the default driver from binding to the graphics card.

Once more edit the /etc/default/grub file:
xed admin:///etc/default/grub

The entry we are looking for is:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

We need to add one of the following options to this line, depending on your hardware:

Nvidia graphics card for Windows VM:
modprobe.blacklist=nouveau

Note: This blacklists the Nouveau driver and prevents it from loading. If you run two Nvidia cards and use the open Nouveau driver for your Linux host, DON’T blacklist the driver!!! Chances are the tutorial will work since the vfio-pci driver should grab the graphics card before nouveau takes control of it. The same goes for AMD cards (see below).

On some platforms, the above modprobe.blacklist=nouveau option won’t be enough. A service called nvidia-fallback.service may be running and needs to be disabled. To check for this service, use:
systemctl list-units --type service | grep -i nvidia-fallback

If the nvidia-fallback service shows up, disable it via:
sudo systemctl disable nvidia-fallback.service
(Thanks Woody!)

AMD graphics card for Windows VM:
modprobe.blacklist=radeon
or
modprobe.blacklist=amdgpu
depending on which driver loads when you boot.

After editing, my /etc/default/grub file now looks like this:
GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=10
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""

Finally run update-grub:
sudo update-grub

(There are more ways to blacklist driver modules, for example using Kernel Mode Settings. For more on Kernel Mode Setting, see https://wiki.archlinux.org/index.php/kernel_mode_setting.)

In order to make the graphics card available to the Windows VM, we will assign a “dummy” driver as a placeholder: vfio-pci.

Note: If you have two identical graphics cards for both the host and the VM, the method below won’t work. In that case see the following posts: https://forums.linuxmint.com/viewtopic.php?f=231&t=212692&start=40#p1174032 as well as https://forums.linuxmint.com/viewtopic.php?f=231&t=212692&start=40#p1173262.

Open or create /etc/modprobe.d/local.conf:
xed admin:///etc/modprobe.d/local.conf
and insert the following:
options vfio-pci ids=10de:13c2,10de:0fbb
where 10de:13c2 and 10de:0fbb are the PCI IDs for your graphics card’s VGA and audio parts.
Save the file and exit the editor.

Some applications like Passmark require the following option (enter the command below in a root terminal, e.g. after sudo -i):
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf

To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:
xed admin:///etc/initramfs-tools/modules

At the end of the file add in the order listed below:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net

Save and close the file.

We need to update the initramfs. Enter at the command line:
sudo update-initramfs -u
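To confirm that the vfio modules actually made it into the rebuilt initramfs, you can list its contents (the initrd path below is the Linux Mint / Ubuntu default; adjust it if your kernel version differs):

lsinitramfs /boot/initrd.img-$(uname -r) | grep vfio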

Part 5 – Network Settings

For performance reasons it is best to create a virtual network bridge that connects the VM with the host. In a separate post I have written a detailed tutorial on how to set up a bridge using Network Manager.

Note: Bridging only works for wired networks. If your PC is connected to a router via a wireless link (Wi-Fi), you won’t be able to use a bridge. The easiest way to get networking inside the Windows VM in that case is to not configure any network at all: delete the network configuration (the -netdev and -device virtio-net-pci lines) from the qemu command (script), and qemu will fall back to its default user-mode networking. If you still want to use a bridged network, there are workarounds such as routing or ebtables (see https://wiki.debian.org/BridgeNetworkConnections#Bridging_with_a_wireless_NIC).

Once you’ve set up the network, reboot the computer and test your network configuration – open your browser and see if you have Internet access.
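You can also verify the bridge from a terminal. Assuming your bridge is named br0 (substitute the name you chose when setting it up):

ip link show type bridge
ip addr show br0

The first command lists all bridge interfaces, the second shows whether the bridge has received an IP address.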

Part 6 – Setting up Hugepages

Moved to Part 18 – Performance Tuning. This is a performance tuning measure and not required to run Windows on Linux. See Configure Hugepages under Part 18 – Performance Tuning.

Part 7 – Download the VIRTIO Drivers

Download the virtio driver ISO to be used with the Windows installation from https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/index.html. Below are the direct links to the ISO images:

Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso

Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

I chose the latest driver ISO.

Part 8 – Prepare Windows VM Storage Space

We need some storage space on which to install the Windows VM. There are several choices:

  1. Create a raw image file.
    Advantages:
    – Easy to implement;
    – Flexible – the file can grow with your requirements;
    – Snapshots;
    – Easy migration;
    – Good performance.
    Disadvantages:
    – Takes up the entire space you specify.
  2. Create a dedicated LVM volume.
    Advantages:
    – Familiar technology (at least to me);
    – Excellent performance, like bare-metal;
    – Flexible – you can add physical drives to increase the volume size;
    – Snapshots;
    – Mountable within Linux host using kpartx.
    Disadvantages:
    – Takes up the entire space specified;
    – Migration isn’t that easy.
  3. Pass through a PCI SATA controller / disk.
    Advantages:
    – Excellent performance, using original Windows disk drivers;
    – Allows the use of Windows virtual drive features;
    – Can use an existing bare-metal installation of Windows in a VM;
    – Possibility to boot Windows directly, i.e. not as VM;
    – Possible to add more drives.
    Disadvantages:
    – The PC needs at least two discrete SATA controllers;
    – Host has no access to disk while VM is running;
    – Requires a dedicated SATA controller and drive(s);
    – SATA controller must have its own IOMMU group;
    – Possible conflicts in Windows between bare-metal and VM operation.

For further information on these and other image options, see here: https://en.wikibooks.org/wiki/QEMU/Images

Although I’m using an LVM volume, I suggest you start with the raw image. Let’s create a raw disk image:

qemu-img create -f raw -o preallocation=full /media/user/win.img 100G

for best performance, or simply:

fallocate -l 100G /media/user/win.img

Note: Adjust the size (100G) and the path to match your needs and resources.
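For reference, should you later switch to an LVM volume like I did, creating one is equally short. This is only a sketch; vg0 stands for your volume group name:

sudo lvcreate -L 100G -n win10 vg0

In the qemu script you would then point the -drive option at /dev/vg0/win10 (or the equivalent /dev/mapper/vg0-win10 path) instead of the image file.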

Part 9 – Check Configuration

It’s best to check that we got everything:

KVM: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

KVM module: lsmod | grep kvm
kvm_intel 200704 0
kvm 593920 1 kvm_intel
irqbypass 16384 2 kvm,vfio_pci

Above is the output for the Intel module.

VFIO: lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 kvm,vfio_pci
vfio_iommu_type1 24576 0
vfio 32768 2 vfio_iommu_type1,vfio_pci

QEMU: qemu-system-x86_64 --version
You need QEMU emulator version 2.5.0 or newer. On Linux Mint 19 / Ubuntu 18.04 the QEMU version is 2.11.

Did vfio load and bind to the graphics card?
lspci -kn | grep -A 2 02:00
where 02:00 is the bus number of the graphics card to pass to Windows. Here is the output on my PC:
02:00.0 0300: 10de:13c2 (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
02:00.1 0403: 10de:0fbb (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci

Kernel driver in use is vfio-pci. It worked!

Interrupt remapping: dmesg | grep VFIO
[ 3.288843] VFIO - User Level meta-driver version: 0.3
All good!

If you get this message:
vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
In this case you need to reboot once more.
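Before resorting to the allow_unsafe_interrupts workaround, you can check whether interrupt remapping is enabled on your platform (the exact message wording differs between kernel versions):

dmesg | grep -i remapping

On a working setup you should see a line such as “AMD-Vi: Interrupt remapping enabled” or “DMAR-IR: Enabled IRQ remapping in x2apic mode”.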

Part 10 – Create Script to Start Windows

To create and start the Windows VM, copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):


#!/bin/bash

vmname="windows10vm"

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
echo "A passthrough VM is already running."
exit 1

else

# use pulseaudio
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SAMPLES=8192
export QEMU_AUDIO_TIMER_PERIOD=99
export QEMU_PA_SERVER=/run/user/1000/pulse/native

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

qemu-system-x86_64 \
-name $vmname,process=$vmname \
-machine type=q35,accel=kvm \
-cpu host,kvm=off \
-smp 4,sockets=1,cores=2,threads=2 \
-m 8G \
-balloon none \
-rtc clock=host,base=localtime \
-vga none \
-nographic \
-serial none \
-parallel none \
-soundhw hda \
-usb \
-device usb-host,vendorid=0x045e,productid=0x076c \
-device usb-host,vendorid=0x045e,productid=0x0750 \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=dc \
-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0
fi


Make the file executable:
sudo chmod +x windows10vm.sh

You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations of the qemu-system-x86_64 options:

-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and is used in the script to determine if the VM is already running. Don’t use win10 as the process name; for some inexplicable reason it doesn’t work!

-machine type=q35,accel=kvm
This specifies a machine to emulate. The accel=kvm option tells qemu to use the KVM acceleration – without it the Windows guest will run in qemu emulation mode, that is it’ll run real slow.
I have chosen the type=q35 option, as it improved my SSD read and write speeds. See https://wiki.archlinux.org/index.php/QEMU#Virtual_machine_runs_too_slowly. In some cases type=q35 will prevent you from installing Windows, instead you may need to use type=pc,accel=kvm. See the post here. To see all options for type=…, enter the following command:
qemu-system-x86_64 -machine help
Important: Several users passing through Radeon RX 480 and Radeon RX 470 cards have reported reboot loops after updating and installing the Radeon drivers. If you pass through a Radeon graphics card, it is better to replace the -machine line in the startup script with the following line:
-machine type=pc,accel=kvm
to use the default i440fx emulation.
Note for IGD users: If you have an Intel CPU with internal graphics (IGD), and want to use the Intel IGD for Windows, there is a new option to enable passthrough:
igd-passthru=on|off controls IGD GFX passthrough support (default=off).
In most cases you will want to use a discrete graphics card with Windows.

-cpu host,kvm=off
This tells qemu to emulate the host’s exact CPU. There are more options, but it’s best to stay with host.
The kvm=off option is only needed for Nvidia graphics cards – if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify -cpu host.

-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. -smp 4 tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. It’s probably best not to assign all CPU resources to the Windows VM – the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources in the host). In the above example I gave Windows 4 virtual processors. sockets=1 specifies the number of actual CPU sockets qemu should assign, cores=2 tells qemu to assign 2 processor cores to the VM, and threads=2 specifies 2 threads per core. It may be enough to simply specify -smp 4, but I’m not sure about the performance consequences (if any).
If you have a 4-core Intel CPU with hyper-threading, you can specify -smp 6,sockets=1,cores=3,threads=2 to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games and applications.
Note: If your CPU doesn’t support hyper-threading, specify threads=1.

-m 8G
The -m option assigns memory (RAM) to the VM, in this case 8 GByte. Same as -m 8192. You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn’t make sense to give it less than 4G, unless you are really stretched with RAM. If you use hugepages, make sure your hugepage size matches this!

-mem-path /dev/hugepages
This tells qemu where to find the hugepages we reserved. If you haven’t configured hugepages, you need to remove this option.

-mem-prealloc
Preallocates the memory we assigned to the VM.

-balloon none
We don’t want memory ballooning (as far as I know Windows won’t support it anyway).

-rtc clock=host,base=localtime
-rtc clock=host tells qemu to use the host clock for synchronization. base=localtime allows the Windows guest to use the local time from the host system. Another option is utc.

-vga none
Disables the built-in graphics card emulation. You can remove this option for debugging.

-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don’t get to the Tiano Core screen.

-serial none
-parallel none
Disable serial and parallel interfaces. Who needs them anyway?

-soundhw hda
Together with the export QEMU_AUDIO_DRV=pa shell command, this option enables sound through PulseAudio.
If you want to pass through a physical audio card or audio device and stream audio from your Linux host to your Windows guest, see here: Streaming Audio from Linux to Windows.

-usb
-device usb-host,vendorid=0x045e,productid=0x076c
-device usb-host,vendorid=0x045e,productid=0x0750
-usb enables USB support and -device usb-host… assigns the USB host devices mouse (045e:076c) and keyboard (045e:0750) to the guest. Replace the device IDs with the ones you found using the lsusb command in Part 3 above!
Note the new syntax. There are also many more options that you can find here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.
There are three options to assign host devices to guests. Here is the syntax:
-usb \
-device usb-kbd \
-device usb-mouse \
passes through the keyboard and mouse to the VM. When using this option, remove the -vga none and -nographic options from the script to enable switching back and forth between Windows VM and Linux host using CTRL+ALT.
-usb \
-device usb-host,hostbus=bus,hostaddr=addr \

passes through the host device identified by bus and addr.
-usb \
-device usb-host,vendorid=vendor,productid=product \

passes through the host device identified by vendor and product ID.

-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
Here we specify the graphics card to pass through to the guest, using vfio-pci. Fill in the PCI bus numbers you found under Part 3 above. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.0 and 02:00.1 in my case).

-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn’t contain the variables, which are loaded separately (see right below).

-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.

-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the “d” to boot straight from disk.

-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. With the options above it will be accessed as a paravirtualized (if=virtio) drive in raw format (format=raw).
Important: file=/… enter the path to your previously created win.img file.
Other possible drive options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for entire disks or partitions.
For some basic -drive options, see my post here. For the new Qemu syntax and drive performance tuning, see Tuning VM Disk Performance.

-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
This attaches the Windows win10.iso as CD or DVD. The driver used is the ide-cd driver.
Important: file=/… enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
This attaches the virtio ISO image as CD. Note the different index.
Important: file=/… enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note 1: There are many ways to attach ISO images or drives and invoke drivers. My system didn’t want to take a second scsi-cd device, so this option did the job. Unless this doesn’t work for you, don’t change it.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network interface and network driver. It’s best to define a MAC address, here 00:16:3e:00:01:01. The MAC is specified in Hex and you can change the last :01:01 to your liking. Make sure no two MAC addresses are the same!
vhost=on is optional – some people reported problems with this option. It is for network performance improvement.
For more information: https://wiki.archlinux.org/index.php/QEMU#Network and https://wiki.archlinux.org/index.php/QEMU#Networking.
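If you clone the script for additional VMs, each one needs a unique MAC address. Here is one way (a bash one-liner, offered as a convenience) to generate a random MAC with the same 00:16:3e prefix used above:

printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))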

Important: Documentation on the installed QEMU can be found here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.

The latest stable version is 2.11. For additional documentation on QEMU, see https://www.qemu.org/documentation/. Some configuration examples can be found in the following directory:
/usr/share/doc/qemu-system-common/config

Part 11 – Install Windows

Start the VM by running the script as root:
sudo ./windows10vm.sh
(Make sure you specify the correct path.)

You should get a Tiano Core splash screen with the memory test result.

You might land in an EFI shell. Type exit and you should get a menu. Select the boot disk and hit Enter.

Then the Windows ISO boots and asks you to:
Press any key to start the CD / DVD…

Press a key!

Windows will then ask you to:
Select the driver to install

Click “Browse”, then select your virtio ISO image and go to “viostor”, open it and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64-bit systems and click OK. (Note: Instead of the viostor driver, you can also install the vioscsi driver. See the qemu documentation for the proper syntax in the qemu command.)

Windows will ask for the license key, and you need to specify how to install – choose “Custom”. Then select your drive (there should be only disk0) and install.

Windows may reboot several times. When done rebooting, open Device Manager and select the network interface. Right-click it and select “Update driver”. Then browse to the virtio disk and install NetKVM.

Windows should be looking for a display driver by itself. If not, install it manually.

Note: In my case, Windows did not correctly detect that my drives are SSDs. Not only will Windows 10 perform unnecessary disk optimization tasks, but these “optimizations” can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:

1. Inside Windows 10, right-click the Start menu.
2. Select “Command prompt (admin)”.
3. At the command prompt, run:
winsat formal
4. It will run a while and then print the Windows Experience Index (WEI).
5. Please share your WEI in a comment below!

To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click “This PC” in the left tab.
3. Right-click your drive (e.g. C:) and select “Properties”.
4. Select the “Tools” tab.
5. Click “Optimize”
You should see something similar to this:

SSD optimization
Use Optimize Drives to optimize for SSD

In my case, I have drive C: (my Windows 10 system partition) and a “Recovery” partition located on an SSD, the other two partitions (“photos” and “raw_photos”) are using regular hard drives (HDD). Notice the “Optimization not available” 😀 .

Turn off hibernation and suspend! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn off hibernation and suspend, follow the instructions for hibernation and suspend.

Turn off fast startup! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you’re screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.

By now you should have a working Windows VM with VGA passthrough.

Part 12 – Troubleshooting

Below are a number of common issues when trying to install/run Windows in a VGA passthrough environment.

VM not starting – graphics driver

A common issue is the binding of a driver to the graphics card we want to pass through. As I was writing this how-to and made changes to my (previously working) system, I suddenly couldn’t start the VM anymore. The first thing to check if you don’t get to the Tiano Core boot screen is whether or not the graphics card you are trying to pass through is bound to the vfio-pci driver:
dmesg | grep -i vfio
The output should be similar to this:
[ 2.735931] VFIO - User Level meta-driver version: 0.3
[ 2.757208] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.773223] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.437128] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)

The above example shows that the graphics card is bound to the vfio-pci driver (see last line), which is what we want. If the command doesn’t produce any output, or a very different one from above, something is wrong. To check further, enter:
lspci -k | grep -i -A 3 vga
Here is what I got when my VM wouldn’t want to start anymore:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
Subsystem: NVIDIA Corporation GF106GL [Quadro 2000]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

Graphics card 01:00.0 (Quadro 2000) uses the Nvidia driver – just what I want.

Graphics card 02:00.0 (GTX 970) also uses the Nvidia driver – that is NOT what I was hoping for. This card should be bound to the vfio-pci driver. So what do we do?

In Linux Mint, click the menu button, click “Control Center”, then click “Driver Manager” in the Administration section. Enter your password. You will then see the drivers associated with the graphics cards. Change the driver of the graphics card so it will use the open-source driver (in this example “Nouveau”) and press “Apply Changes”. After the change, it should look similar to the photo below:

Driver Manager
Driver Manager in Linux Mint

If above doesn’t help, or if you can’t get rid of the nouveau driver, see if the nvidia-fallback.service is running. If yes, it will load the open-source nouveau driver whenever it can’t find the Nvidia proprietary driver. You need to disable it by running the following command as sudo:

systemctl disable nvidia-fallback.service

BSOD when installing AMD Crimson drivers under Windows

Several users on the Redhat VFIO mailing list have reported problems with the installation of AMD Crimson drivers under Windows. This seems to affect a number of AMD graphics cards, as well as a number of different AMD Crimson driver releases. A workaround is described here: https://www.redhat.com/archives/vfio-users/2016-April/msg00153.html

In this workaround the following line is added to the startup script, right above the definition of the graphics device:
-device ioh3420,bus=pci,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Should the above configuration give you a “Bus ‘pci’ not found” error, change the line as follows:
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Then you change the graphics card passthrough options as follows:
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \

Identical graphics cards for host and guest

If you use two identical graphics cards for both the Linux host and the Windows guest, follow these instructions:

Modify the /etc/modprobe.d/local.conf file as follows:

install vfio-pci /sbin/vfio-pci-override-vga.sh

Create a /sbin/vfio-pci-override-vga.sh file with the following content:

#!/bin/sh
# PCI bus addresses of the devices to bind to vfio-pci (video and audio functions)
DEVS="0000:02:00.0 0000:02:00.1"

for DEV in $DEVS; do
# override the default driver so vfio-pci claims the device
echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

Make the vfio-pci-override-vga.sh file executable:

chmod u+x /sbin/vfio-pci-override-vga.sh

Windows ISO won’t boot – 1

If you can’t start the Windows ISO, it may be necessary to run a more recent version of Qemu to get features or work-arounds that solve problems. If you require a more updated version of Qemu (version 2.12 as of this update), add the following PPA (warning: this is not an official repository – use at your own risk). At the terminal prompt, enter:
sudo add-apt-repository ppa:jacob/virtualisation

Windows ISO won’t boot – 2

Sometimes the OVMF BIOS files from the official Ubuntu repository don’t work with your hardware and the VM won’t boot. In that case you can download alternative OVMF files from here: http://www.ubuntuupdates.org/package/core/wily/multiverse/base/ovmf, or get the most updated version from here:
https://www.kraxel.org/repos/jenkins/edk2/
Download the latest edk2.git-ovmf-x64 file; as of this update it is “edk2.git-ovmf-x64-0-20180807.221.g1aa9314e3a.noarch.rpm” for a 64-bit installation. Open the downloaded .rpm file with root privileges and unpack it to /.
Copy the following files:
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd /usr/share/OVMF/OVMF_CODE.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /usr/share/OVMF/OVMF_VARS.fd

Check that the /usr/share/qemu/OVMF.fd link exists; if not, create it:
sudo ln -s '/usr/share/ovmf/OVMF.fd' '/usr/share/qemu/OVMF.fd'

Windows ISO won’t boot – 3

Sometimes the Windows ISO image is corrupted or simply an old version that doesn’t work with passthrough. Go to https://www.microsoft.com/en-us/software-download/windows10ISO and download the ISO you need (see your software license). Then try again.
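To rule out a corrupted download before reinstalling, compare the checksum of your ISO with the hash published by Microsoft on the download page (the path below is the example path used earlier in this tutorial; adjust it to your location):

sha256sum /home/user/ISOs/win10.iso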

Motherboard BIOS bugs

Some motherboard BIOSes have bugs and prevent passthrough. Use “dmesg” and look for entries like these:
[ 0.297481] [Firmware Bug]: AMD-Vi: IOAPIC[7] not in IVRS table
[ 0.297485] [Firmware Bug]: AMD-Vi: IOAPIC[8] not in IVRS table
[ 0.297487] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found in IVRS table
[ 0.297490] AMD-Vi: Disabling interrupt remapping due to BIOS Bug(s)
If you find entries that point to a faulty BIOS or problems with interrupt remapping, go to Easy solution to get IOMMU working on mobos with broken BIOSes. (All credits go to leonmaxx on the Ubuntu forum!)

Intel IGD and arbitration bug

For users of Intel CPUs with IGD (Intel graphics device): The Intel i915 driver has a bug, which has necessitated a kernel patch named i915 vga arbiter patch. According to developer Alex Williamson, this patch is needed any time you have host Intel graphics and make use of the x-vga=on option. This tutorial, however, does NOT use the x-vga option; the tutorial is based on UEFI boot and doesn’t use VGA. That means you do NOT need the i915 vga arbiter patch! See http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html.

In some cases you may need to stop the i915 driver from loading by adding nomodeset to the following line in /etc/default/grub as shown:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset intel_iommu=on"

Then run:
sudo update-grub

nomodeset prevents the kernel from loading video drivers and tells it to use BIOS modes instead, until X is loaded.

IOMMU group contains additional devices

When checking the IOMMU groups, your graphics card’s video and audio parts should be the only 2 entries under the respective IOMMU group. The same goes for any other PCI device you want to pass through, as you must pass through all devices within an IOMMU group, or none. If – aside from the PCI device(s) you wish to pass to the guest – there are other devices within the same IOMMU group, see What if there are other devices in my IOMMU group for a solution.

For more information on IOMMU groups, see here.

Dual-graphics laptops (e.g. Optimus technology)

A quote from developer Alex Williamson:

OVMF is the way to go if you want to avoid patching your kernel, … if your GPU and guest OS support UEFI.

Dual-graphics laptops are tricky. There are no guarantees that any of this will work, but especially custom graphics cards on laptops. The discrete GPU may not be directly connected to any of the outputs, so “remoting” the graphics internally may be the only way to get to the guest desktop. It’s possible that the GPU does not have a discrete ROM, instead hiding it in ACPI or elsewhere to be extracted. Some hybrid graphics laptops require custom drivers from the vendor. The more integration it has into the system, probably the less likely that it will behave like a discrete desktop GPU.

That said, there is light on the horizon. Misairu_G (username on forum) published a Guide to VGA passthrough on Optimus laptops. You may want to consult that guide if you use a laptop with Nvidia graphics.

User bash64 on the Linux Mint forum has reported success with only minor modifications to this tutorial. The main deviations were:

  1. The nomodeset option in the /etc/default/grub file (see “Intel IGD and arbitration bug” above)
  2. Seabios instead of UEFI / ovmf
  3. Minor modifications to the qemu script file

Issues with Skylake CPUs

Another issue has come up with Intel Skylake CPUs. This problem is likely solved by now. Update to a recent kernel (e.g. 4.15), as described above.

In case the kernel upgrade doesn’t solve the issue, see https://lkml.org/lkml/2016/3/31/1112 for an available patch. Another possible solution can be found here: https://teksyndicate.com/2015/09/13/wendells-skylake-pc-build-i7-6700k/.

If you haven’t found a solution to your problem, check the references in part 15. You are also welcome to leave a comment and I or someone else may come to the rescue.

Part 13 – Run Windows VM in user mode (non-root)

Running your Windows VM in user mode (non-root) has become easy.

  1. Add your user to the kvm group:
    sudo usermod -a -G kvm myusername
    Note: Always replace “myusername” with your user name.
  2. Reboot (or logout and login) to see your user in the kvm group.
  3. If you use hugepages, make sure they are properly configured. Open your /etc/fstab file and compare your hugetlbfs configuration with the following:
    hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0
    Note that the gid=129 might be different for your system. The rest should be identical! Now enter the following command:
    getent group kvm
    This should return something like:
    kvm:x:129:myusername
    The group ID (gid) number matches. If not, edit the fstab file to have the gid= entry match the gid number you got using getent.
  4. Edit the Windows VM start script and add the following line below the entry “cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd“:
    chown myusername:kvm /tmp/my_vars.fd
    Next add the following entry right under the qemu-system-x86_64 \ entry, on a separate line:
    -runas myusername \
    Save the file and start your Windows VM. You will still need sudo to run the script, since it performs some privileged tasks, but the guest will run in user mode with your user privileges.
  5. After booting into Windows, switch to the Linux host and run in a terminal:
    top
    Your output should be similar to this:

    kvm Windows VM
    top shows Windows 10 VM running with user privileges

Notice the win10vm entry associated with my user name “heiko” instead of “root”.

Please report if you encounter problems (use comment section below).

For other references, see the following tutorial: https://www.evonide.com/non-root-gpu-passthrough-setup/.

Part 14 – Passing more PCI devices to guest

If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. Moreover, DO NOT PASS root devices to your guest. To check which PCI devices reside under the same group, use the following command:
find /sys/kernel/iommu_groups/ -type l
The output on my system is:
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.2
/sys/kernel/iommu_groups/4/devices/0000:00:05.4
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/7/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.1
/sys/kernel/iommu_groups/11/devices/0000:00:1c.2
/sys/kernel/iommu_groups/12/devices/0000:00:1c.3
/sys/kernel/iommu_groups/13/devices/0000:00:1c.4
/sys/kernel/iommu_groups/14/devices/0000:00:1c.7
/sys/kernel/iommu_groups/15/devices/0000:00:1d.0
/sys/kernel/iommu_groups/16/devices/0000:00:1e.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.2
/sys/kernel/iommu_groups/17/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:01:00.0
/sys/kernel/iommu_groups/18/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/19/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:05:00.0
/sys/kernel/iommu_groups/20/devices/0000:06:04.0

As you can see in the above list, some IOMMU groups contain multiple devices on the PCI bus. I wanted to see which devices are in IOMMU group 17 and used the PCI bus ID:
lspci -nn | grep 00:1f.
Here is what I got:
00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)

All of the listed devices are used by my Linux host:
– The ISA bridge is a standard device used by the host. You do not pass it through to a guest!
– All my drives are controlled by the host, so passing through a SATA controller would be a very bad idea!
– Do NOT pass through a host controller, such as the C600/X79 series chipset SMBus Host Controller!

In order to pass through individual PCI devices, edit the VM startup script and insert the following code underneath the vmname=… line:
configfile=/etc/vfio-pci.cfg

vfiobind() {
dev="$1"
# look up the vendor and device IDs of this PCI device
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
# unbind the device from its current driver, if one is attached
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
# register the ID pair with vfio-pci so the device binds to it
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

Underneath the line containing “else”, insert:

cat $configfile | while read line;do
# skip comment lines starting with #
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

You need to create a vfio-pci.cfg file in /etc containing the PCI bus numbers as follows:
0000:00:1a.0
0000:08:00.0
Make sure the file does NOT contain any blank line(s). Replace the PCI bus numbers with the ones you found!

Part 15 – References

For documentation on qemu/kvm, see the following directory on your Linux machine: /usr/share/doc/qemu-system-common

https://passthroughpo.st/ – a new “online news publication with a razor focus on virtualization and linux gaming, as well as developments in open source technology”

https://www.reddit.com/r/VFIO/ – the Reddit r/VFIO subreddit to discuss all things related to VFIO and gaming on virtual machines in general

https://davidyat.es/2016/09/08/gpu-passthrough/ – a well written tutorial offering qemu script and virt-manager as options

http://mathiashueber.com/amd-ryzen-based-passthrough-setup-between-xubuntu-16-04-and-windows-10/ – nice and detailed tutorial for a Ryzen based system using the Virtual Machine Manager GUI

https://ycnrg.org/vga-passthrough-with-ovmf-vfio/ – a Ubuntu 16.04 tutorial using virt-manager.

https://qemu.weilnetz.de/doc/qemu-doc.html – QEMU user manual

https://bbs.archlinux.org/viewtopic.php?id=162768 – this gave me the inspiration – the best thread on kvm!

http://ubuntuforums.org/showthread.php?t=2266916 – Ubuntu tutorial.

https://wiki.archlinux.org/index.php/QEMU – Arch Linux documentation on QEMU – by far the best.

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF – PCI passthrough via OVMF tutorial for Arch Linux – provides excellent information.

https://aur.archlinux.org/cgit/aur.git/tree/?h=linux-vfio – a source for the ACS and i915 arbiter patches.

http://vfio.blogspot.com/2014/08/vfiovga-faq.html – one of the developers, Alex provides invaluable information and advice.

http://vfio.blogspot.com/2014/08/primary-graphics-assignment-without-vga.html

http://www.linux-kvm.org/page/Tuning_KVM – Redhat is the key developer of kvm, their website has lots of information.

https://wiki.archlinux.org/index.php/KVM – Arch Linux KVM page.

https://www.suse.com/documentation/sles11/book_kvm/data/part_2_book_book_kvm.html – Suse Linux documentation on KVM – good reference.

https://www.evonide.com/non-root-gpu-passthrough-setup/ – haven’t tried it, but looks like a good tutorial.

https://forum.level1techs.com/t/gta-v-on-linux-skylake-build-hardware-vm-passthrough/87440 – tutorial with Youtube video to go along, very useful and up-to-date, including how to apply ACS override patch.

https://gitlab.com/YuriAlek/vfio – single GPU passthrough with QEMU and VFIO.

https://libvirt.org/format.html and https://libvirt.org/formatdomain.html – if you want to play with virt-manager, you’ll need to dabble in libvirt.

Below is the VM startup script I use, for reference only.
Note: The script is specific for my hardware. Don’t use it without modifying it!

#!/bin/bash

configfile=/etc/vfio-pci.cfg
vmname="win10vm"

vfiobind() {
dev="$1"
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id

}

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
zenity --info --window-icon=info --timeout=15 --text="A passthrough VM is already running." &
exit 1

else

cat $configfile | while read line;do
echo $line | grep ^# >/dev/null 2>&1 && continue
vfiobind $line
done

# use pulseaudio
#export QEMU_AUDIO_DRV=pa
#export QEMU_PA_SAMPLES=8192
#export QEMU_AUDIO_TIMER_PERIOD=100
#export QEMU_PA_SERVER=/run/user/1000/pulse/native
#export QEMU_PA_SINK=alsa_output.pci-0000_06_04.0.analog-stereo
#export QEMU_PA_SOURCE=input

#use ALSA
export QEMU_AUDIO_DRV=alsa
export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
export QEMU_AUDIO_TIMER_PERIOD=50

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
chown heiko:kvm /tmp/my_vars.fd

#taskset -c 0-9
qemu-system-x86_64 \
-runas heiko \
-monitor stdio \
-serial none \
-parallel none \
-nodefaults \
-nodefconfig \
-name $vmname,process=$vmname \
-machine q35,accel=kvm,kernel_irqchip=on \
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
-smp 12,sockets=1,cores=6,threads=2 \
-m 16G \
-mem-path /dev/hugepages \
-mem-prealloc \
-balloon none \
-rtc base=localtime,clock=host \
-vga none \
-nographic \
-soundhw hda \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=00:1a.0 \
-device vfio-pci,host=08:00.0 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=c \
-drive id=disk0,if=virtio,cache=none,format=raw,aio=native,discard=unmap,detect-zeroes=unmap,file=/dev/mapper/lm13-win10 \
-drive id=disk1,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/photos-photo_stripe \
-drive id=disk2,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/media-photo_raw \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

#EOF

#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio-win-0.1.140.iso,index=3,media=cdrom \
#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/Win10_1803_English_x64.iso,index=4,media=cdrom \

#-global ide-drive.physical_block_size=4096 \
#-global ide-drive.logical_block_size=4096 \
#-global virtio-blk-pci.physical_block_size=512 \
#-global virtio-blk-pci.logical_block_size=512 \

exit 0
fi


The command
taskset -c 0-9 qemu-system-x86_64...

pins the vCPUs of the guest to processor threads 0-9 (I have a 6-core CPU with 2 threads per core = 12 threads). Here I assign 10 out of 12 threads to the guest. While the guest is running, the host will have to make do with only 1 core (2 threads). CPU pinning may improve performance of the guest.

Note: I am currently passing through all cores and threads, without CPU pinning. This seems to give me the best results in the benchmarks, as well as real-life performance.
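If you do want to experiment with pinning, here is a minimal sketch based on my topology (threads n and n+6 share a core – see the lscpu output in the CPU pinning section below): reserve core 0 for the host and give the guest the remaining five cores. The thread list and -smp topology below are illustrations only – adjust them to your own CPU:

taskset -c 1-5,7-11 qemu-system-x86_64 \
-smp 10,sockets=1,cores=5,threads=2 \
... (all other qemu options unchanged)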

Part 16 – Related Posts

Here is a list of related posts:

Developments in Virtualization

Virtual Machines on Userbenchmark

qemu-system-x86_64 Drive Options

Part 17 – Benchmarks

I have a separate post showing Passmark benchmarks of my system.

Here are the UserBenchmark results for my configuration:

UserBenchmarks: Game 60%, Desk 71%, Work 64%
CPU: Intel Core i7-3930K – 79.7%
GPU: Nvidia GTX 970 – 60.4%
SSD: Red Hat VirtIO 140GB – 74.6%
HDD: Red Hat VirtIO 2TB – 64.6%
HDD: Red Hat VirtIO 2TB – 66.1%
RAM: QEMU 20GB – 98.2%
MBD: QEMU Standard PC (Q35 + ICH9, 2009)

Part 18 – Performance Tuning

I keep updating this chapter, so expect more tips to be added here in the future.

Enable Hyper-V Enlightenments

As funny as this sounds, this is another way to improve Windows performance under KVM. Hyper-V enlightenments are easy to implement: in the script that starts the VM, change the following line:
-cpu host,kvm=off \

to:
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \

The above is one line! To check that it actually works, start your Windows VM and switch to Linux. Open a terminal window and enter (in one line):
ps -aux | grep qemu | grep -Eo 'hv_relaxed|hv_spinlocks=0x1fff|hv_vapic|hv_time'

You should get the following output:
hv_vapic
hv_time
hv_relaxed
hv_spinlocks=0x1fff

For more on Hyper-V enlightenments, see here.

Configure hugepages

This step is not required to run the Windows VM, but it helps improve performance. First we need to decide how much memory we want to give to Windows. Here are my suggestions:

  1. No less than 4GB. Use 8GB or more for a Windows gaming VM.
  2. If you have 16GB total and aren’t running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.

For this tutorial I use 8GB. Hugepages are enabled by default in the latest releases of Linux Mint (since 18) and Ubuntu (since 16.04). For more information or if you are running an older release, see KVM – Using Hugepages.

Let’s see what we got:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 0 0 0 *
1073741824 0 0 0

As you can see, hugepages are mounted to /dev/hugepages, and the default hugepage size is 2097152 bytes / (1024*1024) = 2MB.

Another way to get information about hugepages:
grep "Huge" /proc/meminfo

AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Here is the math:
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB

Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages

We configure the hugepage pool during boot. Open the file /etc/default/grub as root:
xed admin:///etc/default/grub

Look for the GRUB_CMDLINE_LINUX_DEFAULT=”…” line we edited before and add:
hugepages=4096

This is what I have:
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on hugepages=4096"

Save and close. Then run:
sudo update-grub

Now reboot for our hugepages configuration to take effect.

After the reboot, run in a terminal:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 4096 4096 4096 *
1073741824 0 0 0

Huge page sizes with configured pools:
2097152

The /proc/sys/vm/min_free_kbytes of 67584 is too small. To maximise efficiency of fragmentation avoidance, there should be at least one huge page free per zone in the system which minimally requires a min_free_kbytes value of 112640

A /proc/sys/kernel/shmmax value of 17179869184 bytes may be sub-optimal. To maximise shared memory usage, this should be set to the size of the largest shared memory segment size you want to be able to use. Alternatively, set it to a size matching the maximum possible allocation size of all huge pages. This can be done automatically, using the --set-recommended-shmmax option.

The recommended shmmax for your currently allocated huge pages is 8589934592 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
kernel.shmmax = 8589934592

To make your hugetlb_shm_group settings persistent, add the following line to /etc/sysctl.conf:
vm.hugetlb_shm_group = 129

Note: Permanent swap space should be preferred when dynamic huge page pools are used.

Note the sub-optimal shmmax value. We fix it permanently by editing /etc/sysctl.conf:
xed admin:///etc/sysctl.conf

and adding the following lines:
kernel.shmmax = 8589934592
vm.hugetlb_shm_group = 129
vm.min_free_kbytes = 112640

Note: Use the values recommended by hugeadm --explain!

Regarding vm.hugetlb_shm_group = 129: “129” is the GID of the group kvm. Check with:
getent group kvm
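
The output should look something like this (129 is the GID on my system – yours may differ, as will the member list):
kvm:x:129:heiko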

Run sudo sysctl -p to put the new settings into effect. Then edit the /etc/fstab file to configure the hugepages mount point with permissions and group ID (GID):
xed admin:///etc/fstab

Add the following line to the end of the file and save:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0

It’s best to add your user to the kvm group (GID 129), so you’ll have permissions to access the hugepages:
sudo usermod -a -G kvm user
where “user” is your user name.

Check the results with hugeadm --explain.
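
You can also verify the mount options directly – a quick sanity check; the mode and gid should match the fstab line above:

mount | grep hugetlbfs

This should print something like:
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,gid=129,mode=1770)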

Now we need to edit the windows10vm.sh script file that contains the qemu command and add the following lines under -m 8G \:
-mem-path /dev/hugepages \
-mem-prealloc \

Reboot the PC for the fstab changes to take effect.

Turn on MSI Message Signaled Interrupts in your VM

Developer Alex Williamson argues that Message Signaled Interrupts (MSI) may provide a more efficient way to handle interrupts. A detailed description of how to turn on MSI in a Windows VM can be found here: Line-Based vs. Message Signaled-Based Interrupts.

Make sure to backup your entire Windows installation, or at least define a restore point for Windows.

In my case it improved sound quality (no more crackle); others have reported similar results – see these comments.
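
You can also check from the host side that a passed-through device actually uses MSI. While the VM is running, inspect the device’s capabilities with lspci – here a sketch using my graphics card’s bus address 02:00.0, substitute your own:

sudo lspci -vs 02:00.0 | grep MSI:

A line such as “MSI: Enable+ …” means MSI is active; “Enable-” means the device is still using line-based interrupts.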

Low 2D Graphics Performance

Recent Windows updates have added protection against the Spectre vulnerability, by means of an Intel microcode update. This update has caused a significant drop in 2D graphics performance under Windows.

The reason for the performance drop might be a bug in kvm/qemu, and the work-around described in my separate post should only be used in emergencies, if and when 2D graphics performance is essential to the applications you run.

SR-IOV and IOMMU Pass Through

Some devices support a feature called SR-IOV or Single Root Input/Output Virtualisation. This allows multiple virtual machines to access PCIe hardware using virtual functions (VF), thus improving performance. See Understanding the iommu Linux grub File Configuration.

The SR-IOV feature needs to be enabled in the BIOS, as well as in the drivers. See here for an example.

In some cases performance can be further improved by adding the “pass through” option iommu=pt to the /etc/default/grub file:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Followed by sudo update-grub
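
To illustrate how virtual functions are created – a hedged sketch in which eth0 and the VF count are placeholders, and which assumes your NIC and its driver actually support SR-IOV:

cat /sys/class/net/eth0/device/sriov_totalvfs
echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs
lspci | grep -i "virtual function"

The first command shows how many VFs the device supports, the second creates 4 of them, and the new VFs then appear as additional PCI devices that can be bound to vfio-pci and passed to a VM like any other device.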

CPU pinning

Many modern CPUs offer hardware multitasking, known as “hyper-threading” in Intel jargon or “SMT” in AMD talk. These CPUs run two threads on each core, switching between them in an efficient way. It’s almost like having twice the number of cores – though not quite, since each core can only process one thread at a time. Hyper-threading still helps CPU performance because our PC needs to run multiple tasks simultaneously, and task-switching goes faster with hardware support.

Some tasks, however, can be negatively affected by this switching back and forth. One example is high-speed input-output (IO) tasks. Linux enables us to dedicate (“pin”) a core to such tasks, so that the task won’t have to share CPU resources with other tasks.

To discover the CPU topology on your PC, use the following command:

lscpu -e

The output on my Intel i7 3930K CPU is this:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0 0 0 0 0:0:0:0 yes 5700.0000 1200.0000
1 0 0 1 1:1:1:0 yes 5700.0000 1200.0000
2 0 0 2 2:2:2:0 yes 5700.0000 1200.0000
3 0 0 3 3:3:3:0 yes 5700.0000 1200.0000
4 0 0 4 4:4:4:0 yes 5700.0000 1200.0000
5 0 0 5 5:5:5:0 yes 5700.0000 1200.0000
6 0 0 0 0:0:0:0 yes 5700.0000 1200.0000
7 0 0 1 1:1:1:0 yes 5700.0000 1200.0000
8 0 0 2 2:2:2:0 yes 5700.0000 1200.0000
9 0 0 3 3:3:3:0 yes 5700.0000 1200.0000
10 0 0 4 4:4:4:0 yes 5700.0000 1200.0000
11 0 0 5 5:5:5:0 yes 5700.0000 1200.0000

Note: My CPU is overclocked, hence the high MAXMHZ value.

Note column 4 “CORE”: it shows which physical core each logical CPU (column 1, “CPU”) actually uses. With this Intel processor, CPUs 0 and 6 share core 0.
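
To confirm the pairing without reading the whole table, each logical CPU also exposes its sibling list in sysfs, for example for CPU 0:

cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list

On my system this prints 0,6 – matching the lscpu output above.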

The performance gain (or loss) of CPU pinning depends on your hardware and on what you are doing with the Windows VM. A good benchmark of different tasks can be found here: CPU Pinning Benchmarks.

Some users report that CPU pinning helped improve latency, but sometimes at the cost of performance. Here is another useful post: Best pinning strategy for latency / performance trade-off.

A good explanation on CPU pinning and other performance improvements can be found here: Performance tuning.

On my PC I do NOT use CPU pinning. It is tricky at best, and whatever I tried did not improve but rather reduced performance. Important: The effects of CPU pinning are highly individual and depend on what you want to achieve. For a gaming VM, it might help improve performance (see CPU Pinning Benchmarks).

Help Support this Website

If you find this information helpful, consider a contribution.


53 thoughts on “Running Windows 10 on Linux using KVM with VGA Passthrough”

  1. Hello Mathias,
    I have been traveling a lot lately, so forgive my late reply. In answer to your question: I don’t use a virsh windows10.xml file, just the script and qemu.
    Most other tutorials use libvirt and virt-manager to create and manage the VM, but I had not much luck with it.

    Heiko

  2. Thanks for the write-up! Still struggling through it on a Metabox (Clevo) P950ER

    I have Ubuntu 18.04 and have spent far too much time trying to disable the nouveau driver from loading.

    – kernel parameters tried:
    — modprobe.blacklist=nouveau
    — nouveau.blacklist=1
    – modprobe.d/* lines:
    — blacklist nouveau
    — options nouveau modeset=0
    — install nouveau /bin/true
    –alias nouveau off

    After all that I found there was a “nvidia-fallback.service” which tries to load nouveau if nvidia isn’t.
    One-off command to disable it:

    # systemctl disable nvidia-fallback.service

    cheers,
    Woody

  3. Hi guys,

    I just attempted this cool project with:
    CPU: intel core i5-6600k
    MEM: 16GB DDR4
    MB: AsRock Z170 Pro4
    Vid Primary: intel IGP
    Vid VM: EVGA Geforce GTX 1060
    HD’s: A couple spinning drives kicking around
    OS: Linux Mint 19

    I wasn’t able to get the script fully functional because I want to use an entire drive instead of an image, but for some reason using /dev/sdb1 wouldn’t work no matter how it was formatted, so I adapted your tutorial to libvirt and virt-manager and was able to get Windows installed with networking and a separate keyboard/mouse transferring over. The only issue I am having is that my graphics card is throwing up a Code 43 error in Windows, but I’m not sure what to look for to solve the issue and get the Nvidia drivers to actually work. Any help you can give would be greatly appreciated!

    Thanks,

    Clayton

  4. Ahh, this seems the right place – I think my previous post was in the wrong place. So I got Windows installed in my VM. The issue I am currently having is I can get EITHER my keyboard OR mouse to pass through, but not both. Both devices are on Bus 002, and every time I issue the -device command it seems to overwrite the previous -device command. How can I get them both to work at the same time? As always, ANY response is greatly appreciated!

  5. @Brian:
    Make sure to follow part 3 and 4 by the letter. First identify the USB IDs of the keyboard and the mouse. They should be something similar to 045e:076c (the first part 045e is the vendor code, the second 076c is the device code). For two devices you need two such entries.
    You need to edit or create /etc/modprobe.d/local.conf and add something like:
    options vfio-pci ids=10de:13c2,10de:0fbb

    Change the IDs to reflect your own vendor and product IDs.

    Another way to pass through mouse and keyboard is by passing through a USB host controller. This is a PCI passthrough using the -device option in the qemu command. First you need to determine the USB controller:
    lspci | grep USB
    My output is:
    00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 05)
    00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 05)
    07:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    09:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    0f:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    To use the USB controller at PCI bus 08:00.0, you need to do the following:
    1. Edit the start script and add underneath the vmname entry:
    configfile=/etc/vfio-pci.cfg

    vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
    echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    }

    if ps -A | grep -q $vmname; then
    echo "$vmname is already running."
    exit 1
    else
    cat $configfile | while read line;do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
    done

    In addition, you need to add the following line to the qemu command in the script (just underneath the graphics card device):
    -device vfio-pci,host=08:00.0 \

    Save the script file.
    Then you need to create the /etc/vfio-pci.cfg and enter the following:
    0000:08:00.0

    There must not be any blank line, or any comment, just one single line specifying the PCI bus of the USB controller.

    The USB controller must reside within its own IOMMU group. To check, use the following command:
    for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

    Hope this helps.

  6. I have been able to get Win10 guest working well on LM19 host, but not without some struggle…particularly with keyboard and mouse passthrough. After spending some time with the qemu 2.11.1 manpages (and Google), I came up with a solution which works well, doesn’t require explicit bus mapping, and allows me to switch back and forth between guest and host using normal KVM window release (CTRL+ALT).

    My setup is single monitor – I use CPU graphics for host, and pass through my Nvidia card to guest – both are wired to my monitor, and once I have launched my VM and clicked inside the window to initiate mouse and kbd capture, I can then switch inputs on my monitor to run Win10 fullscreen from the discrete card output. This would of course work equally well on a multiple monitor setup. Here is how I got it working –

    As you noted in your post, QEMU syntax for USB device passthrough has changed since your original posting. The -device parameter is now used to define USB device passthrough, and I use the following syntax in my script file:

    -usb \
    -device usb-kbd \
    -device usb-mouse \

    I believe that code was added to QEMU which allows for keyboard and mouse detection without having to explicitly map the bus and device strings.

    Using these keyboard and mouse passthrough options and by removing the -vga none and -nographic parameters, I am able to switch back and forth between VM and host without a KVM/USB switch…just have to change the input on my monitor.

  7. I’m still struggling with my laptop (Metabox (Clevo) P950ER) to pass through the Nvidia GPU. I think the GPU BIOS is embedded in the system BIOS, so I’ll need to do custom ACPI table things to make it work, which I’m not sure anybody has succeeded at with a Windows guest.
    There is another direction possible for those in my position: most of the Intel CPUs with built-in GPUs have virtualization features which can time-slice the GPU (this is called GVT-g), and the parts necessary are in kernel 4.16 and qemu 2.12 https://www.phoronix.com/scan.php?page=news_item&px=Intel-vGPU-Linux-4.16-QEMU-2.12 which is what I’ll be trying next…

  8. @Chris:
    -usb \
    -device usb-kbd \
    -device usb-mouse \

    Thanks for your comment. I’ve actually tried your approach and it failed for me (using a Logitech wireless multi-device mouse and keyboard). However, with a regular mouse and keyboard your script should work just fine.

  9. @Pipandwoody:

    You most likely have a laptop using Nvidia “Optimus” technology. This can be tricky, though some people actually succeeded! Follow the links found here:
    https://heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#Dual-graphics_laptops_eg_Optimus_technology

    EDIT: If I remember correctly, you need to use the Seabios method, not the UEFI boot method I describe in this tutorial. There are instructions on how to do that in the links I provided.

    The vGPU support is indeed good news. HOWEVER, why would you choose the low-performance Intel IGD (integrated graphics device) inside the CPU over the more powerful discrete (Nvidia) GPU? Note that this vGPU technology only works with Intel now, not Nvidia. Nvidia has its own vGPU technology for its professional line of multi-user graphics cards (see https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html). But that has nothing to do with your laptop GPUs.

    Moreover, you would need to upgrade both your kernel (the current Linux Mint 19 / Ubuntu 18.04 kernel is 4.15) and qemu (2.11 as of now).

    Unless you have a good reason to try vGPU, I would try the other options I mentioned.

    1. I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…

    2. Pipandwoody: “I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…”

      That’s a bummer.

  10. Benchmark results – Win10/8GB/SSD/GTX1070/6770K – no overclock

    UserBenchmarks:
    CPU: Intel Core i7-6700K – 98.6%
    GPU: Nvidia GTX 1070 – 103.4%
    SSD: Red Hat VirtIO 161GB – 147.8%
    RAM: QEMU 1x8GB – 97.8%
    MBD: QEMU Standard PC (Q35 + ICH9, 2009)

    Passmark:
    Rating -5360.2
    CPU-12140.1
    G2D-705.0
    G3D-12496.3
    Mem-2994.6
    Disk-10499.7

    Overall I am pleased with the performance…just need to eliminate the sound crackling issue and can call it done.

    1. That’s great news! Performance looks good, too.

      Regarding sound crackling: I use the following in my script:

      export QEMU_AUDIO_DRV=alsa
      export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
      export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
      export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
      export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
      export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
      export QEMU_AUDIO_TIMER_PERIOD=50

      Then in the qemu section:
      -soundhw hda \

      See also my tuning section https://heiko-sieger.info/running-windows-10-on-linux-using-kvm-with-vga-passthrough/#Turn_on_MSI_Message_Signaled_Interrupts_in_your_VM
      and turn on MSI in the Windows guest. I think that should do it.

  11. Please note I updated my start script to improve the VM detection. The new code is:
    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "VM running"
    else
    echo "Start the VM"

    fi

    I wanted to see if ANY passthrough VM is running, as I have now two different passthrough VMs (Windows 10 and Linux Mint 19) that obviously cannot run at the same time.

  12. I followed the instructions for creating a network bridge by adding br0 to the /etc/network/interfaces file, if you know what I mean. When I get to creating the VM with your script, should I be changing net0 to br0? Wasn’t quite sure how that works or how to set that line up in your script for the network.

    1. @David A:

      My command line is:
      -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

      Here is the output of brctl show, with no VM running:
      brctl show
      bridge name bridge id STP enabled interfaces
      bridge0 8000.c860003147df yes eno1
      virbr0 8000.52540004e1c8 yes virbr0-nic

      As you can see, you don’t use the name of the bridge you created in the qemu script.

  13. Also an extremely important problem in the guide!!!!

    Windows will then ask you to:
    Select the driver to install

    Click “Browse”, then select your VFIO ISO image and go to “vioscsi”, open and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64 bit systems, click OK.

    Vioscsi should be viostor….

    When navigating the ISO, the correct driver is in the viostor folder.

    1. I fixed the tutorial.

      In practice, I haven’t found any significant performance difference between the two drivers. And I still need to get the hang of the new qemu syntax, but haven’t got the time to play around with it.

  14. Hey there,

    Sadly I was not yet able to get a vm to work. I was able to follow along until part 10, but there the trouble started.

    The “-cpu host,kvm=off \” part gets me “QEMU 2.11.1 monitor – type ‘help’ for more information” in the Terminal.
    Without the -cpu part, a QEMU window opens, with “Boot failed: could not read from CDROM (code 0003)” as an error. This happens with both a Windows 10 and an Ubuntu 18.04 ISO file.
    After closing the QEMU window the following messages appear in the terminal:
    “-smp: command not found”
    “-device: command not found” – this refers to the line with my GTX 970.

    I hope you can point me in the right direction. I really want to make this work.

    Regards SaBe3.1415

    The script:
    #!/bin/bash

    vmname="windows10vm"

    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "A passthrough VM is already running." &
    exit 1

    else

    # use pulseaudio
    export QEMU_AUDIO_DRV=pa
    export QEMU_PA_SAMPLES=8192
    export QEMU_AUDIO_TIMER_PERIOD=99
    export QEMU_PA_SERVER=/run/user/1000/pulse/native

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    qemu-system-x86_64 \
    -name $vmname,process=$vmname \
    -machine type=q35,accel=kvm \
    -cpu host,kvm=off \
    -smp 6,sockets=1,cores=3,threads=2 \
    -m 8G \
    -balloon none \
    -rtc clock=host,base=localtime \
    -vga none \
    -nographic \
    -serial none \
    -parallel none \
    -soundhw hda \
    #-usb-host,vendorid=1b1c,productid=1b2d \
    #-usb-host,hostbus=001,hostaddr=003 \
    #-usb-host,vendorid=214e,productid=0005 \
    #-usb-host,hostbus=001,hostaddr=005 \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
    -boot order=dc \
    -drive id=disk0,if=virtio,cache=none,format=raw,file=/home/toru/Downloads/Win.img \
    -drive file=/home/toru/Downloads/ubuntu-18.04.1-desktop-amd64.iso,index=1,media=cdrom \
    -drive file=/home/toru/Downloads/virtio-win-0.1.160.iso,index=2,media=cdrom \
    -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

    exit 0
    fi

    Hardware:
    Motherboard: ASUS ROG MAXIMUS VIII HERO
    OS: Linux Mint 19 Cinnamon version 3.8.9
    Linux Kernel: 4.15.0-34-generic
    CPU: Intel i7 6700k
    RAM: 16GB
    GPU for the vm: GeForce GTX 970
    GPU for Mint : GeForce 210

    1. Remove the #-usb-host,vendorid=1b1c,productid=1b2d lines from your qemu command. If I remember correctly the qemu command doesn’t allow comments within the command string.

      You cannot use the virtio Windows drivers for Ubuntu.

      It happened to me too that I got the Qemu monitor and needed to continue by selecting the image to boot from. I’m traveling and cannot check the commands or sequence, but I hope this gets you further. Make sure your GTX970 has a UEFI BIOS and that it is within its own IOMMU group. There is a separate post on that.

  15. Heiko, thank your for an amazing guide.
    I have some troubles with the VGA configuration, an on-board Intel VGA chip and a PCI MSI Geforce adapter. I don’t have the exact specs here right now, so I wonder if you maybe can help me with troubleshooting once I am back home with the computer in front of me?
    Best regards
    Anders/Sweden

    1. Yeah, I would need some more info. As a first step, check that your graphics card has a UEFI BIOS (if it’s an older one, it might not). Then make sure that your PC boots using the internal graphics – check your PC BIOS settings.

    1. Your Nvidia GPU is in iommu group 1:
      /sys/kernel/iommu_groups/1/devices/0000:01:00.1
      /sys/kernel/iommu_groups/1/devices/0000:00:01.0
      /sys/kernel/iommu_groups/1/devices/0000:01:00.0

      You need to pass through ALL devices except the PCI bridge (00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller), that is you need to attach both 0000:01:00.0 and 0000:01:00.1 to vfio-pci. Your output shows that you passed only the graphics part 01:00.0 through, not the audio part of your graphics card (01:00.1):
      root@ws-mint:~# lspci -kn|grep -A 2 01:00

      01:00.0 0300: 10de:1184 (rev a1)
      Subsystem: 1462:2825
      Kernel driver in use: vfio-pci

      01:00.1 0403: 10de:0e0a (rev a1)
      Subsystem: 1462:2825
      Kernel driver in use: snd_hda_intel

      The Kernel driver in use should be vfio-pci. See step 4 in my tutorial:

      Open or create /etc/modprobe.d/local.conf:
      xed admin:///etc/modprobe.d/local.conf
      and insert the following:
      options vfio-pci ids=10de:1184,10de:0e0a
      These are the PCI IDs for your graphics card’s VGA and audio part.

      Reboot and check with: lspci -kn | grep -A 2 01:00

      In your start script, replace:
      -usb \
      -device usb-host,hostbus=1,hostaddr=1 \

      with:
      -usb -usb-host,vendorid=062a,productid=4101 \
      and remove the network configuration (-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01)! Qemu will provide a user-mode (NAT) network interface and you should have Internet access from the VM.

      In the script, replace the graphics card passthrough part with the following:
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
      -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \

      Please share your /etc/default/grub configuration file!

      Your /etc/default/grub file should contain a line similar to this:
      GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau intel_iommu=on iommu=pt"
      iommu=pt is optional.

      If you change /etc/default/grub you need to run sudo update-grub

      Your complete qemu command should be:
      qemu-system-x86_64 \
      -name $vmname,process=$vmname \
      -machine type=q35,accel=kvm \
      -cpu host,kvm=off \
      -smp 1,sockets=1,cores=1,threads=1 \
      -m 8G \
      -balloon none \
      -rtc clock=host,base=localtime \
      -vga none \
      -nographic \
      -serial none \
      -parallel none \
      -soundhw hda \
      -usb -usb-host,vendorid=062a,productid=4101 \
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
      -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -drive id=disk0,if=virtio,cache=none,format=raw,file=/data/VM/win10direct.img \
      -drive file=/data/ISO/Windows-10-Pro-x64.iso,index=1,media=cdrom \
      -drive file=/data/VM/virtio-win-0.1.160.iso,index=2,media=cdrom

      Important: Check my qemu command – I do make mistakes sometimes!

      Note: The -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 command creates a virtual PCIe bridge to which the graphics card’s video and audio parts are attached. Sometimes this solves issues with black screen or AMD graphics cards / drivers. Hope this helps.

      Note 2: Giving only 1 thread to the VM is a little low. Try -smp 2,sockets=1,cores=1,threads=2

  16. First, the USB parameters seem to be a problem

    anders@ws-mint:/data/VM$ sudo ./runwin10direct.sh
    [sudo] password for anders:
    qemu-system-x86_64: -usb-host,vendorid=062a,productid=4101: invalid option

    Extract from the run script
    -parallel none \
    -soundhw hda \
    -usb -usb-host,vendorid=062a,productid=4101 \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \

    So after some googling, I changed the runscript to

    -usb \
    -device usb-host,vendorid=0x062a,productid=0x4101 \

    the script started to run, and it seemed to grab both the keyboard and mouse, but the second screen remained blank. Nothing displayed at all.

    The grub file looks like this now

    GRUB_DEFAULT=0
    GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="splash intel_iommu=on modprobe.blacklist=nouveau iommu=pt"
    GRUB_CMDLINE_LINUX=""

    anders@ws-mint:~$ lspci -kn | grep -A 2 01.00
    01:00.0 0300: 10de:1184 (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    01:00.1 0403: 10de:0e0a (rev a1)
    Subsystem: 1462:2825
    Kernel driver in use: vfio-pci

    So, we are getting closer, but there is obviously something more to tune. Thank you for your patience, you are a rock!

    1. OK, so the GPU (video and audio) is now bound to vfio-pci. USB mouse/keyboard seems also fixed (thanks for the correction). The problem could be any of the following:
      1. Wrong path to OVMF file or variables – check my tutorial and make sure you followed it by the letter.

      2. Check that the path to the Windows ISO is correct, as well as that to the driver ISO.
      3. Make sure your second monitor is connected properly to the Nvidia card! If the monitor has different inputs, make sure to select the right input.
      4. Remove -vga none \ from the script. This can be helpful for debugging.

  17. Hi
    OVMF files and path checked. They exists and are accessible.
    Windows ISO checked (used them for other VM’s with KVM w/o any issues)
    Same with the virtio io image.

    The monitor is connected to the HDMI port. There are also DisplayPort ports on the adapter but not used. I did remove the “blacklist” option and rebooted. By doing so I had a dual screen setup that worked fine, so the cable, monitor and port should be fine.

    The installation seems to run along (see screenshot in the Google Doc file), but without any output at all it may be hard to pin down the exact problem. Is there a logfile option that could be used? I can’t find anything in the official QEMU documentation.

  18. I added a second keyboard and changed the USB vendor and product ids in order to keep the main keyboard attached to the host.
    I then ran strace -p pid on the running vm, and got the following output (don’t know if this is of any use, but maybe there is something there that can help). I then used the main keyboard to abort with Ctrl+C (chasing shadows 🙂 )

    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=997967}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=998196}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=998109}, NULL, 8) = 0 (Timeout)
    futex(0x55e9c47cafc0, FUTEX_WAKE_PRIVATE, 1) = 1
    ppoll([{fd=0, events=POLLIN}, {fd=3, events=POLLIN}, {fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=30, events=POLLIN}, {fd=32, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}, {fd=46, events=POLLOUT}, {fd=47, events=POLLIN}, {fd=48, events=POLLIN}], 14, {tv_sec=0, tv_nsec=997856}, NULL, 8) = ? ERESTARTNOHAND (To be restarted if no handler)
    — SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} —
    write(7, “\1\0\0\0\0\0\0\0”, 8) = 8
    rt_sigreturn({mask=[BUS USR1 ALRM IO]}) = -1 EINTR (Interrupted system call)

    1. Sorry, the trace doesn’t help (me).

      In order to verify the qemu script I posted above, I installed a new Windows VM using the following script:
      #!/bin/bash

      configfile=/etc/vfio-pci.cfg
      vmname="win10-2vm"

      vfiobind() {
      dev="$1"
      vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
      device=$(cat /sys/bus/pci/devices/$dev/device)
      if [ -e /sys/bus/pci/devices/$dev/driver ]; then
      echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
      fi
      echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
      }

      if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
      zenity --info --window-icon=info --timeout=15 --text="A VM is already running." &
      exit 1

      else

      cat $configfile | while read line;do
      echo $line | grep ^# >/dev/null 2>&1 && continue
      vfiobind $line
      done

      export QEMU_AUDIO_DRV=alsa
      export QEMU_ALSA_ADC_BUFFER_SIZE=1024 QEMU_ALSA_ADC_PERIOD_SIZE=256
      export QEMU_ALSA_DAC_BUFFER_SIZE=1024 QEMU_ALSA_DAC_PERIOD_SIZE=256
      export QEMU_AUDIO_DAC_FIXED_SETTINGS=1
      export QEMU_AUDIO_DAC_FIXED_FREQ=44100 QEMU_AUDIO_DAC_FIXED_FMT=S16 QEMU_AUDIO_ADC_FIXED_FREQ=44100 QEMU_AUDIO_ADC_FIXED_FMT=S16
      export QEMU_AUDIO_DAC_TRY_POLL=1 QEMU_AUDIO_ADC_TRY_POLL=1
      export QEMU_AUDIO_TIMER_PERIOD=50

      cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
      chown user:kvm /tmp/my_vars.fd

      qemu-system-x86_64 \
      -enable-kvm \
      -runas user \
      -monitor stdio \
      -serial none \
      -parallel none \
      -nodefaults \
      -nodefconfig \
      -name $vmname,process=$vmname \
      -machine q35,accel=kvm,kernel_irqchip=on \
      -cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
      -smp 6,sockets=1,cores=3,threads=2 \
      -m 8G \
      -balloon none \
      -rtc base=localtime,clock=host \
      -soundhw hda \
      -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
      -object iothread,id=io1 \
      -device virtio-scsi-pci,id=scsi0,ioeventfd=on,iothread=io1,num_queues=4,bus=pcie.0 \
      -drive id=disk0,file=/dev/lm13/win10-2,if=none,format=raw,aio=native,cache=none,cache.direct=on,discard=unmap,detect-zeroes=unmap \
      -device scsi-hd,drive=disk0 \
      -drive id=isocd,file=/media/heiko/tmp_stripe/OS-backup/ISOs/win10.iso,format=raw,if=none -device scsi-cd,drive=isocd \
      -drive id=virtiocd,file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio.iso,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
      -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
      -device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
      -device vfio-pci,host=00:1a.0,bus=root.1,addr=00.2 \
      -device vfio-pci,host=08:00.0,bus=root.1,addr=00.3 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -netdev tap,vhost=on,ifname=vmtap1,id=net0 \
      -device virtio-net-pci,mac=00:16:3e:00:01:01,netdev=net0

      #EOF
      exit 0
      fi

      I got an error when using the x-vga option for the graphics card, so I removed it.

      When starting the script a prompt to press Enter to continue CD/DVD installation appeared quickly, followed by PXE… prompt. The first time around I missed the chance to press Enter, so I had to kill the VM. The second time I just hit repeatedly Enter and I got the installation prompt.

  19. I removed the -vga none option, no difference. Removed the -nographic option too, and then a new window appeared on the primary screen with PXE boot and a bios shell. Could not work out what to do from there. I will continue to read up on other howtos during next week to get a better understanding of how Qemu works. If you can think of anything else, please let me know. If not, I guess that I have to capitulate :).
    Best regards /Anders

    1. I had the same!

      I don’t know why, but the CD/DVD prompt disappears so fast that you may not see it on screen; instead you get the PXE prompt I also got. I just kept hitting Enter a few times after starting the VM. If you have a second keyboard, pass it through via USB and hit Enter on that keyboard. I know it’s a pain but it should work. Once Windows is installed you remove the ISO images and all will be fine.

      Another possibility: Make sure you configured your ISO images properly.

  20. Just saw your new screenshots – you got to the boot manager. Select the first UEFI QEMU DVD-ROM; if that doesn’t work, restart the VM and select the other. Once you selected the correct Windows ISO, the Windows installer should start and prompt you to load a driver. Use the viostor driver (unless you configured your disk as a SCSI drive) and select amd64 for a 64 bit OS.

    1. If I remember correctly, you need to specify 0x… before the vendor and product ID. There is a comment above somewhere that points that out.
      I need to recheck my tutorial on that, haven’t had the time yet.

    2. Thank you very much!

      Unfortunately I get the following errors:
      “(qemu) qemu-system-x86_64: AMD CPU doesn’t support hyperthreading. Please configure -smp options properly”
      Thankfully that one doesn’t prevent me from entering the VM, however Windows greets me with this error:
      “SYSTEM THREAD EXCEPTION NOT HANDLED”

      I have heard it is possible to get Windows to boot if I change the “host” in “-cpu host” to my cpu model, but I cannot figure out how to get qemu to produce a list of cpu models. Anyways, thank you for your guide, it helped me out a lot, although I might have to switch to Arch for more recent packages.

  21. You need to change your qemu script to match your CPU topology, such as:
    -machine q35,accel=kvm \
    -cpu host,kvm=off \
    -smp 4,sockets=1,cores=4,threads=1 \

    Above is for an Nvidia GPU passthrough.

    If that doesn’t work, and you don’t mind reinstalling Windows, you can try changing the following line to:
    -machine pc,accel=kvm \

    If you prefer Arch Linux, there are guides out there and their documentation is great. If you prefer Ubuntu or similar, I’m sure there is a way to make it work.

    Regarding host, try the following in a terminal:
    qemu-system-x86_64 -cpu help
    to get the options.

    Can you share your hardware specs and the qemu command?

    1. Actually I haven’t been able to install Windows 10 due to the “system thread exception not handled” error.

      Anyways, my hardware:
      My cpu: Ryzen 5 1600X
      host gpu: GeForce GT 710
      guest gpu: Radeon RX 570
      Memory: 15.7 GiB

      And do you mean the windows10vm.sh file? Right now, it looks like this:
      #!/bin/bash

      vmname="windows10vm"

      if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
      echo "A passthrough VM is already running." &
      exit 1

      else

      # use pulseaudio
      export QEMU_AUDIO_DRV=pa
      export QEMU_PA_SAMPLES=8192
      export QEMU_AUDIO_TIMER_PERIOD=99
      export QEMU_PA_SERVER=/run/user/1000/pulse/native

      cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

      qemu-system-x86_64 \
      -name $vmname,process=$vmname \
      -machine pc,accel=kvm \
      -cpu host \
      -smp 4,sockets=1,cores=4,threads=1 \
      -m 8G \
      -balloon none \
      -rtc clock=host,base=localtime \
      -vga none \
      -nographic \
      -serial none \
      -parallel none \
      -soundhw hda \
      -usb \
      -device usb-host,vendorid=0x093a,productid=0x2521 \
      -device usb-host,vendorid=0x04f2,productid=0x1112 \
      -device vfio-pci,host=09:00.0,multifunction=on \
      -device vfio-pci,host=09:00.1 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/burger/win.img \
      -drive file=/home/burger/Downloads/Win10_1809Oct_English_x64.iso,index=1,media=cdrom \
      -drive file=/home/burger/Downloads/virtio-win-0.1.160.iso,index=2,media=cdrom \
      #-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      #-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

      exit 0
      fi

    1. I was able to install Windows 10 by doing the following:
      sudo nano /etc/modprobe.d/kvm.conf
      add "options kvm ignore_msrs=1"
      reboot

      Later, changing “-machine q35” to “-machine pc” stopped it from randomly crashing.
