Running Windows 10 on Linux using KVM with VGA Passthrough

The Need

You want to use Linux as your main operating system, but still need Windows for certain applications unavailable under Linux. You need top notch (3D) graphics performance under Windows that you can’t get from VirtualBox or similar virtualization solutions. And you do not want to dual-boot into Linux or Windows. In that case read on.

Many modern CPUs have built-in features that improve the performance of virtual machines (VM), up to the point where virtualised systems are indistinguishable from non-virtualised systems. This allows us to create virtual machines on a Linux host platform without compromising performance of the (Windows) guest system.

For some benchmarks of my current system, see Windows 10 Virtual Machine Benchmarks

The Solution

In the tutorial below I describe how to install and run Windows 10 as a KVM virtual machine on a Linux Mint or Ubuntu host. The tutorial uses a technology called VGA passthrough (also referred to as “vfio” for the vfio driver used) which provides near-native graphics performance in the VM. I’ve been doing VGA passthrough since summer 2012, first running Windows 7 on a Xen hypervisor, switching to KVM and Windows 10 in December 2015. The performance – both graphics and computing – under Xen and KVM has been nothing less than stellar!

The tutorial below will only work with suitable hardware! If your computer does not fulfill the basic hardware requirements outlined below, you won’t be able to make it work.

The tutorial is not written for the beginner! I assume that you do have some Linux background, at least enough to be able to restore your system when things go wrong.

I am also providing links to other, similar tutorials that might help. Last but not least, you will find links to different forums and communities where you can find further information and help.

Note: The tutorial was originally posted on the Linux Mint forum. However, limitations in terms of length and content (no. of images allowed) finally compelled me to host it on my own blog site.


All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, currentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.

You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.

For a glossary of terms used in the tutorial, see Glossary of Virtualization Terms.


Note for Ubuntu users: My tutorial uses the “xed” command found in Linux Mint Mate to edit documents. You will have to replace it with “gedit” or whatever editor you use in Ubuntu/Xubuntu/Lubuntu…

Important Note: This tutorial has been edited to reflect the latest Linux Mint 19 and Ubuntu 18.04 syntax. If you use an older release, make sure to use the appropriate commands.

Part 1 – Hardware Requirements

For this tutorial to succeed, your computer hardware must fulfill all of the following requirements:

IOMMU support

In Intel jargon it's called VT-d. AMD variously calls it AMD Virtualization (AMD-V) or Secure Virtual Machine (SVM), and some BIOSes simply label it IOMMU. If you plan to purchase a new PC/CPU, check the following websites for more information:

Unfortunately IOMMU support, specifically ACS support, varies greatly between different Intel CPUs and CPU generations. Generally speaking, Intel provides better ACS or device isolation capabilities for its Xeon and LGA2011 high-end desktop CPUs than for other VT-d enabled CPUs.

The first link above provides a non-comprehensive list of CPU/motherboard/GPU configurations where users were successful with VGA passthrough. When building a new PC, make sure you purchase components that support VGA passthrough.

Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable it in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:

  1. Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).
  2. Search for IOMMU, VT-d, SVM, or “virtualization technology for directed IO” or whatever it may be called on your system. Turn on VT-d / IOMMU.
  3. Save and Exit BIOS and boot into Linux.
  4. Edit the /etc/default/grub file (you need root permission to do so). Open a terminal window (Ctrl+Alt+T) and enter (copy/paste):
    xed admin:///etc/default/grub
    (use gksudo gedit /etc/default/grub for older Linux Mint/Ubuntu releases)
    Here is my /etc/default/grub file before the edit:
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`

    Look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT="...". You need to add one of the following options to this line, depending on your hardware:
    Intel CPU: intel_iommu=on
    AMD CPU: amd_iommu=on
    Save the file and exit. Then type:
    sudo update-grub
  5. Now check that IOMMU is actually supported. Reboot the PC. Open a terminal window.
    On AMD machines use:
    dmesg | grep AMD-Vi
    The output should be similar to this:

    AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
    AMD-Vi: Lazy IO/TLB flushing enabled
    AMD-Vi: Initialized for Passthrough Mode

    Or use:
    cat /proc/cpuinfo | grep svm

On Intel machines use:
dmesg | grep "Virtualization Technology for Directed I/O"

The output should be this:
[ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O

If you do not get this output, then VT-d or AMD-V is not working – you need to fix that before you continue! Most likely it means that your hardware (CPU) doesn’t support IOMMU, in which case there is no point continuing this tutorial 😥 . Check again to make sure your CPU supports IOMMU. If yes, the cause may be a faulty motherboard BIOS. See if you find a newer version and update your motherboard BIOS (be careful, flashing the BIOS can potentially brick your motherboard).
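As an additional sanity check, you can grep the CPU flags for the hardware virtualization extensions. A minimal sketch – note that this only confirms VT-x/AMD-V (the CPU virtualization extensions), not VT-d/AMD-Vi (IOMMU) itself, which the dmesg checks above cover:

```shell
# "vmx" is the Intel VT-x flag, "svm" is the AMD-V flag.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "CPU advertises hardware virtualization"
else
    echo "no virtualization flags found - check your BIOS/CPU"
fi
```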

Two graphics processors

In addition to a CPU and motherboard that supports IOMMU, you need two graphics processors (GPU):

1. One GPU for your Linux host (the OS you are currently running, I hope);

2. One GPU (graphics card) for your Windows guest.

We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest as needed. Unfortunately the GPU cannot be switched or shared between the two operating systems (though work is being done to overcome this).

If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, you’ll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP).

UEFI support in the GPU used with Windows

In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI – most newer cards do. You can check here if your video card and BIOS support UEFI. If you run Windows, download and run GPU-Z and see if there is a check mark next to UEFI. (For more information, see here.)

Nvidia GTX 970
GPU-Z with graphics card details

There are several advantages to UEFI: it starts faster and overcomes some issues associated with legacy boot (Seabios).

If you plan to use the Intel IGD (integrated graphics device) for your Linux host, UEFI boot is the way to go. UEFI overcomes the VGA arbitration problem associated with the IGD and the use of the legacy Seabios.
If, for some reason, you cannot boot the VM using UEFI, and you want to use the Intel IGD for the host, you need to compile the i915 VGA arbiter patch into the kernel. Before you do, check the note below. For more on VGA arbitration, see here. For the i915 VGA arbiter patch, look here or under Part 15 – References.

Note: If your GPU does NOT support UEFI, there is still hope. You might be able to find a UEFI BIOS for your card at TechPowerUp Video BIOS Collection. A Youtube blogger calling himself Spaceinvader has produced a very helpful video on using a VBIOS.

If there is no UEFI video BIOS for your Windows graphics card, you will have to look for a tutorial using the Seabios method. It’s not much different from this here, but there are some things to consider.

Laptop users with Nvidia Optimus technology: Misairu_G (username) published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology – see GUIDE to VGA passthrough on Nvidia Optimus laptops. (For reference, here are some older posts on the subject.)

Part 2 – Installing Qemu / KVM

The Qemu release shipped with Linux Mint 19 is version 2.11 and supports the latest KVM features.

In order to have Linux Mint “remember” the installed packages, use the Software Manager to install the following packages:


Linux Mint
Software Manager

For AMD Ryzen, see also here (note that Linux Mint 19/Ubuntu 18.04 only require the BIOS update).

Alternatively, use
sudo apt install qemu-kvm qemu-utils seabios ovmf hugepages cpu-checker
to install the required packages.

Part 3 – Determining the Devices to Pass Through to Windows

We need to find the PCI ID(s) of the graphics card and perhaps other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use 2 graphics cards. Here is my hardware setup:

  • GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
  • GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.

To determine the PCI bus number and PCI IDs, enter:
lspci | grep VGA

Here is the output on my system:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)

The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.

Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:
lspci -nn | grep 02:00.

Substitute “02:00.” with the bus number of the graphics card you wish to pass to Windows, without the trailing “0“. Here is the output on my computer:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)

Write down the bus numbers (02:00.0 and 02:00.1 above), as well as the PCI IDs (10de:13c2 and 10de:0fbb in the example above).

Now check to see that the graphics card resides within its own IOMMU group:
find /sys/kernel/iommu_groups/ -type l

For a sorted list, use:
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

Look for the bus number of the graphics card you want to pass through. Here is the (shortened) output on my system:


Make sure the GPU and perhaps other PCI devices you wish to pass through reside within their own IOMMU group. In my case the graphics card and its audio controller designated for passthrough both reside in IOMMU group 21. No other PCI devices reside in this group, so all is well.

If your VGA card shares an IOMMU group with other PCI devices, see IOMMU GROUP CONTAINS ADDITIONAL DEVICES for a solution!
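To see at a glance which devices share a group, you can combine the find command above with lspci. A small sketch (with IOMMU disabled, the iommu_groups directory is empty and nothing is printed):

```shell
# For each device, print its IOMMU group number and the lspci
# description, so shared groups are easy to spot.
for d in $(find /sys/kernel/iommu_groups/ -type l 2>/dev/null); do
    group=$(basename "$(dirname "$(dirname "$d")")")   # group number
    echo "IOMMU group $group: $(lspci -nns "${d##*/}")"
done
```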

Next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run 2 independent operating systems side by side, and we control them via mouse and keyboard.

About keyboard and mouse

Depending whether and how much control you want to have over each system, there are different approaches:

1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse as well as a VGA or (the more expensive) DVI or HDMI graphics outputs. In addition the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is viable solution. See also my Virtualization Hardware Accessories post.
– Works without special software in the OS, just the usual mouse and keyboard drivers;
– Best in performance – no software overhead.
– Requires extra (though inexpensive) hardware;
– More cable clutter and another box with cables on your desk;
– Requires you to press a button to switch between host and guest and vice versa;
– Need to pass through a USB port or controller – see below on IOMMU groups.

2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
– Easy to implement;
– No money to invest;
– Good solution for setting up Windows.
– Once the guest starts, your mouse and keyboard only control that guest, not the host. You will have to plug them into another USB port to gain access to the host.

3. Synergy is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
– Most versatile solution, especially with dual screens;
– Software only, easy to configure;
– No hardware purchase required.
– Requires the installation of software on both the host and the guest;
– Doesn’t work during Windows installation (see option 2);
– Costs $10 for a Basic, lifetime license;
– May produce lag, although I doubt you’ll notice unless there is something wrong with the bridge configuration.

4. “Multi-device” bluetooth keyboard and mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
– Most convenient solution;
– Same performance as option 1.
– Price.
– Make sure the device supports Linux, or that you can return it if it doesn’t!

I first went with option 1 for robustness and universality, but have replaced it with option 4. I’m now using a Logitech MX master BT mouse and a Logitech K780 BT keyboard. See here for how to pair these devices to the USB dongles.

Both options 1 and 4 usually require passing through a USB PCI device to the Windows guest. I needed both USB 2 and USB 3 ports in my Windows VM and was able to pass through two USB controllers to my Windows guest using PCI passthrough.

For the VM installation we choose option 2 (see above), that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:
lsusb

Here is my system output (truncated):

Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600

Note down the IDs: 045e:076c and 045e:0750 in my case.

Part 4 – Prepare for Passthrough

We are assigning a dummy driver, vfio-pci, to the graphics card we want to use under Windows. To do that, we first have to prevent the default driver from binding to the graphics card.

Once more edit the /etc/default/grub file:
xed admin:///etc/default/grub

The entry we are looking for is:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

We need to add one of the following options to this line, depending on your hardware:

Nvidia graphics card for Windows VM: modprobe.blacklist=nouveau

Note: This blacklists the Nouveau driver and prevents it from loading. If you run two Nvidia cards and use the open Nouveau driver for your Linux host, DON’T blacklist the driver!!! Chances are the tutorial will work since the vfio-pci driver should grab the graphics card before nouveau takes control of it. The same goes for AMD cards (see below).

On some platforms, the above modprobe.blacklist=nouveau option won’t be enough. A service called nvidia-fallback.service may be running and needs to be disabled. To check for this service, use:
systemctl list-units --type service | grep -i nvidia-fallback

If the nvidia-fallback service shows up, disable it via:
systemctl disable nvidia-fallback.service
(Thanks Woody!)

AMD graphics card for Windows VM: modprobe.blacklist=amdgpu or modprobe.blacklist=radeon, depending on which driver loads when you boot.

After editing, my /etc/default/grub file now looks like this:
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on"

Finally run update-grub:
sudo update-grub

(There are more ways to blacklist driver modules, for example using Kernel Mode Setting.)

In order to make the graphics card available to the Windows VM, we will assign a “dummy” driver as a place holder: vfio-pci.

Note: If you have two identical graphics cards for both the host and the VM, the method below won’t work. In that case see the following posts:

Open or create /etc/modprobe.d/local.conf:
xed admin:///etc/modprobe.d/local.conf
and insert the following:
options vfio-pci ids=10de:13c2,10de:0fbb
where 10de:13c2 and 10de:0fbb are the PCI IDs of your graphics card’s VGA and audio parts.
Save the file and exit the editor.

Some applications like Passmark require the following option. In a root terminal (or via sudo -i), run:
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
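After the next reboot (or after reloading the kvm module) you can verify that the parameter took effect. A quick sketch – the parameter file only exists while the kvm module is loaded:

```shell
# Prints "Y" when ignore_msrs is active.
cat /sys/module/kvm/parameters/ignore_msrs 2>/dev/null || echo "kvm module not loaded"
```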

To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:
xed admin:///etc/initramfs-tools/modules

At the end of the file, add the following modules in the order listed below:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Save and close the file.

We need to update the initramfs. Enter at the command line:
sudo update-initramfs -u
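You can confirm that the vfio modules actually made it into the new initramfs. A sketch using lsinitramfs, which ships with initramfs-tools on Mint/Ubuntu:

```shell
# List the contents of the current initramfs and filter for vfio.
img="/boot/initrd.img-$(uname -r)"
if command -v lsinitramfs >/dev/null && [ -e "$img" ]; then
    lsinitramfs "$img" | grep vfio || echo "no vfio modules found in $img"
else
    echo "skipping check: lsinitramfs or $img not available"
fi
```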

Part 5 – Network Settings

For performance reasons it is best to create a virtual network bridge that connects the VM with the host. In a separate post I have written a detailed tutorial on how to set up a bridge using Network Manager.

Note: Bridging only works for wired networks. If your PC is connected to a router via a wireless link (Wifi), you won’t be able to use a bridge. There are workarounds such as routing or ebtables.

Once you’ve set up the network, reboot the computer and test your network configuration – open your browser and see if you have Internet access.
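Before starting the VM, it's worth checking from the command line that the bridge exists and carries the host's IP address. A sketch (the bridge name, e.g. br0, depends on how you configured it):

```shell
# List bridge interfaces, then all interfaces with their addresses;
# your bridge should appear in both listings.
ip -br link show type bridge
ip -br addr show
```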

Part 6 – Setting up Hugepages

Moved to Part 18 – Performance Tuning. This is a performance tuning measure and not required to run Windows on Linux. See Configure Hugepages under Part 18 – Performance Tuning.

Part 7 – Download the VFIO drivers

Download the VFIO driver ISO to be used with the Windows installation. Below are the direct links to the ISO images:

Latest VIRTIO drivers:

Stable VIRTIO drivers:

I chose the latest driver ISO.

Part 8 – Prepare Windows VM Storage Space

We need some storage space on which to install the Windows VM. There are several choices:

  1. Create a raw image file.
    – Easy to implement;
    – Flexible – the file can grow with your requirements;
    – Snapshots;
    – Easy migration;
    – Good performance.
    – Takes up the entire space you specify.
  2. Create a dedicated LVM volume.
    – Familiar technology (at least to me);
    – Excellent performance, like bare-metal;
    – Flexible – you can add physical drives to increase the volume size;
    – Snapshots;
    – Mountable within Linux host using kpartx.
    – Takes up the entire space specified;
    – Migration isn’t that easy.
  3. Pass through a PCI SATA controller / disk.
    – Excellent performance, using original Windows disk drivers;
    – Allows the use of Windows virtual drive features;
    – Can use an existing bare-metal installation of Windows in a VM;
    – Possibility to boot Windows directly, i.e. not as VM;
    – Possible to add more drives.
    – The PC needs at least two discrete SATA controllers;
    – Host has no access to disk while VM is running;
    – Requires a dedicated SATA controller and drive(s);
    – SATA controller must have its own IOMMU group;
    – Possible conflicts in Windows between bare-metal and VM operation.

For further information on these and other image options, see here:

Although I’m using an LVM volume, I suggest you start with the raw image. Let’s create a raw disk image:
fallocate -l 50G /media/user/win.img
Note: Adjust size and path to match your needs and actual resources.
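If fallocate isn't available on your filesystem, qemu-img can create an equivalent preallocated raw image. A sketch – path and size are examples, adjust them to your setup:

```shell
# Create a 50G raw image, preallocated on disk (like fallocate).
qemu-img create -f raw -o preallocation=falloc /media/user/win.img 50G
```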

Part 9 – Check Configuration

It’s best to check that we got everything:

KVM: kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

KVM module: lsmod | grep kvm
kvm_intel 200704 0
kvm 593920 1 kvm_intel
irqbypass 16384 2 kvm,vfio_pci

Above is the output for the Intel module.

VFIO: lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 kvm,vfio_pci
vfio_iommu_type1 24576 0
vfio 32768 2 vfio_iommu_type1,vfio_pci

QEMU: qemu-system-x86_64 --version
You need QEMU emulator version 2.5.0 or newer. On Linux Mint 19 / Ubuntu 18.04 the QEMU version is 2.11.

Did vfio load and bind to the graphics card?
lspci -kn | grep -A 2 02:00
where 02:00 is the bus number of the graphics card to pass to Windows. Here is the output on my PC:
02:00.0 0300: 10de:13c2 (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci
02:00.1 0403: 10de:0fbb (rev a1)
Subsystem: 1458:3679
Kernel driver in use: vfio-pci

Kernel driver in use is vfio-pci. It worked!

Interrupt remapping: dmesg | grep VFIO
[ 3.288843] VFIO - User Level meta-driver version: 0.3
All good!

If you get this message:
vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
In this case you need to reboot once more.

Part 10 – Create Script to Start Windows

I’ve modified a script that will start the Windows VM. Copy the script below and save it under a name of your choice (just keep the .sh extension):



#!/bin/bash

vmname="win10vm"   # VM/process name; don't use "win10" (see the notes below)

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    echo "A passthrough VM is already running."
    exit 1
fi

# use pulseaudio
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SAMPLES=8192
export QEMU_PA_SERVER=/run/user/1000/pulse/native

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

qemu-system-x86_64 \
-name $vmname,process=$vmname \
-machine type=q35,accel=kvm \
-cpu host,kvm=off \
-smp 4,sockets=1,cores=2,threads=2 \
-m 8G \
-balloon none \
-rtc clock=host,base=localtime \
-vga none \
-nographic \
-serial none \
-parallel none \
-soundhw hda \
-usb \
-device usb-host,vendorid=0x045e,productid=0x076c \
-device usb-host,vendorid=0x045e,productid=0x0750 \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=dc \
-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

exit 0

Make the file executable (substitute the name you gave the script):
sudo chmod +x <your-script>.sh

You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations on the qemu-system-x86 options:

-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and is used in the script to determine whether the VM is already running. Don’t use win10 as the process name – for some inexplicable reason it doesn’t work!

-machine type=q35,accel=kvm
This specifies a machine to emulate. The accel=kvm option tells qemu to use the KVM acceleration – without it the Windows guest will run in qemu emulation mode, that is it’ll run real slow.
I have chosen the type=q35 option, as it improved my SSD read and write speeds. In some cases type=q35 will prevent you from installing Windows; instead you may need to use type=pc,accel=kvm. See the post here. To see all options for type=…, enter the following command:
qemu-system-x86_64 -machine help
Important: Several users passing through Radeon RX 480 and Radeon RX 470 cards have reported reboot loops after updating and installing the Radeon drivers. If you pass through a Radeon graphics card, it is better to replace the -machine line in the startup script with the following line:
-machine type=pc,accel=kvm
to use the default i440fx emulation.
Note for IGD users: If you have an Intel CPU with internal graphics (IGD), and want to use the Intel IGD for Windows, there is a new option to enable passthrough:
igd-passthru=on|off controls IGD GFX passthrough support (default=off).
In most cases you will want to use a discrete graphics card with Windows.

-cpu host,kvm=off
This tells qemu to emulate the host’s exact CPU. There are more options, but it’s best to stay with host.
The kvm=off option is only needed for Nvidia graphics cards – if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify -cpu host.

-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. -smp 4 tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. It’s probably best not to assign all CPU resources to the Windows VM – the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources in the host). In the above example I gave Windows 4 virtual processors. sockets=1 specifies the number of actual CPU sockets qemu should assign, cores=2 tells qemu to assign 2 processor cores to the VM, and threads=2 specifies 2 threads per core. It may be enough to simply specify -smp 4, but I’m not sure about the performance consequences (if any).
If you have a 4-core Intel CPU with hyperthreading, you can specify -smp 6,sockets=1,cores=3,threads=2 to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games and applications.
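The sockets/cores/threads values you choose can be cross-checked against the host topology; a quick sketch using lscpu:

```shell
# Show the host topology; the VM's sockets x cores x threads
# should stay below these totals so the host keeps some capacity.
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'
```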

-m 8G
The -m option assigns memory (RAM) to the VM, in this case 8 GByte. Same as -m 8192. You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn’t make sense to give it less than 4G, unless you are really stretched with RAM. If you use hugepages, make sure your hugepage size matches this!

-mem-path /dev/hugepages
This tells qemu where to find the hugepages we reserved. If you haven’t configured hugepages, you need to remove this option.

-mem-prealloc
Preallocates the memory we assigned to the VM.

-balloon none
We don’t want memory ballooning (as far as I know Windows won’t support it anyway).

-rtc clock=host,base=localtime
-rtc clock=host tells qemu to use the host clock for synchronization. base=localtime allows the Windows guest to use the local time from the host system. Another option is utc.

-vga none
Disables the built-in graphics card emulation. You can remove this option for debugging.

-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don’t get to the Tiano Core screen.

-serial none
-parallel none
Disable serial and parallel interfaces. Who needs them anyway?

-soundhw hda
Together with the export QEMU_AUDIO_DRV=pa shell command, this option enables sound through PulseAudio.
If you want to pass through a physical audio card or audio device and stream audio from your Linux host to your Windows guest, see here: Streaming Audio from Linux to Windows.

-usb \
-device usb-host,vendorid=0x045e,productid=0x076c
-usb enables USB support, and -device usb-host,… assigns the USB host devices – mouse (045e:076c) and keyboard (045e:0750) – to the guest. Replace the device IDs with the ones you found using the lsusb command in Part 3 above!
Note the new syntax. There are also many more options that you can find here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.
There are two options to assign host devices to guests. Here is the syntax of both:
-device usb-host,hostbus=bus,hostaddr=addr
passes through the host device identified by bus and addr;
-device usb-host,vendorid=0xVVVV,productid=0xPPPP
passes through the host device identified by vendor and product ID.

-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
Here we specify the graphics card to pass through to the guest, using vfio-pci. Fill in the PCI IDs you found under Part 3 above. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.0 and 02:00.1 in my case).

-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn’t contain the variables, which are loaded separately (see right below).

-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.

-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the “d” to boot straight from disk.

-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. With the options above it will be accessed as a paravirtualized (if=virtio) drive in raw format (format=raw).
Important: file=/… enter the path to your previously created win.img file.
Other possible drive options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for entire disks or partitions.
For a list of useful -drive options, see my post here.

-drive file=/home/user/ISOs/win10.iso,index=1,media=cdrom \
This attaches the Windows win10.iso as CD or DVD. The driver used is the ide-cd driver.
Important: file=/… enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=2,media=cdrom \
This attaches the virtio ISO image as CD. Note the different index.
Important: file=/… enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note 1: There are many ways to attach ISO images or drives and invoke drivers. My system didn’t want to take a second scsi-cd device, so this option did the job. Unless this doesn’t work for you, don’t change.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network interface and network driver. It’s best to define a MAC address, here 00:16:3e:00:01:01. The MAC is specified in Hex and you can change the last :01:01 to your liking. Make sure no two MAC addresses are the same!
vhost=on is optional – some people reported problems with this option. It is for network performance improvement.
For more information, see the links under Part 15 – References.
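If you run several VMs, you can generate a fresh MAC in the same 00:16:3e prefix used above instead of editing the last octets by hand. A small sketch:

```shell
# Print a random MAC within the 00:16:3e prefix, so no two VMs
# end up with the same address (od pulls 3 random bytes).
printf '00:16:3e:%02x:%02x:%02x\n' $(od -An -N3 -tu1 /dev/urandom)
```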

Important: Documentation on the installed QEMU can be found here: file:///usr/share/doc/qemu-system-common/qemu-doc.html.

The latest stable version is 2.11. For additional documentation on QEMU, see the links under Part 15 – References; some configuration examples are also installed together with the QEMU documentation.

Part 11 – Install Windows

Start the VM by running the script as root:
sudo ./<your-script>.sh
(Make sure you specify the correct path and the file name you chose.)

You should get a Tiano Core splash screen with the memory test result.

You might land in an EFI shell. Type exit and you should be getting a menu. Select the boot disk and hit Enter.

Then the Windows ISO boots and asks you to:
Press any key to start the CD / DVD…

Press a key!

Windows will then ask you to:
Select the driver to install

Click “Browse”, then select your VFIO ISO image and go to “viostor“, open it and select your Windows version (w10 for Windows 10), then select the “AMD64” version for 64-bit systems and click OK. (Note: Instead of the viostor driver, you can also install the vioscsi driver. See the qemu documentation for the proper syntax in the qemu command.)

Windows will ask for the license key, and you need to specify how to install – choose “Custom”. Then select your drive (there should be only disk0) and install.

Windows may reboot several times. When it is done rebooting, open the Device Manager and find the network interface. Right-click it, select “Update driver”, then browse to the virtio driver disk and install NetKVM.

Windows should be looking for a display driver by itself. If not, install it manually.

Note: In my case, Windows did not correctly detect my drives being SSD drives. Not only will Windows 10 perform unnecessary disk optimization tasks, but these “optimizations” can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:

1. Inside Windows 10, right-click the Start menu.
2. Select “Command prompt (admin)”.
3. At the command prompt, run:
winsat formal
4. It will run a while and then print the Windows Experience Index (WEI).
5. Please share your WEI in a comment below!

To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click “This PC” in the left tab.
3. Right-click your drive (e.g. C:) and select “Properties”.
4. Select the “Tools” tab.
5. Click “Optimize”
You should see something similar to this:

SSD optimization
Use Optimize Drives to optimize for SSD

In my case, I have drive C: (my Windows 10 system partition) and a “Recovery” partition located on an SSD, the other two partitions (“photos” and “raw_photos”) are using regular hard drives (HDD). Notice the “Optimization not available” 😀 .

Turn off hibernation and suspend! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn them off, follow the instructions for hibernation and suspend.

Turn off fast startup! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you’re screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.

By now you should have a working Windows VM with VGA passthrough.

Part 12 – Troubleshooting

Below are a number of common issues when trying to install/run Windows in a VGA passthrough environment.

VM not starting – graphics driver

A common issue is the binding of a driver to the graphics card we want to pass through. As I was writing this how-to and made changes to my (previously working) system, I suddenly couldn’t start the VM anymore. The first thing to check if you don’t get a black Tianocore screen is whether or not the graphics card you try to pass through is bound to the vfio-pci driver:
dmesg | grep -i vfio
The output should be similar to this:
[ 2.735931] VFIO – User Level meta-driver version: 0.3
[ 2.757208] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.773223] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.437128] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)

The above example shows that the graphics card is bound to the vfio-pci driver (see last line), which is what we want. If the command doesn’t produce any output, or a very different one from above, something is wrong. To check further, enter:
lspci -k | grep -i -A 3 vga
Here is what I got when my VM wouldn’t want to start anymore:
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
Subsystem: NVIDIA Corporation GF106GL [Quadro 2000]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361

Graphics card 01:00.0 (Quadro 2000) uses the Nvidia driver – just what I want.

Graphics card 02:00.0 (GTX 970) also uses the Nvidia driver – that is NOT what I was hoping for. This card should be bound to the vfio-pci driver. So what do we do?

In Linux Mint, click the menu button, click “Control Center”, then click “Driver Manager” in the Administration section. Enter your password. You will then see the drivers associated with the graphics cards. Change the driver of the graphics card so it will use the opensource driver (in this example “Nouveau”) and press “Apply Changes”. After the change, it should look similar to the photo below:

Driver Manager
Driver Manager in Linux Mint

If the above doesn’t help, or if you can’t get rid of the nouveau driver, see if the nvidia-fallback.service is running. If yes, it will load the open-source nouveau driver whenever it can’t find the Nvidia proprietary driver. You need to disable it:

sudo systemctl disable nvidia-fallback.service
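The sysfs checks above can be scripted. Here is a minimal sketch (the helper name driver_of is mine, not part of the tutorial); it reads the same driver symlink that lspci -k reports:

```shell
#!/bin/sh
# Print the kernel driver currently bound to a PCI device, or "none".
# Once passthrough is set up correctly, the GPU (e.g. 0000:02:00.0)
# should report "vfio-pci".
driver_of() {
    if [ -e "/sys/bus/pci/devices/$1/driver" ]; then
        basename "$(readlink "/sys/bus/pci/devices/$1/driver")"
    else
        echo none
    fi
}

driver_of 0000:02:00.0
```

Adjust the PCI address to the card you pass through.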

BSOD when installing AMD Crimson drivers under Windows

Several users on the Redhat VFIO mailing list have reported problems with the installation of AMD Crimson drivers under Windows. This seems to affect a number of AMD graphics cards, as well as a number of different AMD Crimson driver releases. A workaround is described here:

In this workaround the following line is added to the startup script, right above the definition of the graphics device:
-device ioh3420,bus=pci,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Should the above configuration give you a “Bus ‘pci’ not found” error, change the line as follows:
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1

Then you change the graphics card passthrough options as follows:
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \

Identical graphics cards for host and guest

If you use two identical graphics cards for both the Linux host and the Windows guest, follow these instructions:

Modify the /etc/modprobe.d/local.conf file as follows:

install vfio-pci /sbin/

Create a /sbin/ file with the following content:

#!/bin/sh
DEVS="0000:02:00.0 0000:02:00.1"

for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

Make the file executable:

chmod u+x /sbin/

Windows ISO won’t boot – 1

If you can’t boot the Windows ISO, it may be necessary to run a more recent version of QEMU that includes features or work-arounds for the problem. If you require a newer version of QEMU (version 2.12 as of this update), add the following PPA (warning: this is not an official repository – use at your own risk). At the terminal prompt, enter:
sudo add-apt-repository ppa:jacob/virtualisation

Windows ISO won’t boot – 2

Sometimes the OVMF BIOS files from the official Ubuntu repository don’t work with your hardware and the VM won’t boot. In that case you can download alternative OVMF files, for example the nightly edk2 builds.
Download the latest edk2.git-ovmf-x64 file, as of this update it is “edk2.git-ovmf-x64-0-20180807.221.g1aa9314e3a.noarch.rpm” for a 64bit installation. Open the downloaded .rpm file with root privileges and unpack to /.
Copy the following files:
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd /usr/share/OVMF/OVMF_CODE.fd

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /usr/share/OVMF/OVMF_VARS.fd

Check whether the /usr/share/qemu/OVMF.fd link exists; if not, create it:
sudo ln -s '/usr/share/ovmf/OVMF.fd' '/usr/share/qemu/OVMF.fd'
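Before launching the VM, you can sanity-check that the firmware files referenced by the startup script are actually in place. A hypothetical helper (check_files is my own name, not part of the tutorial):

```shell
#!/bin/sh
# Report any missing or unreadable firmware files before qemu
# fails with a less obvious error message.
check_files() {
    for f in "$@"; do
        [ -r "$f" ] || echo "missing: $f"
    done
}

check_files /usr/share/OVMF/OVMF_CODE.fd /usr/share/OVMF/OVMF_VARS.fd /usr/share/qemu/OVMF.fd
```

No output means all files are present and readable.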

Windows ISO won’t boot – 3

Sometimes the Windows ISO image is corrupted, or simply an old version that doesn’t work with passthrough. Go to Microsoft’s official download page and download the ISO you need (check your software license). Then try again.

Motherboard BIOS bugs

Some motherboard BIOSes have bugs and prevent passthrough. Use “dmesg” and look for entries like these:
[ 0.297481] [Firmware Bug]: AMD-Vi: IOAPIC[7] not in IVRS table
[ 0.297485] [Firmware Bug]: AMD-Vi: IOAPIC[8] not in IVRS table
[ 0.297487] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found in IVRS table
[ 0.297490] AMD-Vi: Disabling interrupt remapping due to BIOS Bug(s)
If you find entries that point to a faulty BIOS or problems with interrupt remapping, go to Easy solution to get IOMMU working on mobos with broken BIOSes. (All credits go to leonmaxx on the Ubuntu forum!)

Intel IGD and arbitration bug

For users of Intel CPUs with IGD (Intel graphics device): The Intel i915 driver has a bug that necessitated a kernel patch named the i915 VGA arbiter patch. According to developer Alex Williamson, this patch is needed whenever you have host Intel graphics and make use of the x-vga=on option. This tutorial, however, does NOT use the x-vga option; it is based on UEFI boot and doesn’t use VGA. That means you do NOT need the i915 VGA arbiter patch!

In some cases you may need to stop the i915 driver from loading by adding nomodeset to the following line in /etc/default/grub as shown:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset intel_iommu=on"

Then run:
sudo update-grub

nomodeset prevents the kernel from loading video drivers and tells it to use BIOS modes instead, until X is loaded.

IOMMU group contains additional devices

When checking the IOMMU groups, your graphics card’s video and audio functions should be the only two entries in their IOMMU group. The same goes for any other PCI device you want to pass through: you must pass through all devices within an IOMMU group, or none. If – aside from the PCI device(s) you wish to pass to the guest – there are other devices within the same IOMMU group, see What if there are other devices in my IOMMU group for a solution.

For more information on IOMMU groups, see here.
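The group layout can be listed with a short loop over sysfs. This sketch is equivalent to the find command used in Part 14; the root parameter is my own addition so the function can be exercised against a test directory:

```shell
#!/bin/sh
# Print one "group N: <pci-address>" line per device by walking
# /sys/kernel/iommu_groups (or the directory given as $1).
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for d in "$root"/*/devices/*; do
        [ -e "$d" ] || continue
        g="${d%/devices/*}"
        printf 'group %s: %s\n' "${g##*/}" "${d##*/}"
    done
}

list_iommu_groups
```

Devices sharing a group number must be passed through together, or not at all.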

Dual-graphics laptops (e.g. Optimus technology)

A quote from developer Alex Williamson:

OVMF is the way to go if you want to avoid patching your kernel, … if your GPU and guest OS support UEFI.

Dual-graphics laptops are tricky. There are no guarantees that any of this will work, especially with the custom graphics implementations found on laptops. The discrete GPU may not be directly connected to any of the outputs, so “remoting” the graphics internally may be the only way to get to the guest desktop. It’s possible that the GPU does not have a discrete ROM, instead hiding it in ACPI or elsewhere to be extracted. Some hybrid graphics laptops require custom drivers from the vendor. The more integrated the GPU is into the system, the less likely it is to behave like a discrete desktop GPU.

That said, there is light on the horizon. Misairu_G (username on forum) published a Guide to VGA passthrough on Optimus laptops. You may want to consult that guide if you use a laptop with Nvidia graphics.

User bash64 on the Linux Mint forum has reported success with only minor modifications to this tutorial. The main deviations were:

  1. The nomodeset option in the /etc/default/grub file (see “Intel IGD and arbitration bug” above)
  2. Seabios instead of UEFI / ovmf
  3. Minor modifications to the qemu script file

Issues with Skylake CPUs

Another issue has come up with Intel Skylake CPUs. This problem is likely solved by now. Update to a recent kernel (e.g. 4.15), as described above.

In case the kernel upgrade doesn’t solve the issue, a patch is available; other possible solutions are linked in the references in part 15.

If you haven’t found a solution to your problem, check the references in part 15. You are also welcome to leave a comment and I or someone else may come to the rescue.

Part 13 – Run Windows VM in user mode (non-root)

Running your Windows VM in user mode (non-root) has become easy:

  1. Add your user to the kvm group:
    sudo usermod -a -G kvm myusername
    Note: Always replace “myusername” with your user name.
  2. Reboot (or logout and login) to see your user in the kvm group.
  3. If you use hugepages, make sure they are properly configured. Open your /etc/fstab file and compare your hugetlbfs configuration with the following:
    hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0
    Note that the gid=129 might be different on your system. The rest should be identical! Now enter the following command:
    getent group kvm
    This should return something like:
    kvm:x:129:myusername
    Check that the group ID (gid, here 129) matches the gid= entry in fstab. If not, edit the fstab file to have the gid= entry match the gid number you got using getent.
  4. Edit the Windows VM start script and add the following line below the entry “cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd“:
    chown myusername:kvm /tmp/my_vars.fd
    Next add the following entry right under the qemu-system-x86_64 \ entry, on a separate line:
    -runas myusername \
    Save the file and start your Windows VM. You will still need sudo to run the script, since it performs some privileged tasks, but the guest will run in user mode with your user privileges.
  5. After booting into Windows, switch to the Linux host and run top in a terminal:
    top
    Your output should be similar to this:

    kvm Windows VM
    top shows Windows 10 VM running with user privileges

Notice the win10vm entry associated with my user name “heiko” instead of “root”.

Please report if you encounter problems (use comment section below).

For other references, see the following tutorial:

Part 14 – Passing more PCI devices to guest

If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. Moreover, DO NOT PASS root devices to your guest. To check which PCI devices reside under the same group, use the following command:
find /sys/kernel/iommu_groups/ -type l
The output on my system is:

As you can see in the above list, some IOMMU groups contain multiple devices on the PCI bus. I wanted to see which devices are in IOMMU group 17 and used the PCI bus ID:
lspci -nn | grep 00:1f.
Here is what I got:
00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)

All of the listed devices are used by my Linux host:
– The ISA bridge is a standard device used by the host. You do not pass it through to a guest!
– All my drives are controlled by the host, so passing through a SATA controller would be a very bad idea!
– Do NOT pass through a host controller, such as the C600/X79 series chipset SMBus Host Controller!

In order to pass through individual PCI devices, edit the VM startup script and insert the following code underneath the vmname=… line:

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

Underneath the line containing “else”, insert:

cat $configfile | while read line; do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
You need to create a vfio-pci.cfg file in /etc containing the PCI bus numbers as follows:
Make sure the file does NOT contain any blank line(s). Replace the PCI IDs with the ones you found!
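Since a malformed /etc/vfio-pci.cfg silently breaks the bind loop, a quick validity check can help. This is a hypothetical helper of my own (check_cfg is not part of the tutorial); it accepts # comments, as the script’s read loop does, and flags anything that is not a full PCI address – including blank lines:

```shell
#!/bin/sh
# Verify that every non-comment line is a PCI address like 0000:08:00.0.
check_cfg() {
    while read -r line; do
        case "$line" in '#'*) continue ;; esac
        echo "$line" | grep -Eq '^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]$' \
            || { echo "bad entry: $line"; return 1; }
    done < "$1"
}

if [ -f /etc/vfio-pci.cfg ]; then
    check_cfg /etc/vfio-pci.cfg && echo "vfio-pci.cfg looks OK"
fi
```

Run it after editing the file; any bad entry is printed before the startup script ever touches sysfs.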

Part 15 – References

For documentation on qemu/kvm, see the following directory on your Linux machine: /usr/share/doc/qemu-system-common

- a new “online news publication with a razor focus on virtualization and linux gaming, as well as developments in open source technology”
- the Reddit r/VFIO subreddit to discuss all things related to VFIO and gaming on virtual machines in general
- a well written tutorial offering qemu script and virt-manager as options
- nice and detailed tutorial for a Ryzen based system using the Virtual Machine Manager GUI
- a new Ubuntu 16.04 tutorial using virt-manager
- QEMU user manual
- this gave me the inspiration – the best thread on kvm!
- Ubuntu tutorial
- Arch Linux documentation on QEMU – by far the best
- PCI passthrough via OVMF tutorial for Arch Linux – provides excellent information
- a source for the ACS and i915 arbiter patches
- one of the developers, Alex provides invaluable information and advice
- Redhat is the key developer of kvm, their website has lots of information
- Arch Linux KVM page
- Suse Linux documentation on KVM – good reference
- haven’t tried it, but looks like a good tutorial
- tutorial with Youtube video to go along, very useful and up-to-date, including how to apply ACS override patch
- if you want to play with virt-manager, you’ll need to dabble in libvirt

Below is the VM startup script I use, for reference only.
Note: The script is specific for my hardware. Don’t use it without modifying it!

#!/bin/bash

vmname="win10vm"
configfile=/etc/vfio-pci.cfg

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
    zenity --info --window-icon=info --timeout=15 --text="A passthrough VM is already running." &
    exit 1
fi

cat $configfile | while read line; do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
# use pulseaudio
#export QEMU_AUDIO_DRV=pa
#export QEMU_PA_SAMPLES=8192
#export QEMU_PA_SERVER=/run/user/1000/pulse/native
#export QEMU_PA_SINK=alsa_output.pci-0000_06_04.0.analog-stereo
#export QEMU_PA_SOURCE=input

#use ALSA
export QEMU_AUDIO_DRV=alsa

cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
chown heiko:kvm /tmp/my_vars.fd

#taskset -c 0-9
qemu-system-x86_64 \
-runas heiko \
-monitor stdio \
-serial none \
-parallel none \
-nodefaults \
-nodefconfig \
-name $vmname,process=$vmname \
-machine q35,accel=kvm,kernel_irqchip=on \
-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
-smp 12,sockets=1,cores=6,threads=2 \
-m 16G \
-mem-path /dev/hugepages \
-mem-prealloc \
-balloon none \
-rtc base=localtime,clock=host \
-vga none \
-nographic \
-soundhw hda \
-device vfio-pci,host=02:00.0,multifunction=on \
-device vfio-pci,host=02:00.1 \
-device vfio-pci,host=00:1a.0 \
-device vfio-pci,host=08:00.0 \
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-boot order=c \
-drive id=disk0,if=virtio,cache=none,format=raw,aio=native,discard=unmap,detect-zeroes=unmap,file=/dev/mapper/lm13-win10 \
-drive id=disk1,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/photos-photo_stripe \
-drive id=disk2,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/media-photo_raw \
-netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01


#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/virtio-win-0.1.140.iso,index=3,media=cdrom \
#-drive file=/media/heiko/tmp_stripe/OS-backup/ISOs/Win10_1803_English_x64.iso,index=4,media=cdrom \

#-global ide-drive.physical_block_size=4096 \
#-global ide-drive.logical_block_size=4096 \
#-global virtio-blk-pci.physical_block_size=512 \
#-global virtio-blk-pci.logical_block_size=512 \

exit 0


The command
taskset -c 0-9 qemu-system-x86_64...

pins the vCPUs of the guest to processor threads 0-9 (I have a 6-core CPU with 2 threads per core = 12 threads). Here I assign 10 out of 12 threads to the guest. While the guest is running, the host will have to make do with only 1 core (2 threads). CPU pinning may improve performance of the guest.

Note: I am currently passing through all cores and threads, without CPU pinning. This seems to give me the best results in the benchmarks, as well as real-life performance.

Part 16 – Related Posts

Here a list of related posts:

Developments in Virtualization

Virtual Machines on Userbenchmark

qemu-system-x86_64 Drive Options

Part 17 – Benchmarks

I have a separate post showing Passmark benchmarks of my system.

Here the UserBenchmark results for my configuration:

UserBenchmarks: Game 60%, Desk 71%, Work 64%
CPU: Intel Core i7-3930K – 79.7%
GPU: Nvidia GTX 970 – 60.4%
SSD: Red Hat VirtIO 140GB – 74.6%
HDD: Red Hat VirtIO 2TB – 64.6%
HDD: Red Hat VirtIO 2TB – 66.1%
RAM: QEMU 20GB – 98.2%
MBD: QEMU Standard PC (Q35 + ICH9, 2009)

Part 18 – Performance Tuning

I keep updating this chapter, so expect more tips to be added here in the future.

Enable Hyper-V Enlightenments

As funny as this sounds, this is another way to improve Windows performance under kvm. Hyper-V enlightenments are easy to implement: In the script that starts the VM, change the following line:
-cpu host,kvm=off \

to:

-cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \

The above is one line! To check that it actually works, start your Windows VM and switch to Linux. Open a terminal window and enter (in one line):
ps -aux | grep qemu | grep -Eo 'hv_relaxed|hv_spinlocks=0x1fff|hv_vapic|hv_time'

You should get an output similar to:

hv_vapic
hv_time
hv_relaxed
hv_spinlocks=0x1fff

For more on Hyper-V enlightenments, see here.

Configure hugepages

This step is not required to run the Windows VM, but helps improve performance. First we need to decide how much memory we want to give to Windows. Here my suggestions:

  1. No less than 4GB. Use 8GB or more for a Windows gaming VM.
  2. If you got 16GB total and aren’t running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.

For this tutorial I use 8GB. Hugepages are enabled by default in the latest releases of Linux Mint (since 18) and Ubuntu (since 16.04). For more information or if you are running an older release, see KVM – Using Hugepages.

Let’s see what we got:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 0 0 0 *
1073741824 0 0 0

As you can see, hugepages are mounted to /dev/hugepages, and the default hugepage size is 2097152 Bytes/(1024*1024)=2MB.

Another way to get information about hugepages:
grep "Huge" /proc/meminfo

AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Here some math:
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB

Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages
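The arithmetic above can be expressed as a one-line shell function (pages_needed is my own name, not part of the tutorial; the inputs are guest RAM in MB and the hugepage size in KB as reported in /proc/meminfo):

```shell
#!/bin/sh
# pages = (guest RAM in MB * 1024) / hugepage size in KB
pages_needed() {
    echo $(( $1 * 1024 / $2 ))
}

pages_needed 8192 2048   # 8 GB guest with 2 MB hugepages
```

The printed number is the value to use for hugepages= in the GRUB configuration below.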

We configure the hugepage pool during boot. Open the file /etc/default/grub as root:
xed admin:///etc/default/grub

Look for the GRUB_CMDLINE_LINUX_DEFAULT="…" line we edited before and add hugepages=4096.

This is what I have:
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on hugepages=4096"

Save and close. Then run:
sudo update-grub

Now reboot for our hugepages configuration to take effect.

After the reboot, run in a terminal:
hugeadm --explain

Total System Memory: 24108 MB

Mount Point Options
/dev/hugepages rw,relatime,pagesize=2M

Huge page pools:
Size Minimum Current Maximum Default
2097152 4096 4096 4096 *
1073741824 0 0 0

Huge page sizes with configured pools:

The /proc/sys/vm/min_free_kbytes of 67584 is too small. To maximise efficiency of fragmentation avoidance, there should be at least one huge page free per zone in the system which minimally requires a min_free_kbytes value of 112640

A /proc/sys/kernel/shmmax value of 17179869184 bytes may be sub-optimal. To maximise shared memory usage, this should be set to the size of the largest shared memory segment size you want to be able to use. Alternatively, set it to a size matching the maximum possible allocation size of all huge pages. This can be done automatically, using the –set-recommended-shmmax option.

The recommended shmmax for your currently allocated huge pages is 8589934592 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
kernel.shmmax = 8589934592

To make your hugetlb_shm_group settings persistent, add the following line to /etc/sysctl.conf:
vm.hugetlb_shm_group = 129

Note: Permanent swap space should be preferred when dynamic huge page pools are used.

Note the sub-optimal shmmax value. We fix it permanently by editing /etc/sysctl.conf:
xed admin:///etc/sysctl.conf

and adding the following lines:
kernel.shmmax = 8589934592
vm.hugetlb_shm_group = 129
vm.min_free_kbytes = 112640

Note 1: Use the values recommended by hugeadm --explain!

Regarding vm.hugetlb_shm_group = 129: “129” is the GID of the group kvm. Check with:
getent group kvm
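Rather than hard-coding the GID, you can extract it from the getent output when writing the sysctl and fstab entries. A sketch (gid_of is my own helper name): getent group output has the form name:passwd:gid:members, and field 3 is the number we need.

```shell
#!/bin/sh
# Extract the numeric GID (field 3) from a getent group line.
gid_of() {
    echo "$1" | cut -d: -f3
}

kvm_gid=$(gid_of "$(getent group kvm)")
echo "vm.hugetlb_shm_group = $kvm_gid"
```

The printed line can be pasted into /etc/sysctl.conf; the same number goes into the gid= option in /etc/fstab.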

Run sudo sysctl -p to put the new settings into effect. Then edit the /etc/fstab file to configure the hugepages mount point with permissions  and group ID (GID):
xed admin:///etc/fstab

Add the following line to the end of the file and save:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=129 0 0

It’s best to add your user to the kvm group (GID 129), so you’ll have permissions to access the hugepages:
sudo usermod -a -G kvm user
where “user” is your user name.

Check the results with hugeadm --explain.

Now we need to edit the script file that contains the qemu command and add the following lines under -m 8G \:
-mem-path /dev/hugepages \
-mem-prealloc \

Reboot the PC for the fstab changes to take effect.

Turn on MSI Message Signaled Interrupts in your VM

Developer Alex Williamson argues that MSI Message Signaled Interrupts may provide a more efficient way to handle interrupts. A detailed description on how to turn on MSI in a Windows VM can be found here: Line-Based vs. Message Signaled-Based Interrupts.

Make sure to backup your entire Windows installation, or at least define a restore point for Windows.

In my case it improved sound quality (no more crackle), others have reported similar results – see these comments.

Low 2D Graphics Performance

Recent Windows updates have added protection against the Spectre vulnerability by means of an Intel microcode update. This update has caused a significant drop in 2D graphics performance under Windows.

The reason for the performance drop might be a bug in kvm/qemu, and the work-around described in my separate post should only be used in emergencies, if and when 2D graphics performance is essential to the applications you run.

SR-IOV and IOMMU Pass Through

Some devices support a feature called SR-IOV, or Single Root Input/Output Virtualisation. It allows multiple virtual machines to access PCIe hardware using virtual functions (VF), thus improving performance. See Understanding the iommu Linux grub File Configuration.

The SR-IOV feature needs to be enabled in the BIOS, as well as the drivers. See here for an example.

In some cases performance can be further improved by adding the “pass through” option iommu=pt to the /etc/default/grub file:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Followed by sudo update-grub

Help Support this Website

If you find this information helpful, consider a contribution.


27 thoughts on “Running Windows 10 on Linux using KVM with VGA Passthrough”

  1. Hello Mathias,
    I have been traveling a lot lately, so forgive my late reply. In answer to your question: I don’t use a virsh windows10.xml file, just the script and qemu.
    Most other tutorials use libvirt and virt-manager to create and manage the VM, but I had not much luck with it.


  2. Thanks for the write-up! Still struggling through it on a Metabox (Clevo) P950ER

    I have Ubuntu 18.04 and have spent far too much time trying to disable the nouveau driver from loading.

    – kernel parameters tried:
    — modprobe.blacklist=nouveau
    — nouveau.blacklist=1
    – modprobe.d/* lines:
    — blacklist nouveau
    — options nouveau modeset=0
    — install nouveau /bin/true
    –alias nouveau off

    After all that I found there was a “nvidia-fallback.service” which tries to load nouveau if nvidia isn’t.
    One-off command to disable it:

    # systemctl disable nvidia-fallback.service


  3. Hi guys,

    I just attempted this cool project with:
    CPU: intel core i5-6600k
    MEM: 16GB DDR4
    MB: AsRock Z170 Pro4
    Vid Primary: intel IGP
    Vid VM: EVGA Geforce GTX 1060
    HD’s: A couple spinning drives kicking around
    OS: Linux Mint 19

    I was able to get the script fully functional because I want to use an entire drive instead of an image, but for some reason using /dev/sdb1 wouldn’t work no matter how it was formatted, so I adapted your tutorial to libvirt and virt-manager and was able to get Windows installed with networking and a separate keyboard/mouse transferring over. The only issue I am having is that my graphics card is throwing up a Code 43 error in windows, but I’m not sure what to look for to solve the issue and get the Nvidia drivers to actually work. Any help you can give would be greatly appreciated!



  4. Ahh this seems the right place, I think my previous post was in the wrong place. So I got windows installed in my VM. The issue I am currently having is i can get EITHER my Keyboard OR Mouse to passthru, but not both. Both devices are on Bus 002 and everytime I issue the -device command it seems to overwrite the previous -device command. How can I get them both to work at the same time? As always ANY response is greatly appreciated!

  5. @Brian:
    Make sure to follow part 3 and 4 by the letter. First identify the USB IDs of the keyboard and the mouse. They should be something similar to 045e:076c (the first part 045e is the vendor code, the second 076c is the device code). For two devices you need two such entries.
    You need to edit or create /etc/modprobe.d/local.conf and add something like:
    options vfio-pci ids=10de:13c2,10de:0fbb

    Change the IDs to reflect your own vendor and product IDs.

    Another way to pass through mouse and keyboard is by passing through a USB host controller. This is a PCI passthrough using the -device option in the qemu command. First you need to determine the USB controller:
    lspci | grep USB
    My output is:
    00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 05)
    00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 05)
    07:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    09:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    0f:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)

    To use the USB controller at PCI bus 08:00.0, you need to do the following:
    1. Edit the start script and add underneath the vmname entry:

    vfiobind() {
        dev="$1"
        vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
        device=$(cat /sys/bus/pci/devices/$dev/device)
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
            echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    }

    if ps -A | grep -q $vmname; then
        echo "$vmname is already running."
        exit 1
    fi

    cat $configfile | while read line; do
        echo $line | grep ^# >/dev/null 2>&1 && continue
        vfiobind $line
    done
    In addition, you need to add the following line to the qemu command in the script (just underneath the graphics card device):
    -device vfio-pci,host=08:00.0 \

    Save the script file.
    Then you need to create /etc/vfio-pci.cfg and enter the following:

    0000:08:00.0

    There must not be any blank line, or any comment – just one single line specifying the PCI bus address of the USB controller.

    The USB controller must reside within its own IOMMU group. To check, use the following command:
    for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort

    Hope this helps.

  6. I have been able to get Win10 guest working well on LM19 host, but not without some struggle…particularly with keyboard and mouse passthrough. After spending some time with the qemu 2.11.1 manpages (and Google), I came up with a solution which works well, doesn’t require explicit bus mapping, and allows me to switch back and forth between guest and host using normal KVM window release (CTRL+ALT).

    My setup is single monitor – I use CPU graphics for the host and pass through my Nvidia card to the guest. Both are wired to my monitor, and once I have launched my VM and clicked inside the window to initiate mouse and keyboard capture, I can switch inputs on my monitor to run Win10 fullscreen from the discrete card's output. This would of course work equally well on a multiple monitor setup. Here is how I got it working –

    As you noted in your post, QEMU syntax for USB device passthrough has changed since your original posting. The -device parameter is now used to define USB device passthrough, and I use the following syntax in my script file:

    -usb \
    -device usb-kbd \
    -device usb-mouse \

    I believe that code was added to QEMU which allows for keyboard and mouse detection without having to explicitly map the bus and device strings.

    Using these keyboard and mouse passthrough options and by removing the -vga none and -nographic parameters, I am able to switch back and forth between VM and host without a KVM/USB switch…just have to change the input on my monitor.

  7. I’m still struggling with my laptop (Metabox (Clevo) P950ER) to pass through the Nvidia GPU. I think the GPU BIOS is embedded in the system BIOS, so I’ll need to do custom ACPI table tricks to make it work, which I’m not sure anybody has succeeded with on a Windows guest.
    There is another possible direction for those in my position: most of the Intel CPUs with built-in GPUs have virtualization features which can time-slice the GPU (this is called GVT-g), and the parts necessary are in kernel 4.16 and qemu 2.12, which is what I’ll be trying next…

  8. @Chris:
    -usb \
    -device usb-kbd \
    -device usb-mouse \

    Thanks for your comment. I’ve actually tried your approach and it failed for me (using a Logitech wireless multi-device mouse and keyboard). However, with a regular mouse and keyboard your script should work just fine.

  9. @Pipandwoody:

    You most likely have a laptop using Nvidia “Optimus” technology. This can be tricky, though some people actually succeeded! Follow the links found here:

    EDIT: If I remember correctly, you need to use the SeaBIOS method, not the UEFI boot method I describe in this tutorial. There are instructions on how to do that in the links I provided.

    The vGPU support is indeed good news. HOWEVER, why would you choose the low-performance Intel IGD (integrated graphics device) inside the CPU over the more powerful discrete (Nvidia) GPU? Note that this vGPU technology currently only works with Intel, not Nvidia. Nvidia has its own vGPU technology for its professional line of multi-user graphics cards, but that has nothing to do with your laptop GPU.

    Moreover, you would need to upgrade both your kernel (the current Linux Mint 19 / Ubuntu 18.04 kernel is 4.15) and qemu (2.11 as of now).

    Unless you have a good reason to try vGPU, I would try the other options I mentioned.

    1. I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…

    2. Pipandwoody: “I’m back on board now; in trying to get the Intel GVT-g stuff going figured out that I have a Coffee/Whisky/Cannon Lake processor which is not supported by the Intel linux driver and is unlikely to be in the future…”

      That’s a bummer.

  10. Benchmark results – Win10/8GB/SSD/GTX1070/6770K – no overclock

    CPU: Intel Core i7-6700K – 98.6%
    GPU: Nvidia GTX 1070 – 103.4%
    SSD: Red Hat VirtIO 161GB – 147.8%
    RAM: QEMU 1x8GB – 97.8%
    MBD: QEMU Standard PC (Q35 + ICH9, 2009)

    Rating: 5360.2

    Overall I am pleased with the performance…just need to eliminate the sound crackling issue and can call it done.

    1. That’s great news! Performance looks good, too.

      Regarding sound crackling: I use the following in my script:

      export QEMU_AUDIO_DRV=alsa

      Then in the qemu section:
      -soundhw hda \

      See also my tuning section
      and turn on MSI in the Windows guest. I think that should do it.

  11. Please note I updated my start script to improve the VM detection. The new code is:
    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
        echo "VM running"
        exit 1
    fi
    echo "Start the VM"

    I wanted to see if ANY passthrough VM is running, as I have now two different passthrough VMs (Windows 10 and Linux Mint 19) that obviously cannot run at the same time.
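    A variant of this check – my own sketch, not something tested on the author's setup – is to wrap the detection in a function that reads a process listing, which makes it easy to exercise with sample input. In the real script you would feed it `ps -ef`:

    ```shell
    #!/bin/sh
    # Sketch: detect a running passthrough VM from a process listing.
    # passthrough_running reads a ps-style listing on stdin and succeeds
    # if any qemu command line contains multifunction=on.
    passthrough_running() {
        grep qemu-system-x86_64 | grep -q multifunction=on
    }

    # Exercise with two made-up sample listings (in the real script:
    # `if ps -ef | passthrough_running; then ... exit 1; fi`):
    sample='root 123 1 0 10:00 ? 00:01:00 qemu-system-x86_64 -device vfio-pci,host=01:00.0,multifunction=on'
    echo "$sample" | passthrough_running && echo "VM running"
    echo 'user 456 1 0 10:00 pts/0 00:00:00 bash' | passthrough_running || echo "Start the VM"
    # prints "VM running" then "Start the VM"
    ```

    One caveat with the plain `ps | grep` pipeline: it can occasionally match its own grep process, so filtering on a string like multifunction=on that only appears in the qemu command line (as done here) is what keeps the check reliable.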

  12. I followed the instructions for creating a network bridge by adding br0 to the /etc/network/interfaces file. When I get to creating the VM with your script, should I be changing net0 to br0? I wasn’t quite sure how to set up that line in your script for the network.

    1. @David A:

      My command line is:
      -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

      Here is the output of brctl show, with no VM running:
      brctl show
      bridge name    bridge id           STP enabled    interfaces
      bridge0        8000.c860003147df   yes            eno1
      virbr0         8000.52540004e1c8   yes            virbr0-nic

      As you can see, you don’t use the name of the bridge you created in the qemu script.
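      For background – and purely as a hypothetical sketch, since the tutorial's own scripts handle this – the vmtap0 interface named in the -netdev line is typically created and attached to the bridge (assumed here to be bridge0) before qemu starts, roughly like this. The DRYRUN variable just prints the commands instead of running them, since they need root:

      ```shell
      #!/bin/sh
      # Print (rather than run) the commands that would create the tap
      # device "vmtap0" from the -netdev line and attach it to "bridge0".
      # Set DRYRUN= (empty) and run as root to actually execute them.
      DRYRUN=${DRYRUN:-echo}
      $DRYRUN ip tuntap add dev vmtap0 mode tap
      $DRYRUN ip link set dev vmtap0 master bridge0
      $DRYRUN ip link set dev vmtap0 up
      ```

      So the bridge name appears only in this host-side plumbing; the qemu command itself only ever refers to the tap interface.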

  13. Also an extremely important problem in the guide!!!!

    Windows will then ask you to:
    Select the driver to install

    Click "Browse", then select your VFIO ISO image and go to "vioscsi", open it and select your Windows version (w10 for Windows 10), then select the "amd64" version for 64-bit systems, and click OK.

    Vioscsi should be viostor….

    When navigating the iso the correct driver is in the viostor folder

    1. Thank you David. I need to check this part. You can use either the viostor or the vioscsi drivers, but the latter may need a modified qemu command.

    2. I fixed the tutorial.

      In practice, I haven’t found any significant performance difference between the two drivers. And I still need to get the hang of the new qemu syntax, but haven’t got the time to play around with it.
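      For reference – and as an assumption on my part, not something verified against this tutorial's exact setup: viostor is the virtio-blk driver and matches the `-drive if=virtio` syntax used in the tutorial, while vioscsi is the virtio-scsi driver and needs an explicit SCSI controller on the qemu command line. The sketch below just prints the two alternative option sets side by side (file paths are placeholders):

      ```shell
      #!/bin/sh
      # viostor (virtio-blk) - matches the syntax used in this tutorial:
      viostor_opts='-drive id=disk0,if=virtio,cache=none,format=raw,file=/path/to/win.img'

      # vioscsi (virtio-scsi) - needs a virtio-scsi controller plus a
      # scsi-hd device bound to the drive:
      vioscsi_opts='-device virtio-scsi-pci,id=scsi0
      -drive id=disk0,if=none,cache=none,format=raw,file=/path/to/win.img
      -device scsi-hd,drive=disk0,bus=scsi0.0'

      echo "$viostor_opts"
      echo "$vioscsi_opts"
      ```

      In other words, whichever driver you pick inside Windows has to match the disk device type on the qemu side.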

  14. Hey there,

    Sadly I was not yet able to get a vm to work. I was able to follow along until part 10, but there the trouble started.

    The "-cpu host,kvm=off \" part gets me "QEMU 2.11.1 monitor – type 'help' for more information" in the Terminal.
    Without the -cpu part, a QEMU window opens, with "Boot failed: could not read from CDROM (code 0003)" as an error. This happens with both a Windows 10 and an Ubuntu 18.04 iso file.
    After closing the QEMU window, the following messages appear in the terminal:
    "-smp: command not found"
    "-device: command not found" – the -device line is my GTX 970.

    I hope you can point me in the right direction. I really want to make this work.

    Regards SaBe3.1415

    The script:


    if ps -ef | grep qemu-system-x86_64 | grep -q multifunction=on; then
        echo "A passthrough VM is already running."
        exit 1
    fi


    # use pulseaudio
    export QEMU_AUDIO_DRV=pa
    export QEMU_PA_SAMPLES=8192
    export QEMU_PA_SERVER=/run/user/1000/pulse/native

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    qemu-system-x86_64 \
    -name $vmname,process=$vmname \
    -machine type=q35,accel=kvm \
    -cpu host,kvm=off \
    -smp 6,sockets=1,cores=3,threads=2 \
    -m 8G \
    -balloon none \
    -rtc clock=host,base=localtime \
    -vga none \
    -nographic \
    -serial none \
    -parallel none \
    -soundhw hda \
    #-usb-host,vendorid=1b1c,productid=1b2d \
    #-usb-host,hostbus=001,hostaddr=003 \
    #-usb-host,vendorid=214e,productid=0005 \
    #-usb-host,hostbus=001,hostaddr=005 \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
    -boot order=dc \
    -drive id=disk0,if=virtio,cache=none,format=raw,file=/home/toru/Downloads/Win.img \
    -drive file=/home/toru/Downloads/ubuntu-18.04.1-desktop-amd64.iso,index=1,media=cdrom \
    -drive file=/home/toru/Downloads/virtio-win-0.1.160.iso,index=2,media=cdrom \
    -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

    exit 0

    OS: Linux Mint 19 Cinnamon version 3.8.9
    Linux Kernel: 4.15.0-34-generic
    CPU: Intel i7 6700k
    RAM: 16GB
    GPU for the vm: GeForce GTX 970
    GPU for Mint : GeForce 210

    1. Remove the #-usb-host,vendorid=1b1c,productid=1b2d lines from your qemu command. If I remember correctly the qemu command doesn’t allow comments within the command string.

      You cannot use the virtio Windows drivers for Ubuntu.

      It happened to me too that I got the Qemu monitor and needed to continue by selecting the image to boot from. I’m traveling and cannot check the commands or sequence, but I hope this gets you further. Make sure your GTX970 has a UEFI BIOS and that it is within its own IOMMU group. There is a separate post on that.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.