I am trying to pass through the iGPU of my Framework 13 (AMD) to a Windows VM.
The host runs EndeavourOS, and I already set up the system as described in some single-GPU passthrough tutorials, but the Windows VM does not start: the screen goes dark and stays dark.
It looks like the amdgpu driver cannot be unbound.
Has anyone tested this scenario yet?
I am following this tutorial for single GPU passthrough (not a tutorial for Framework laptops specifically): https://www.youtube.com/watch?v=eTWf5D092VY
and at 6:49, in the step where you run lsmod | grep kvm, some lines are missing for me: the bottom row with irqbypass isn’t there.
(The tutorial uses an intel CPU but explains how to do this for AMD)
I looked through the BIOS for settings related to virtualization and couldn’t find any, which I assume is the reason for this difference.
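As a side note (my own addition, not from the thread): on AMD the virtualization extension shows up as the svm CPU flag, and KVM uses the kvm_amd module; to my understanding, irqbypass is pulled in by KVM/VFIO, so its absence is worth checking alongside the flag. A small sketch of the flag check, using a made-up sample flags line — on a real system you would grep /proc/cpuinfo instead:

```shell
# Hypothetical helper: check whether a CPU flags line contains a given flag.
has_flag() {
    printf '%s\n' "$1" | grep -qw "$2"
}

# On a real system: flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags='fpu vme de pse msr svm sse2'   # made-up sample line
has_flag "$flags" svm && echo 'AMD-V (svm) present'
```

If the flag is missing on real hardware, virtualization is likely disabled in firmware or masked by a kernel parameter.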
Continuing the tutorial anyway, I get an error at 12:02 when trying to change the Spice server to a VNC server, because chardev ‘spicevmc’ isn’t supported without spice graphics.
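For reference (an editor’s note, not from the original post): that error usually means the domain XML still contains a spicevmc channel after the graphics device was switched to VNC. A minimal sketch of the relevant fragment, edited via virsh edit:

```xml
<!-- graphics switched from spice to vnc -->
<graphics type='vnc' port='-1' autoport='yes'/>
```

In addition, any `<channel type='spicevmc'>` element has to be removed from the XML, since that device only works with Spice graphics.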
I can pass through my 780M without manually passing any vBIOS dumps. Here are the things that made it work. Tested on Fedora 42 with an Alpine Linux guest:
The VM must be configured to boot as UEFI (I’ve created it with --boot uefi,loader_secure=no)
The host must shut down its desktop environment, unload the amdgpu module, and disable the GPU’s D3cold state
The guest must load the amdgpu module during boot and have the amdgpu firmware package installed
The guest must have no display/video/sound/audio device entries
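The four requirements above can be sketched as a virt-install invocation. This is an editor-added sketch, not the original poster’s command: the VM name, memory, vcpu count, and disk path/size are placeholders, and the two PCI addresses are the ones listed further down in this post — check man virt-install for the exact options your version supports.

```shell
# Hedged sketch only; all values are placeholders except --boot,
# which is the flag quoted in the post above.
virt-install \
  --name gaming_AMDGPU_PASSTHROUGH \
  --memory 16384 \
  --vcpus 8 \
  --boot uefi,loader_secure=no \
  --disk path=/var/lib/libvirt/images/gaming.qcow2,size=64 \
  --graphics none \
  --hostdev 0000:c1:00.0 \
  --hostdev 0000:c3:00.4 \
  --import
```

Afterwards, double-check with virsh edit that no video/sound/audio devices remain in the XML (requirement 4).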
To automate step 2, create /etc/libvirt/hooks/qemu, make it executable and paste this:
#!/bin/sh
# libvirt qemu hook: called for every VM; only act on names
# ending in _AMDGPU_PASSTHROUGH.
guest="$1"
hook="$2"
state="$3"

case "$guest" in
    *_AMDGPU_PASSTHROUGH) ;;
    *) exit 0 ;;
esac

if test "$hook" = 'prepare' && test "$state" = 'begin'; then
    # Before the VM starts: forbid D3cold for the iGPU (0 = not allowed),
    # stop the desktop, and unload amdgpu.
    # Note: the PCI address is machine-specific.
    echo 0 > /sys/bus/pci/devices/0000:c1:00.0/d3cold_allowed
    systemctl stop display-manager
    modprobe -r amdgpu
elif test "$hook" = 'release' && test "$state" = 'end'; then
    # After the VM shuts down: reload amdgpu and bring the desktop back.
    modprobe amdgpu
    systemctl start display-manager
fi
Every VM whose name ends with _AMDGPU_PASSTHROUGH, e.g. gaming_AMDGPU_PASSTHROUGH, will start outside of your desktop environment. When you shut down such a VM, your desktop environment is started again.
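The suffix match the hook relies on can be checked in isolation. A small standalone sketch (the function name is made up for illustration):

```shell
# Hypothetical standalone check of the hook's name-matching logic.
matches_passthrough() {
    case "$1" in
        *_AMDGPU_PASSTHROUGH) echo yes ;;
        *) echo no ;;
    esac
}

matches_passthrough gaming_AMDGPU_PASSTHROUGH   # prints yes
matches_passthrough plain_vm                    # prints no
```

Note that a leading character before the underscore is required: a VM named just AMDGPU_PASSTHROUGH would not match the `*_AMDGPU_PASSTHROUGH` pattern.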
These two PCI devices are attached to my VM:
0000:c1:00.0 - Phoenix1 iGPU
0000:c3:00.4 - USB4 port connected to a dock, which has an external monitor, mouse and keyboard
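To find the corresponding addresses on your own machine, you can filter the output of lspci -Dnn (which prints full domain:bus:slot.function addresses). The helper name and the sample text below are made up for illustration:

```shell
# Hypothetical helper: pick display controllers out of `lspci -Dnn`-style
# output to find the address to pass through.
vga_lines() {
    grep -i 'vga compatible controller'
}

# On a real system you would run:  lspci -Dnn | vga_lines
# Made-up sample output, used here so the example is self-contained:
sample='0000:c1:00.0 VGA compatible controller [0300]: AMD Phoenix1
0000:c3:00.4 USB controller [0c03]: AMD Device'
printf '%s\n' "$sample" | vga_lines
```

The address printed there is what goes into the hook script’s d3cold_allowed path and into the VM’s hostdev entries.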
I’m not passing any framework sound devices to this VM, because my external dock has its own audio.
When booting the VM, the screens stay black for ~20 seconds until the guest shows up. Logs can be obtained via sudo cat /var/log/libvirt/qemu/gaming_AMDGPU_PASSTHROUGH.log.
The only thing that does not work yet is setting slow and fast power limits on the host via ryzen_smu. Passing the iGPU through seems to reset its state, so I’m looking for a solution right now. Without this I can’t really measure and optimize. I’m not using iommu=pt at the moment, but will try it once I get the SMU limits working.
I’ve fixed all my problems and am able to get VM performance indistinguishable from native. My ryzen_smu issues were caused by poor airflow and high room temperatures, which prevented the APU from boosting back to high wattage. I’ve also tested iommu=pt, but it had zero impact on my system.
What closed the performance gap between VM and native was:
Pinning CPU cores - this gets rid of 1% lows and frequent spikes in frame times
Setting my monitor’s resolution to my game’s resolution before launching it. I don’t know why scaling costs so much, but it gave me +7% fps
Here’s my CPU setup, where I assign the last 4 cores (8 threads) of my 7840U to the VM: