[GUIDE] NixOS + Blackmagic RX 580/580X eGPU inside QEMU

I’ve been trying to run an Ubuntu virtual machine inside QEMU with a Blackmagic RX 580(X?) eGPU passed through to it as a PCIe device. It took a decent few weeks to get working, so I figured I’d post a guide here.

System specs:

  • Framework 13 AMD Ryzen 5 7640U, 32 GB RAM
  • Thunderbolt 4 connection w/ included cable
  • Linux kernel 6.12 (this matters)
  • Hyprland DE

Introduction

At first, I tried to use the eGPU directly to run Steam games within NixOS. The amdgpu kernel driver claimed it fine at first, but I could never get it to output anything: either no video signal at all, or a black screen that was present while I was in a TTY and disappeared once I started Hyprland. So I gave up trying to run it in NixOS and decided to pass it through to a VM instead. It’s more comfortable not running games directly on the host hardware anyway, so whatever.

  • Note 1: Running games with DRI_PRIME=1 gave horrible performance and I couldn’t figure out how to fix it. I don’t think it was a PCIe bandwidth issue though. I think my iGPU was rendering the actual images and just passing the processing off to the eGPU, and there were some weird processing issues there.
  • Note 2: amdgpu.dc=0 seemed to change something and instead of having output on the iGPU, it only output via the eGPU and performance was decent. It was having trouble outputting to my 1440p monitor though, so I gave up on that as well.
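If you want to test the offload theory yourself, comparing the OpenGL renderer string with and without PRIME offload is a quick check. A minimal sketch, assuming glxinfo from mesa-utils is installed; renderer_of is just a helper name I made up:

```shell
# renderer_of: pull the OpenGL renderer line out of glxinfo output.
renderer_of() { grep "OpenGL renderer" ; }

# Illustrative sample of the kind of line glxinfo -B prints:
printf 'OpenGL renderer string: AMD Radeon RX 580 Series\n' | renderer_of
```

On a real system, compare `glxinfo -B | renderer_of` against `DRI_PRIME=1 glxinfo -B | renderer_of`; if the second one prints the RX 580, the eGPU is at least being selected for rendering.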

Handing the GPU over to the vfio-pci driver

For the graphics card to be usable by a virtual machine, it can’t be bound to the default amdgpu driver; the vfio-pci stub driver needs to claim it at boot instead. After a lot of time messing around, here is the config that does just that:

let
  # RX 580 Blackmagic eGPU
  gpuIDs = [
    "1002:67df" # Graphics device ID
    "1002:aaf0" # Audio device ID

    # USB port IDs at the back of the card
    "8086:15ef"
    "1022:14ea"
    "8086:15f0"
    "1022:14ef"
  ];
in { pkgs, lib, config, ... }: {
  options.vfio.enable = with lib;
    mkEnableOption "Configure the machine for VFIO";

  config = let cfg = config.vfio;
  in {
    services.udev.extraRules = ''
      # Automatically authorize the specific Thunderbolt eGPU
      ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{vendor_name}=="Blackmagic Design", ATTR{device_name}=="eGPU RX580", ATTR{authorized}="1"
    '';
    
    boot = {
      initrd.kernelModules = [
        "kvm-amd"
        "vfio"
        "vfio_pci"
        "vfio_iommu_type1"
        "kvm"
      ];

      kernelParams = [
        # enable IOMMU
        "amd_iommu=on"
        "iommu=pt"
        "vfio-pci.disable_idle_d3=1"
        "pcie_acs_override=downstream,multifunction"
      ] ++ lib.optional cfg.enable
        # isolate the GPU
        ("vfio-pci.ids=" + lib.concatStringsSep "," gpuIDs);
    };
  };
}
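For reference, the lib.concatStringsSep call above just joins the gpuIDs list with commas, so the extra kernel parameter the specialisation adds should come out as follows (a shell re-creation of the same string, using the IDs from the list above):

```shell
# Mirror what lib.concatStringsSep "," gpuIDs produces in the Nix config.
ids="1002:67df 1002:aaf0 8086:15ef 1022:14ea 8086:15f0 1022:14ef"
param="vfio-pci.ids=$(printf '%s' "$ids" | tr ' ' ',')"
echo "$param"
# → vfio-pci.ids=1002:67df,1002:aaf0,8086:15ef,1022:14ea,8086:15f0,1022:14ef
```

After booting into the VFIO specialisation, you can confirm the parameter actually landed with `grep -o 'vfio-pci.ids=[^ ]*' /proc/cmdline`.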

Note - to have this actually do anything, you need to set vfio.enable = true.

I do it by creating a specialization that I can boot into:

specialisation = {
  "VFIO".configuration = {
    system.nixos.tags = [ "VFIO" ];
    vfio.enable = true;
  };
};

Once you have this configuration set up and reboot, the eGPU should be bound to the vfio-pci driver. You can check by running lspci -k. Look for:

64:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev c0)
        Subsystem: Blackmagic Design Device a14c
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu

If the kernel driver in use is vfio-pci, the configuration works.
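If you want to script that check, here is a small sketch that parses lspci -k output; driver_in_use is a helper name I made up, and the sample is the output shown above:

```shell
# driver_in_use: given `lspci -k` output on stdin, print the kernel
# driver currently bound to the Ellesmere (RX 580) VGA device.
driver_in_use() {
  awk '/Ellesmere/ {found=1} found && /Kernel driver in use/ {print $NF; exit}'
}

# Sample taken from the lspci output above:
sample='64:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev c0)
        Subsystem: Blackmagic Design Device a14c
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu'

printf '%s\n' "$sample" | driver_in_use
# → vfio-pci
```

On a real system, `lspci -k | driver_in_use` should print vfio-pci once the specialisation is active.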

Setting up the QEMU VM

I’m using Ubuntu for the guest OS, since it has so far worked the least badly. I have tried Windows, but since this graphics card has no official Windows support, you have to force drivers onto it and it’s a whole mess that kept destroying itself. Linux seems to somehow be fine(?) with just using the default kernel amdgpu driver.

You can do most steps of the VM creation as you would normally, except for the CPU configuration. Without this, my host machine would hard crash without even leaving journald logs. I suspect some thermal issue, which doesn’t really make sense, but I digress.

The CPU configuration that worked for me used a fixed core count rather than host-passthrough. I think the core count is the primary thing that matters, but not using host-passthrough may also be part of it.
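Roughly, in qemu-system terms (virt-manager sets the same thing through its CPU dialog), the idea is an explicit CPU model and a fixed topology instead of “Copy host CPU configuration”. This is a sketch of the shape, not my exact settings, and the numbers are placeholders:

```shell
# Sketch only: explicit CPU model + fixed topology instead of host-passthrough.
# Disk, network, and the vfio-pci hostdev are left out of this fragment.
qemu-system-x86_64 \
  -enable-kvm \
  -machine q35 \
  -cpu qemu64 \
  -smp 4,sockets=1,cores=4,threads=1 \
  -m 8G
```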

Don’t start the machine here yet.

Once the machine is created, attach ONLY the PCIe graphics card device. Don’t add the USB ports or the HDMI audio device; I’ve found them to cause problems.

I think disabling ROM BAR also matters. At least, it’s been disabled ever since I got the VM running stably.

Before starting the machine, make sure you also have a QEMU video device attached. During setup, the eGPU output doesn’t seem to be functional.

Once the installation finishes, add the proprietary GPU drivers. This step is probably optional, but again, it works in the weird hardware configuration I’ve found myself using. Then reboot the machine, remove the QEMU video device, and you should now hopefully have HDMI output from the graphics card.
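For the proprietary drivers inside the Ubuntu guest, I used AMD’s installer package; something along these lines (the package version and available usecases change between releases, so check AMD’s current instructions first):

```shell
# Inside the Ubuntu guest (sketch; the .deb filename will differ):
# 1. install AMD's amdgpu-install helper package downloaded from AMD,
# 2. then pull in the graphics usecase.
sudo apt install ./amdgpu-install_*.deb
sudo amdgpu-install --usecase=graphics
```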

Sometimes, it just doesn’t output anything for some reason. When that happens, what usually fixes it for me is:

  • Attaching a video device (usually VGA type)
  • Booting into the system
  • Shutting down
  • Either removing the device or setting its type to none
  • Booting up again

Repeat as many times as it takes until it finally “catches”. Also, make sure your host computer doesn’t enter sleep mode, as that is a whole other mess.
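For the sleep part, one way to keep the host awake for exactly as long as a command runs is systemd-inhibit, which ships with systemd and is therefore already present on NixOS:

```shell
# Block host sleep while virt-manager (and with it the VM session) is open;
# the inhibitor is released automatically when the command exits.
systemd-inhibit --what=sleep --why="eGPU VM running" virt-manager
```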

Conclusion

I’m not completely happy with how this project has turned out so far, but once it actually starts running, I’m able to play games with much higher performance on a mostly stable system. I’ll update this guide if I figure out anything else.

I hope this will be useful for at least one person out there. I don’t know how many of these Mac-focused cards are in circulation; I imagine not that many, especially among NixOS users. : P

Cool that you got it working in the VM. If you’re still interested in getting the eGPU working under Hyprland in the host OS I just added support for Hyprland to my all-ways-egpu script: GitHub - ewagner12/all-ways-egpu: Configure eGPU as primary under Linux Wayland desktops
In my testing setting up the script with Method 2+3 worked for making Hyprland use the eGPU.