Anyone using Proxmox VE?

I am keeping my progress on this here; at the moment I’ve created an Ansible role which should cover everything: GitHub - paolomainardi/framework-desktop-proxmox-iac (Ansible, Terraform and other scripts used in a lab environment; use it at your own risk).
I am not using the warm-up procedure because, at least for me, it has the same defect: it leaves the GPU in a broken state when stopped.


I upgraded Proxmox on my machine and GPU passthrough broke. The 6.17.2-2-pve kernel seems to be the issue; I reverted to 6.14.11-4-pve and it’s working again.
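
In case it helps others, reverting is mostly a matter of booting and pinning the known-good kernel. A minimal sketch with proxmox-boot-tool (the version string is the one from this post; check `proxmox-boot-tool kernel list` on your own host first):

```bash
# List the kernels proxmox-boot-tool knows about
proxmox-boot-tool kernel list

# Pin the known-good kernel so it stays the default across upgrades
proxmox-boot-tool kernel pin 6.14.11-4-pve

# Sync the boot entries and reboot into the pinned kernel
proxmox-boot-tool refresh
reboot
```

`proxmox-boot-tool kernel unpin` undoes the pin once a fixed kernel lands.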

Thanks for the hint. In step five I explicitly mention that I extracted this vBIOS file myself (since I was confused about the vBIOS files going around). I did not download it from some page, and to make sure people do not get the wrong one, I posted the SHA-256 sum.
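
For reference, a common way to dump a vBIOS from a running Linux host and check it against a published hash looks roughly like this (only a sketch, and not necessarily the exact extraction method behind the posted file; the PCI address is a placeholder, find yours with `lspci`):

```bash
# Placeholder PCI address; find the iGPU with: lspci -nn | grep -i vga
GPU=0000:c5:00.0

# Expose the ROM via sysfs, dump it, then hide it again
echo 1 > /sys/bus/pci/devices/$GPU/rom
cat /sys/bus/pci/devices/$GPU/rom > vbios.rom
echo 0 > /sys/bus/pci/devices/$GPU/rom

# Verify against the published checksum
sha256sum vbios.rom
```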

Also @AquaMorph: I can confirm that 6.17.2-2-pve is not working.


Hey,
I am trying out your scripts to compare performance in a VM vs. LXC.
A few remarks:

  1. The network configuration could be improved and made more intuitive:
    By default, it uses 10.0.0.1 as the gateway and 10.0.0.<LXC-ID> as the IP. Some more hints on how this relates to the existing vmbr0 bridge would be appreciated, or a setup that reflects the already-configured network. Also, for any LXC ID above 255 this produces invalid IPs (see the sketch after this list).
  2. Is there any reason why the “old script” (option 030) exists in parallel to option 031?
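
To illustrate point 1 (this is just my reading of the default behaviour, with placeholder template and IDs, not the script’s exact code): the container ID ends up as the last IPv4 octet, which breaks for IDs above 255.

```bash
# Default scheme as I understand it: 10.0.0.<LXC-ID>/24 on vmbr0, gateway 10.0.0.1
ID=101
pct create $ID local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname test-$ID \
  --net0 name=eth0,bridge=vmbr0,ip=10.0.0.$ID/24,gw=10.0.0.1

# With ID=300 the same scheme would render ip=10.0.0.300/24, which is not a valid address
```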

Any chance this will reach a more stable state? I’m still hesitant to get the Framework Desktop after reading the Proxmox-related topics, especially this reboot bug.

I’d like to replace my six-year-old desktop and use it for a couple of Linux and Windows VMs.

Simply waiting for Proxmox to ship a more recent kernel is likely the easiest solution.

Hey @beralt

Regarding 1)

Yes, that could be improved, e.g. by asking for a network prefix. It uses 10.0.0.X by default, that’s right; if you have higher IDs, just replace the prefixed IP if you don’t like it. PVE IDs start at 100 and 255 is the end of an IPv4 octet, so the current default would indeed write out invalid IPs for higher IDs. I’m not sure yet how best to change this (one possible mapping is sketched below); open to feedback or a PR.
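
One idea (just a sketch, not what the script does today) would be to spread the ID over the third and fourth octets of a /16, so any PVE ID up to 65534 maps to a valid address:

```bash
# Map a PVE ID into 10.0.X.Y so both octets stay within 0-255
ID=4711
IP="10.0.$(( ID / 256 )).$(( ID % 256 ))"
echo "$ID -> $IP/16 (gw 10.0.0.1)"   # 4711 -> 10.0.18.103
```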

Regarding 2)

I created the nailed-down AMD version with a specific ROCm release first and then generalized the script to cover both ROCm and NVIDIA. Once the generalized script is final, the AMD-only version can be removed, yes.

Has anyone seen this set of scripts?

They claim to create LXC/Docker containers on Proxmox v9 without GPU passthrough problems and without container crashes that require reboots…

Hoping someone WAY more knowledgeable than me has had, or could have, a look at these; all the usual caveats about running scripts found lying around apply…

:crossed_fingers: hopeful Desktop owner.

BTW, I found their site from issue comments on the widely referenced:


Anyone using a SATA PCIe card they can recommend?

Just got my rack-mount Framework set up. Has anyone here tried running this with kernel 6.17.4-2 yet? Wondering if I need to downgrade, as was the case with 6.17.2-2…

I’ve been on 6.17.4-1-pve. Pretty reliable for me.


6.17.4-2-pve here and it’s been pretty stable too.

Hoping they drop a 6.18 kernel soon, though, as it seems to have some relevant updates and works with the recently released ROCm 7.2, according to the Strix Halo Toolbox repo/comments.


Great work!

I think I managed to successfully pass the GPU through to my guest. I haven’t fully tested it with an LLM. Several dozen AI searches got me there before I found your post… perhaps trained on your post.

I’m using slightly newer versions of Proxmox, and Ubuntu 25.10 as the guest for its updated 6.17 kernel.

While the guest appears to load the APU correctly, I can’t get it to take over the physical monitor. Can you or anyone verify that the physical monitor works with the guest and passthrough?

Just wondering if there’s a specific step I can try that would address this particular issue. I’ve added all the GRUB boot options you posted above to turn off the framebuffer on the host. I haven’t added the firmware files, but I’m not sure that’s needed with the newer Ubuntu 25.10 guest since it has the newer kernel.
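
For comparison, the host-side kernel parameters typically used to keep the host from grabbing the framebuffer look roughly like this; an illustration assembled from common Strix Halo passthrough guides, not necessarily the exact list from the post above:

```bash
# /etc/default/grub on the Proxmox host (illustrative; the exact set varies by guide and firmware)
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt initcall_blacklist=sysfb_init video=efifb:off video=vesafb:off"

# Regenerate the boot config and reboot for the options to take effect
update-grub
```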

I’m also going to add the GRUB options to the guest and try the bind/unbind steps.
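
The bind/unbind step on the host usually looks something like the following sketch (placeholder PCI address; it assumes the GPU is currently claimed by amdgpu and should be handed over to vfio-pci before starting the VM):

```bash
GPU=0000:c5:00.0   # placeholder; find yours with: lspci -nn | grep -i vga

# Release the device from the host's amdgpu driver
echo "$GPU" > /sys/bus/pci/drivers/amdgpu/unbind

# Hand it to vfio-pci for the VM
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override
echo "$GPU" > /sys/bus/pci/drivers_probe
```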

Monitor passthrough has worked for me. Try setting Display to ‘none’ in the Hardware tab of the VM, which also disables noVNC console access. Also, auto-start the VM on Proxmox host boot. It’s finicky, though, and not a priority for me; for instance, after display sleep the monitor never comes back, so you must disable display sleep in the guest OS if you’re using a desktop environment.
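
For reference, the same two settings from the CLI on the host would be roughly (VMID 100 is a placeholder):

```bash
# Disable the emulated display (this also disables the noVNC console) and start the VM on host boot
qm set 100 --vga none --onboot 1
```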

For anyone running Proxmox on a Strix Halo, this custom kernel is a godsend: https://github.com/jaminmc/pve-kernel. I’m running the latest 6.19.x kernel and am finally able to use the GPU for ONNX in Immich. The latest 7.0 kernel is also reported to run smoothly by at least one other person; I will try it myself soon, and that should bring full NPU support with FastFlowLM on Linux (with everything running inside an LXC instead of a VM, there’s no need to pass the whole GPU through to a single VM anymore).
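
For anyone curious what the LXC route looks like: sharing the render node and /dev/kfd into a container is typically done with entries like these in /etc/pve/lxc/<CTID>.conf. A rough sketch for a privileged container; device majors and required groups vary, so check `ls -l /dev/dri /dev/kfd` on your host first.

```
# /etc/pve/lxc/<CTID>.conf (illustrative; majors/minors differ per host)
lxc.cgroup2.devices.allow: c 226:* rwm   # /dev/dri (card*/renderD*)
lxc.cgroup2.devices.allow: c 238:* rwm   # /dev/kfd, verify the major number on your host
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/kfd dev/kfd none bind,optional,create=file
```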

Using the more modern Incus/IncusOS to run LLMs in a VM.