After trying for 6 hours now, I'm at my wit's end and would appreciate some help or pointers on what I did wrong…
I'm fairly frustrated at this point.
What I did so far
I set up the Automatic1111 Stable Diffusion WebUI and got it up and running on the CPU, but since I have the dGPU I obviously want to utilise that. So I followed the guide in the GitHub wiki, only to find out that it no longer works since Arch moved to Python 3.12 in its official packages, which breaks things in Stable Diffusion that I was not able to fix.
=> If you have a guide for that, I'd be happy to test it!
I added 3.10.6 (the version SD currently targets) via pyenv and set up a virtualenv. After some deleting and reinstalling, I finally got a "working" version with those commands, run inside the git repo.
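For anyone following along, the pyenv + virtualenv steps above look roughly like this (a sketch, assuming pyenv is already installed and the repo directory is named `stable-diffusion-webui`):

```shell
# Sketch: pin Python 3.10.6 for the webui checkout via pyenv.
pyenv install 3.10.6              # build and install the interpreter
cd stable-diffusion-webui
pyenv local 3.10.6                # writes .python-version for this repo only
python -m venv venv               # webui.sh picks up ./venv by default
source venv/bin/activate
python --version                  # should report Python 3.10.6
```

With the venv active, `./webui.sh` should reuse it instead of creating one against the system's Python 3.12.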
I haven't tested Stable Diffusion recently… and I don't have the dGPU.
On the Framework 16 with a dGPU, the most difficult part is targeting/using the right GPU. It used to be impossible to use an AMD dGPU alongside an APU with ROCm… I think it's better now… but try to use at least ROCm 6.1: https://pytorch.org/ => pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
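After installing that wheel, it's worth confirming the build actually has HIP support and can see the GPU before launching the webui. A quick check (assumes `python3` points at the venv's interpreter):

```shell
# Sketch: verify the installed torch is a ROCm/HIP build and sees a GPU.
python3 -c '
try:
    import torch
    print("torch", torch.__version__)
    print("HIP runtime:", torch.version.hip)   # None on CPU/CUDA builds
    print("GPU visible:", torch.cuda.is_available())
except ImportError:
    print("torch not installed")
'
```

If `HIP runtime` prints `None`, the CPU/CUDA wheel got installed instead of the ROCm one.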
Oh yeah, I tried the latest version, 6.1, but also 5.7 and 5.6 of ROCm.
Only 5.7 seems to work, or even boot Stable Diffusion at all. The other versions just plainly crash with a segmentation fault or an error message about the wrong instruction architecture.
Might be possible to find some Vulkan backend?
I’ve had good luck with those as opposed to fiddling with getting Intel’s OpenVINO to work for llama.cpp!
ROCm was an utter pain to deal with on my 5700 XT; I pity you for having to go through the same experience.
With HSA_OVERRIDE_GFX_VERSION set to "11.0.0", I have not seen any "wrong instruction architecture" messages with any version of ROCm. But in my case there's no potential dGPU/APU confusion. Maybe that's what is happening on your FW16?
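Combining the override with explicit device selection might rule out the dGPU/APU confusion. A hedged sketch (the device index below is an assumption — run `rocminfo` and check which agent is actually the dGPU on your machine):

```shell
# Sketch: spoof the ISA and hide the APU from HIP before launching.
export HSA_OVERRIDE_GFX_VERSION=11.0.0   # pretend to be gfx1100 (RDNA3)
export HIP_VISIBLE_DEVICES=0             # expose only one device; verify the index with rocminfo
# then launch as usual:
# ./webui.sh
```

If the APU is the one being hidden, generation should no longer pick the wrong instruction architecture.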
I had much better experience with InvokeAI and ROCm 5.6. Maybe give that a try?
I also got it working on the CPU and APU of my FW16, but that's not a solution compared to using the dedicated GPU.
The override you suggested works, to the point that it "generates" a grey picture every time… unrelated to the currently loaded model… so this would be a solution if you can give me a hint on how to avoid the grey results.
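Not the original poster, but grey/black outputs on AMD cards are commonly a half-precision (fp16) issue. A sketch of the A1111 launch flags others have used for this, set in `webui-user.sh` (the exact combination that helps varies by card):

```shell
# Sketch: half-precision workarounds for grey/black outputs on ROCm.
# Try the lighter option first:
export COMMANDLINE_ARGS="--upcast-sampling --no-half-vae"
# heavier fallback if images are still grey (slower, more VRAM):
# export COMMANDLINE_ARGS="--no-half --precision full"
```

`--no-half` disables fp16 entirely, while `--upcast-sampling` only upcasts the steps most prone to producing NaNs, so it costs less performance.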
I did give InvokeAI a try now; it loads the model but crashes during generation with no error message. This is kind of disappointing, but I think I'll try again in half a year. Maybe by then some new version will fix all the problems I'm having.
Sorry it's been disappointing so far. I don't have a FW16, but I do think people have been successful running SD on it. If you haven't seen it, take a look at this guide by @cepth, for example: Installing ROCm / HIPLIB on Ubuntu 22.04 - #2 by cepth