Arch Stable-Diffusion Setup Help

Hi Guys,

After trying for 6 hours now, I am at my wit’s end and would like some help or pointers on what I did wrong…
I am fairly frustrated at this point :smiling_face_with_tear:

What I did so far

  1. I set up the Automatic1111 Stable Diffusion web UI and got it up and running on the CPU, but I have the dGPU, so obviously I want to utilise that. So I followed the guide in the GitHub wiki, only to find out that it no longer works since Arch moved to Python 3.12 in its official packages, which breaks stuff in Stable Diffusion that I was not able to fix.
    => If you have a guide for that, I would be happy to test it!
  2. I added Python 3.10.6 (the version SD currently targets) via pyenv and set up a virtualenv. After some deleting and reinstalling, I finally got a “working” version with these commands inside the git repo:
pyenv install 3.10.6
pyenv virtualenv 3.10.6 webui
pyenv local webui
pip install wheel
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
export HSA_OVERRIDE_GFX_VERSION=11.0.0
./webui.sh
  3. I tried different models from Hugging Face and Civitai, and all results were gray images…
  4. I tried adding/toggling the following options in different combinations with each other (a combined launch line is sketched after the list):
  • --precision full
  • --no-half
  • --upcast-sampling
  • --opt-sub-quad-attention
  • --no-half-vae
  • --medvram
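
For reference, a sketch of how those flags can be combined into a single launch (assuming the webui.sh entry point from step 2; the same arguments can also go into COMMANDLINE_ARGS in webui-user.sh, and the exact combination below is just one guess):
# sketch: one combined launch attempt; adjust the flag set as needed
export HSA_OVERRIDE_GFX_VERSION=11.0.0
./webui.sh --upcast-sampling --no-half-vae --opt-sub-quad-attention --medvram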

The result was the same grey image, or an error message:

torch.cuda.OutOfMemoryError: HIP out of memory
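
One thing worth checking after that error (a sketch, assuming the pyenv “webui” environment from step 2 is active): whether the ROCm build of torch actually sees the dGPU at all, and how much VRAM it reports.
# sketch: quick sanity check of the torch/ROCm setup
export HSA_OVERRIDE_GFX_VERSION=11.0.0
python -c "import torch; print(torch.__version__, torch.version.hip)"
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
python -c "import torch; print(torch.cuda.mem_get_info())"  # (free, total) VRAM in bytes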

Does anyone have any pointers or tips?
Would be greatly appreciated

I have not tested stable-diffusion recently… and I do not have the dGPU.
For the Framework 16 with the dGPU, the most difficult part is targeting/using the right one. It used to be impossible to use an AMD dGPU together with an APU with ROCm… I think it is better now… but try to use at least ROCm 6.1:
https://pytorch.org/ => pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1
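
If both the APU and the dGPU show up, a sketch of how to expose only the dGPU to ROCm (the device index depends on how they enumerate on your machine, so check it first):
rocm-smi                      # note which index is the dGPU
export HIP_VISIBLE_DEVICES=0  # assumption: the dGPU enumerates as device 0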

Oh yeah, I tried the latest version 6.1, but also 5.7 and 5.6 of ROCm.
Only 5.7 seems to work or even boot up Stable Diffusion at all. The other versions simply crash with a segmentation fault or an error message about a wrong instruction architecture.
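
The “wrong instruction architecture” part sounds like the kernel target not matching what the card reports; a sketch of how to check that (assuming rocminfo from the ROCm packages is installed):
rocminfo | grep -i gfx        # see which gfx ISA the dGPU actually reports
# the override then maps it onto the closest officially supported target,
# e.g. a gfx110x card onto the gfx1100 kernels:
export HSA_OVERRIDE_GFX_VERSION=11.0.0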

I have not tried this (I use llama.cpp :wink: ):

It may be a simpler way to get torch working…

Might it be possible to find some Vulkan backend?
I’ve had good luck with those, as opposed to fiddling with getting Intel’s OpenVINO to work for llama.cpp!
ROCm was an utter pain to deal with on my 5700 XT; I pity you for having gone through the same experience.
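
If you go that route, a quick way to confirm the dGPU is even visible to Vulkan first (a sketch, assuming vulkan-tools is installed):
vulkaninfo | grep -i deviceName   # the dGPU should be listed here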