Hi Guys,
after trying for 6 hours now, I am at my wit's end and would appreciate some help or pointers on what I did wrong.
I am fairly frustrated at this point.
What I did so far:
- I set up Automatic1111's Stable Diffusion WebUI and got it up and running on the CPU, but since I have a dGPU I obviously want to utilise that. So I followed the guide in the GitHub wiki, only to find out it no longer works since Arch moved to Python 3.12 in its official packages, which breaks things in Stable Diffusion that I was not able to fix.
=> If you have a guide for that, I would be happy to test it!
- I installed 3.10.6 (the version SD currently targets) via pyenv and set up a virtualenv. After some deleting and reinstalling, I finally got a "working" version with these commands inside the git repo:
pyenv install 3.10.6
pyenv virtualenv 3.10.6 webui
pyenv local webui
pip install wheel
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
export HSA_OVERRIDE_GFX_VERSION=11.0.0
./webui.sh
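In case it helps others reproduce this: inside the pyenv "webui" environment, this is roughly how one can check whether the ROCm torch build actually sees the dGPU before launching (a hedged sanity check assuming the rocm5.7 wheel installed correctly; the expected outputs in the comments are what I'd hope for, not confirmed results):

```shell
# Sanity check inside the "webui" virtualenv: does torch see the GPU at all?
python - <<'PY'
import torch
print(torch.__version__)              # a ROCm wheel should report a version ending in "+rocm5.7"
print(torch.cuda.is_available())      # torch.cuda maps to HIP on ROCm builds; this should be True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should name the AMD dGPU
PY
```

If `is_available()` comes back False here, the gray images would likely be the least of the problems.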
- I tried different models from Hugging Face and Civitai, and all results were gray images…
- I tried adding/toggling the following options in different combinations:
- --precision full
- --no-half
- --upcast-sampling
- --opt-sub-quad-attention
- --no-half-vae
- --medvram
The result was either the same gray image or this error message:
torch.cuda.OutOfMemoryError: HIP out of memory
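For the HIP out-of-memory part specifically, one thing I'd still try (hedged, since I haven't confirmed it fixes this): PyTorch's ROCm builds read the caching-allocator settings from `PYTORCH_HIP_ALLOC_CONF` (the HIP counterpart of `PYTORCH_CUDA_ALLOC_CONF`), so a launch that tries to reduce fragmentation might look like:

```shell
# Hedged attempt at the OOM half: tune the HIP caching allocator before launch.
# PYTORCH_HIP_ALLOC_CONF is the ROCm counterpart of PYTORCH_CUDA_ALLOC_CONF;
# the threshold/split values below are illustrative, not tested on this card.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export PYTORCH_HIP_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:128
./webui.sh --medvram --no-half-vae
```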
Does anyone have any pointers or tips?
It would be greatly appreciated!