Anybody tried running image generation (e.g. Stable Diffusion XL, 3.5 or similar) on Linux?

So far I’ve only checked that things run and observed system behavior under load (mainly the dreaded PSU noise); I haven’t done any performance testing or optimization. I’m on a pretty standard Arch setup, by the way.

I only use ComfyUI, and it ran my SDXL and Flux test workloads without crashing. I’ll have to dig deeper in my spare time and run more complex workloads to make use of the memory. I have little experience with other frontends and don’t plan to install them. On Linux with ROCm, installation is a bit more technical than a packaged Windows-with-Nvidia setup, but you’ll end up with a neat Python virtual environment. You’ll need python, pip and git.
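If you want, here’s a quick sanity check for those prerequisites (a minimal sketch; the `have` helper is just for illustration):

```shell
# check that the prerequisites mentioned above are on the PATH
have() { command -v "$1" >/dev/null 2>&1; }

for cmd in python pip git; do
  if have "$cmd"; then
    echo "found: $cmd"
  else
    echo "missing: $cmd"   # install via your package manager, e.g. pacman on Arch
  fi
done
```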

Make a venv and activate it.

python -m venv .comfy-venv
source .comfy-venv/bin/activate

Install the latest ROCm and torch packages from TheRock repo releases for the gfx1151 target.

ROCm

pip install \
  --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ \
  "rocm[libraries,devel]"

torch

pip install \
  --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ \
  --pre torch torchaudio torchvision

Check torch installation

python -c 'import torch; print(torch.cuda.is_available())'

Should return: True
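Beyond `is_available()`, you can print a few more details from the same venv (a hedged sketch; on ROCm builds `torch.version.hip` holds the HIP runtime version, while on CPU-only builds it is `None`):

```shell
python - <<'EOF'
import torch

print("torch:", torch.__version__)    # should be a ROCm build from TheRock repo
print("hip:", torch.version.hip)      # HIP runtime version on ROCm builds
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
else:
    print("no GPU visible to torch")
EOF
```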

Install ComfyUI using the manual installation method from the ComfyUI docs.

Clone the repo

git clone https://github.com/comfyanonymous/ComfyUI.git

GPU dependencies should already be covered by TheRock packages installed above.
Move into the ComfyUI directory and install the remaining dependencies

cd ComfyUI
pip install -r requirements.txt

Run ComfyUI

python main.py

The server should start now. Check the terminal output: it should report available memory, the PyTorch version (a ROCm build from TheRock repo, for example 2.10.0a0+rocm7.10.0a20251015), the AMD arch (gfx1151) and the installed ROCm version. The device should be your Strix Halo GPU. The WebUI should now be available.

You may want to start with one of the example workflows; they will prompt you to download the correct model and usually include helpful notes on working with that model architecture. Run one.

You can check whether a process is using the GPU with the amd-smi command. To use it, your user has to be a member of the video and render groups.
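A quick way to verify that group membership (the `in_group` helper is mine, not part of amd-smi):

```shell
# check whether the current user is in the groups amd-smi needs
in_group() { id -nG | tr ' ' '\n' | grep -qx "$1"; }

for g in video render; do
  if in_group "$g"; then
    echo "ok: $g"
  else
    echo "not in $g -- add with: sudo usermod -aG $g $USER (then log out and back in)"
  fi
done
```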

Stop the server with Ctrl+C
Exit the venv with

deactivate

If you’d like to run ComfyUI again, activate the venv and run main.py.
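Something like this hypothetical helper function saves retyping those steps (it assumes the `.comfy-venv` venv and the `ComfyUI` checkout sit in the current directory):

```shell
# convenience launcher: activate the venv, enter the checkout, start the server
start_comfy() {
  . .comfy-venv/bin/activate || return 1
  cd ComfyUI || return 1
  python main.py
}
```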

Have fun!