I did not get ollama to work on openSUSE Tumbleweed with the iGPU either. llama.cpp with Vulkan, on the other hand, worked just fine. ROCm gave me trouble on openSUSE too, but worked on Fedora 42 with the ready-made container. I could not see much of an advantage in using ROCm for the LLMs I am interested in, though, so I just stuck with Vulkan.
Don’t use ollama - they don’t support AMD properly. Just use llama.cpp directly - there are prebuilt Vulkan binaries, or you can compile it yourself with Vulkan enabled. Plus it will save you a lot of headaches down the road.
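To give an idea of what “use llama.cpp directly” looks like in practice: once you have llama-server running (from the Vulkan binaries or your own build), you can talk to its OpenAI-compatible endpoint yourself. Rough sketch - the port, model path and context size here are just placeholders for whatever you start the server with:

```python
import requests

# Assumes llama-server is already running locally, started with something like:
#   llama-server -m ./your-model.gguf -c 8192 --port 8080
# It exposes an OpenAI-compatible chat endpoint you can hit directly.
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```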
Ollama looks attractive for beginners, but there are so many hidden gotchas (like the 4096-token context window by default) that it will just lead to frustration later. I know, I’ve been there.
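A concrete example of the context gotcha: with llama.cpp you set the context size explicitly instead of silently getting a small default. A minimal sketch using the llama-cpp-python bindings (model path and sizes are placeholders, and the GPU offload only applies if your build has Vulkan or another backend enabled):

```python
from llama_cpp import Llama

# Context size is whatever you ask for, not a silent 4096-token default.
llm = Llama(
    model_path="./your-model.gguf",  # placeholder path
    n_ctx=8192,        # explicit context window
    n_gpu_layers=-1,   # offload all layers to the GPU if the backend supports it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this thread in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```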