Linux + ROCm: January 2026 Stable Configurations Update

For me, today:

On a clean Fedora 43 toolbox running on Silverblue 43 (so it should be the same on a clean Fedora 43 install):

sudo dnf install python3-torch python3-torchaudio python3-torchvision
# [...]
$ python3 -c 'import torch' 2> /dev/null && echo 'Success' || echo 'Failure'
Success
$ python3 -c 'import torch; print(torch.cuda.is_available())'
True

So, not that bad. Do you have a simple test to confirm it really works?

Edit:

git clone https://github.com/pytorch/examples.git
cd examples/mnist
python3 main.py

# looks like it works:
Train Epoch: 14 [57600/60000 (96%)]	Loss: 0.027002
Train Epoch: 14 [58240/60000 (97%)]	Loss: 0.007073
Train Epoch: 14 [58880/60000 (98%)]	Loss: 0.015837
Train Epoch: 14 [59520/60000 (99%)]	Loss: 0.009342

Test set: Average loss: 0.0287, Accuracy: 9909/10000 (99%)

As a footnote, my Framework Desktop is working stably with:

Ubuntu 25.10 (add amd_iommu=off to the kernel command line for a ~6% speedup - source)

Kernel 6.17.7 (installed via the mainline PPA)

Python 3.12 (install it with uv python install 3.12, then create a venv with python3.12 -m venv yourdir)

ROCm 7.1 (used for the torch, torchvision and torchaudio wheels installed in the venv)

InvokeAI from pip (well, uv) - ok

ComfyUI from GitHub - ok, but without flash-attention 2

Python torch commands - ok

Lemonade - ok

llama.cpp - ok; I downloaded the latest build from Releases · ggml-org/llama.cpp · GitHub - look under Linux > Ubuntu for the “Ubuntu x64 (Vulkan)” build
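If you script the download, the right asset can be picked out of the release file list by name. A minimal sketch in Python - the asset names below are hypothetical placeholders, so check the actual release page for the real ones:

```python
# Pick the Ubuntu x64 Vulkan build out of a llama.cpp release asset list.
# These asset names are illustrative only, not real release data.
def pick_vulkan_asset(assets):
    for name in assets:
        if "ubuntu" in name.lower() and "vulkan" in name.lower():
            return name
    return None

assets = [
    "llama-bXXXX-bin-win-cuda-x64.zip",
    "llama-bXXXX-bin-ubuntu-x64.zip",
    "llama-bXXXX-bin-ubuntu-vulkan-x64.zip",
]
print(pick_vulkan_asset(assets))  # -> llama-bXXXX-bin-ubuntu-vulkan-x64.zip
```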

For anyone running Fedora 43 on Framework Desktop who just wants to know what commands get you a working PyTorch project, I created a GitHub gist: PyTorch on Framework Desktop with Fedora 43 - 2026-01-31 · GitHub

Let me know if you have comments or improvements. Good luck and godspeed.


Well… I think there are too many components installed.

  1. For native Fedora: Fedora 43 is stable again on the latest upgrade;
    Fedora has built-in ROCm 6.4.4 support… :crossed_fingers:
sudo dnf install python3-torch python3-torchaudio python3-torchvision
# [...]
$ python3 -c 'import torch' 2> /dev/null && echo 'Success' || echo 'Failure'
Success
$ python3 -c 'import torch; print(torch.cuda.is_available())'
True
git clone https://github.com/pytorch/examples.git
cd examples/mnist
python3 main.py

# looks like it works:
Train Epoch: 14 [57600/60000 (96%)]	Loss: 0.027002
Train Epoch: 14 [58240/60000 (97%)]	Loss: 0.007073
Train Epoch: 14 [58880/60000 (98%)]	Loss: 0.015837
Train Epoch: 14 [59520/60000 (99%)]	Loss: 0.009342

Test set: Average loss: 0.0287, Accuracy: 9909/10000 (99%)

That’s all - no need to install the ROCm repos.
In that case it uses Python 3.14.

  1. Newer ROCm / older Python:
    you can use https://rocm.nightlies.amd.com/v2/gfx1151 like you did with your pyproject.toml… but do not also install ROCm from Fedora: they are two different ROCm releases (6.4.4 vs 7.2?) and they clash (the Python venv does not isolate correctly from the native ROCm…)
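One quick stdlib-only check for the venv side of that clash. Note it only tells you whether the interpreter is a real venv, not whether native ROCm libraries leak in through the loader path:

```python
import sys

def in_venv() -> bool:
    # Inside a venv, sys.prefix points at the venv directory,
    # while sys.base_prefix still points at the system Python.
    return sys.prefix != sys.base_prefix

print("running inside a venv:", in_venv())
```

Even with a proper venv, an LD_LIBRARY_PATH pointing at the native ROCm can still override the libraries bundled in the wheels.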

If you do not need pure ROCm 7+ (e.g. for a llama.cpp build…), do not use the Index of /rocm/el9/latest/main/ repos…
You can try a toolbox with FC44 to get ROCm 7.1 if you want.
Or use one of the TheRock toolboxes (or create one), but this needs the ROCm paths added:

Something like this:

  • Containerfile.therock-nightly :
# build:
# from repos: https://therock-nightly-tarball.s3.amazonaws.com

# - podman build -t therock-toolbox:7.11 -f Containerfile.therock-nightly --build-arg THEROCK_VER=gfx1151-7.11.0a
# - podman build -t therock-toolbox:7.12 -f Containerfile.therock-nightly --build-arg THEROCK_VER=gfx1151-7.12.0a

# create:
# - toolbox create --image therock-toolbox:7.11 therock-devel-7.11
# - toolbox create --image therock-toolbox:7.12 therock-devel-7.12

# enter:
# - toolbox enter <name> ...

#=======================================================================================================
FROM registry.fedoraproject.org/fedora-toolbox:43

# update OS
# + add a few packages for llama.cpp builds
RUN dnf -y --nodocs --setopt=install_weak_deps=False \
    install \
      curl tar xz git-lfs patch \
      make gcc-c++ cmake libcurl-devel ninja-build radeontop libatomic \
 && dnf clean all && rm -rf /var/cache/dnf/*

# find & fetch the latest Linux M.N.0rc tarball (gfx1151 / gfx110X-all)
WORKDIR /tmp
#ARG THEROCK_VER=gfx1151-7.11.0a
ARG THEROCK_VER=gfx1151-7.12.0a

RUN set -euo pipefail; \
    BASE="https://therock-nightly-tarball.s3.amazonaws.com"; \
    PREFIX="therock-dist-linux-${THEROCK_VER}"; \
    KEY="$(curl -s "${BASE}?list-type=2&prefix=${PREFIX}" \
      | grep -o "therock-dist-linux-${THEROCK_VER}[0-9]\{8\}\.tar\.gz" \
      | sort | tail -n1)"; \
    echo "Latest tarball: ${KEY}"; \
    curl -L --fail -o therock.tar.gz "${BASE}/${KEY}" \
    && mkdir -p /opt/rocm \
    && tar xzf therock.tar.gz -C /opt/rocm --strip-components=1 \
    && rm therock.tar.gz

# environment variables for ROCm
ENV ROCM_PATH=/opt/rocm \
    HIP_PLATFORM=amd \
    HIP_PATH=/opt/rocm \
    HIP_CLANG_PATH=/opt/rocm/llvm/bin \
    HIP_INCLUDE_PATH=/opt/rocm/include \
    HIP_LIB_PATH=/opt/rocm/lib \
    HIP_DEVICE_LIB_PATH=/opt/rocm/lib/llvm/amdgcn/bitcode \
    PATH=/opt/rocm/bin:/opt/rocm/llvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    LD_LIBRARY_PATH=/opt/rocm/lib:/opt/rocm/lib64:/opt/rocm/llvm/lib \
    LIBRARY_PATH=/opt/rocm/lib:/opt/rocm/lib64 \
    CPATH=/opt/rocm/include \
    PKG_CONFIG_PATH=/opt/rocm/lib/pkgconfig
    
# good for llama.cpp hip build for APU:
ENV GGML_CUDA_ENABLE_UNIFIED_MEMORY=ON

# make /usr/local libs visible without touching env
RUN echo "/usr/local/lib"  > /etc/ld.so.conf.d/local.conf \
 && echo "/usr/local/lib64" >> /etc/ld.so.conf.d/local.conf \
 && ldconfig

Note: this image provides only ROCm, no Python/PyTorch.
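The grep | sort | tail -n1 step in the Containerfile relies on the 8-digit YYYYMMDD suffix sorting lexicographically. The same selection logic, sketched in Python with made-up key names for illustration:

```python
import re

def latest_tarball(keys, ver="gfx1151-7.12.0a"):
    # The date suffix is YYYYMMDD, so the lexicographic max is the newest build.
    pat = re.compile(r"therock-dist-linux-" + re.escape(ver) + r"\d{8}\.tar\.gz")
    return max((k for k in keys if pat.fullmatch(k)), default=None)

# hypothetical S3 listing entries, for illustration only
keys = [
    "therock-dist-linux-gfx1151-7.12.0a20260110.tar.gz",
    "therock-dist-linux-gfx1151-7.11.0a20260131.tar.gz",
    "therock-dist-linux-gfx1151-7.12.0a20260131.tar.gz",
]
print(latest_tarball(keys))  # -> therock-dist-linux-gfx1151-7.12.0a20260131.tar.gz
```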

And if anyone wants to test something on Fedora 43, I advise using toolbox: it won’t break your main OS, and you get full OS isolation.


For vLLM there is a new build:

# in a venv ;)
pip install vllm==0.14.0+rocm700 --extra-index-url https://wheels.vllm.ai/rocm/0.14.0/rocm700

Note: I did not test it - I have never used vLLM, so I can’t say whether it works :wink:

:crossed_fingers:

What is the specific working configuration recommended by the video?

And while we’re at it, what’s the TLDR-equivalent acronym for “nope, not gonna watch a video”?

I make videos, but I also publish ALL documentation on GitHub, and there’s even a landing page dedicated to the toolboxes and the known working configurations:
