Starting with ROCm 6.4.4, AMD has expanded (preview) support for PyTorch on Ryzen APUs.
For folks not yet on the latest ROCm (e.g., if you’re running Bluefin GTS, my distro of choice), AMD also publishes rocm/pytorch Docker images that can fully utilize the iGPU:
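As a rough sketch of how such a container might be started (the `latest` tag is just an example; pick whichever tag matches the PyTorch version you need), the ROCm device nodes have to be passed through to the container:

```shell
# Pull a ROCm + PyTorch image (example tag; check Docker Hub for the
# tag matching the PyTorch version you want).
docker pull rocm/pytorch:latest

# /dev/kfd is the ROCm compute interface; /dev/dri holds the GPU
# render nodes. Both must be visible inside the container.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  rocm/pytorch:latest /bin/bash
```

Depending on your ROCm version and iGPU, you may also need to set the `HSA_OVERRIDE_GFX_VERSION` environment variable inside the container; whether it's required for your particular APU is something to verify against AMD's compatibility notes.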
$ docker container exec -it beautiful_johnson /bin/bash
ubuntu@3c37a46b0878:/machine-learning-with-python$ source /opt/venv/bin/activate
(venv) ubuntu@3c37a46b0878:/machine-learning-with-python$ python
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>>
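One quirk worth knowing: ROCm builds of PyTorch expose the GPU through the familiar `torch.cuda` API, which is why `torch.cuda.is_available()` returns `True` above. A quick smoke test along these lines (falling back to CPU if no GPU is visible) confirms the device actually does work:

```python
import torch

# ROCm builds of PyTorch surface the iGPU via the torch.cuda API,
# so the usual CUDA-style device selection applies unchanged.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
if device == "cuda":
    # On a ROCm build this prints the AMD GPU's name.
    print(torch.cuda.get_device_name(0))

# Small matmul as a smoke test; runs on CPU too if no GPU is visible.
a = torch.randn(512, 512, device=device)
b = torch.randn(512, 512, device=device)
c = a @ b
print(c.shape)  # torch.Size([512, 512])
```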
Make sure you grab an image matching the exact PyTorch version you want here. You could use the image directly, of course, but preferably you’d create a devcontainer based on your chosen rocm/pytorch image for better tooling integration.
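A minimal devcontainer sketch might look like the following (the image tag is an example, and the `runArgs` mirror the device passthrough AMD documents for ROCm containers; adjust both to your setup):

```json
{
  "name": "rocm-pytorch",
  "image": "rocm/pytorch:latest",
  "runArgs": [
    "--device=/dev/kfd",
    "--device=/dev/dri",
    "--security-opt", "seccomp=unconfined",
    "--group-add", "video"
  ],
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```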
Finally, LM Studio works perfectly as well, via its Vulkan llama.cpp runtime (v1.52.0). I have it installed via the official AppImage (0.3.27 Build 4). With LM Studio, I’m getting 20–23 tok/sec (impressive!) using the openai/gpt-oss-20b model.
Here’s my system config:
Model: Laptop 13 (AMD Ryzen AI 300 Series) (A9)
CPU: AMD Ryzen AI 9 HX 370 (24) @ 5.16 GHz
GPU: AMD Radeon 890M [Integrated]
Memory: 96 GB; 32 GB dedicated to iGPU in BIOS
Distro: Bluefin-dx:gts (Version: gts-41.20250928)
Kernel: Linux 6.15.10-100.fc41.x86_64
So, if you need AI chops in a Framework Laptop 13 chassis, I wouldn’t hold off on the basis of supposedly “non-existent” local AI support: it’s perfectly usable for that use case right now, and it’s only going to improve from here on out. That said, this support was not available out of the gate, which is a concerning path many vendors seem to be taking (AMD and Apple come to mind).
I hope this helps.