I’ve successfully run ComfyUI with PyTorch on AMD’s ROCm AI framework on my desktop, using Windows Subsystem for Linux (WSL) with a dedicated AMD Radeon RX 7900 XTX GPU, and I was curious to see how a laptop APU designed for AI workloads would compare. Sadly, I can’t get PyTorch to work on the Framework Laptop 13 with the AMD Ryzen AI 9 HX 370 (Radeon 890M) and 96 GB of system memory.
It turns out that AMD ROCm does not support the Radeon 890M. In fact, when support was requested, AMD pointed users to third-party patches! So if you were hoping to use your new AMD Ryzen AI 300 Series laptop with PyTorch, it’s not going to work. AMD’s marketing is misleading here: if you’re going to call something the “AI series,” it should work with your own AI framework.
rocminfo
WSL environment detected.
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen AI 9 HX 370 w/ Radeon 890M
Uuid: CPU-XX
Marketing Name: AMD Ryzen AI 9 HX 370 w/ Radeon 890M
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 49152(0xc000) KB
Chip ID: 0(0x0)
Cacheline Size: 64(0x40)
Internal Node ID: 0
Compute Unit: 24
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 48965100(0x2eb25ec) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 48965100(0x2eb25ec) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 48965100(0x2eb25ec) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 4
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 48965100(0x2eb25ec) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*** Done ***
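One aside about the pool sizes above: each GLOBAL pool reports 48965100 KB, which works out to about 46.7 GiB, roughly half of the machine’s 96 GB of RAM. That is consistent with WSL2’s default of giving the Linux VM about half of host memory (an assumption about this particular machine’s configuration; a .wslconfig file can change the limit). The conversion, as a quick sanity check:

```python
# rocminfo reports pool sizes in KB (really KiB); convert to GiB.
pool_kib = 48965100
pool_gib = pool_kib / (1024 * 1024)
print(f"{pool_gib:.1f} GiB")  # 46.7 GiB
```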
So, rocminfo does not detect the Radeon 890M as a GPU agent for ROCm; only the CPU shows up, as a CPU agent. This means the GPU is unusable in ComfyUI, or in any other AI application that uses ROCm (via PyTorch).
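You can check for this condition mechanically. Here is a minimal sketch (my own parsing, not part of any AMD tooling) that scans rocminfo output for the Device Type of each HSA agent; on this laptop the result contains only CPU, never GPU:

```python
def list_agent_types(rocminfo_output: str) -> list[str]:
    """Collect the 'Device Type' value of every HSA agent in rocminfo output."""
    types = []
    for line in rocminfo_output.splitlines():
        line = line.strip()
        if line.startswith("Device Type:"):
            types.append(line.split(":", 1)[1].strip())
    return types

# Abbreviated excerpt of the rocminfo output shown above.
sample = """\
Agent 1
Name: AMD Ryzen AI 9 HX 370 w/ Radeon 890M
Device Type: CPU
"""

print(list_agent_types(sample))            # ['CPU']
print("GPU" in list_agent_types(sample))   # False: no GPU agent for ROCm apps to use
```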
python3 -c 'import torch; print(torch.cuda.is_available())'
False
~/a/ComfyUI (master)> python main.py
Checkpoint files will always be loaded safely.
Traceback (most recent call last):
File "/home/sean/ai/ComfyUI/main.py", line 137, in <module>
import execution
File "/home/sean/ai/ComfyUI/execution.py", line 13, in <module>
import nodes
File "/home/sean/ai/ComfyUI/nodes.py", line 22, in <module>
import comfy.diffusers_load
File "/home/sean/ai/ComfyUI/comfy/diffusers_load.py", line 3, in <module>
import comfy.sd
File "/home/sean/ai/ComfyUI/comfy/sd.py", line 7, in <module>
from comfy import model_management
File "/home/sean/ai/ComfyUI/comfy/model_management.py", line 221, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
^^^^^^^^^^^^^^^^^^
File "/home/sean/ai/ComfyUI/comfy/model_management.py", line 172, in get_torch_device
return torch.device(torch.cuda.current_device())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sean/ai/ComfyUI/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 1026, in current_device
_lazy_init()
File "/home/sean/ai/ComfyUI/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init
torch._C._cuda_init()
RuntimeError: No HIP GPUs are available