AMD ROCm does not support the AMD Ryzen AI 300 Series GPUs

While I have the chance and you have the HX370, can you tell me which options the BIOS provides for setting the share of main memory assigned to the iGPU? Can you set the amount in GB or is it like 25%, 50% and so on?

ROCm is officially supported only on a handful of Linux distros. If you happen to be running a supported distro with the right versions of other dependencies, and can procure & install everything that the Docker image bundles, I don’t see any reason why it shouldn’t work on the main system directly.

That said, I probably still will run it through a container to keep my main system pristine (if only I cared as much about my work desk :sweat_smile:).


Off the top of my head, there are Auto, Medium, and High options in the BIOS, which allocate 0.5 GB, 16 GB, and 32 GB respectively to the iGPU. The mapping of Auto/Medium/High to numeric capacities likely depends on how much system memory you have. I have 96 GB of system memory, and my High BIOS setting allocates 32 GB to the iGPU. My understanding is that this amount is reserved as VRAM for the iGPU, but the system can go beyond that limit and use additional system memory for the iGPU as needed.
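One way to double-check what a given BIOS setting actually reserved is the amdgpu sysfs node mem_info_vram_total. A small sketch (the card index varies between systems, so it globs over all cards):

```shell
#!/bin/sh
# Report the VRAM carve-out the amdgpu driver sees, in GiB.
# mem_info_vram_total is a standard amdgpu sysfs attribute reporting
# the total VRAM size in bytes.
for f in /sys/class/drm/card*/device/mem_info_vram_total; do
  [ -r "$f" ] || continue
  bytes=$(cat "$f")
  gib=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / 2^30 }')
  echo "$f: $gib GiB"
done
```

With the Minimum BIOS option you'd expect this to report roughly 0.50 GiB, and 32.00 GiB with High.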


FWIW, you’re not limited to those knobs when it comes to APUs. Set the VRAM as small as possible and turn up the TTM page pool, and apps will use that instead.

There is an explanation of how to do this here:


I am not a fan of developing inside containers, but the fact that I can start a JupyterLab server inside the container and expose it on the host makes it much more usable for me.

Another day, another TIL; thank you!

In the BIOS, I ended up selecting the Minimum (0.5 GB) option (incorrectly referred to as Auto in a previous post) and set TTM to 64 GB (amd-ttm --set 64). However, it seems the maximum my 96 GB system would allow for TTM is 46.79 GB:

Local/opt/amd-debug-tools via 🐍 v3.13.7 (.venv)
❯ cat /etc/modprobe.d/ttm.conf
options ttm pages_limit=16777216

Local/opt/amd-debug-tools via 🐍 v3.13.7 (.venv)
❯ amd-ttm
💻 Current TTM pages limit: 12265438 pages (46.79 GB)
💻 Total system memory: 93.58 GB

This is ~50% of the system memory. Curiously, that’s the default TTM configuration anyway. So it would seem that installing amd-debug-tools and setting TTM explicitly is useful only if you want to set it to less than 50% of your system memory.
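The numbers line up if you assume the standard 4 KiB page size: the requested pages_limit=16777216 works out to exactly 64 GiB, while the effective 12265438 pages is 46.79 GiB, i.e. half of the 93.58 GB total. A quick sanity check of the conversion:

```shell
#!/bin/sh
# TTM sizes its pool in pages; convert both limits above to GiB,
# assuming 4 KiB pages (the standard x86-64 page size).
PAGE=4096
for pages in 16777216 12265438; do
  awk -v p="$pages" -v sz="$PAGE" \
      'BEGIN { printf "%d pages = %.2f GiB\n", p, p * sz / 2^30 }'
done
# 16777216 pages = 64.00 GiB  (what ttm.conf requested)
# 12265438 pages = 46.79 GiB  (the effective clamp, 50% of RAM)
```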

At any rate, this is absolutely amazing because I get to keep all my system memory and LM Studio is still able to use what it needs with GPU accel effectively working the same as before (with 32 GB VRAM allocation). The cool thing is that, unlike pre-allocated VRAM, I get that memory back once I eject a loaded LLM model.

As an important side note, GPU acceleration in LM Studio seems to be broken with Vulkan llama.cpp v1.52.0, so I’m running models with Vulkan llama.cpp v1.50.2, which works perfectly as noted before.

Once you discover devcontainers, there’s no going back :sweat_smile:.

I would say go into the BIOS and change it to the smallest size possible; then you should be able to set TTM higher.

The BIOS option is currently set to Minimum which allocates 0.5 GB (512 MB) to the iGPU. That’s the smallest allocation possible.

Hi,
I have made some more progress in this thread:

Specifically, setting the kernel parameter:

amdgpu.cwsr_enable=0

or adding a file in /etc/modprobe.d with:

options amdgpu cwsr_enable=0

seems to fix the GPU crash problems.
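For anyone wanting to make the workaround persistent, a sketch of the modprobe.d route (the file name is arbitrary; the initramfs rebuild command depends on your distro, and amdgpu is often loaded from the initramfs, so the rebuild step can matter):

```shell
# Persist the amdgpu workaround in a modprobe.d drop-in
# (the file name amdgpu-cwsr.conf is arbitrary).
echo "options amdgpu cwsr_enable=0" | sudo tee /etc/modprobe.d/amdgpu-cwsr.conf
sudo update-initramfs -u    # Debian/Ubuntu; on Fedora: sudo dracut --force
sudo reboot
```

After rebooting, you can confirm the parameter took effect with: cat /sys/module/amdgpu/parameters/cwsr_enable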