Hi,
I would like to buy the Framework Laptop 13 with Ryzen™ AI 300 Series. I plan to install either Ubuntu or Fedora, and another important consideration is running an LLM locally (at most a 14B model). Do you have any benchmarks for various models like Gemma 3, the Llama series, or DeepSeek running on the Ryzen AI 300 series, so I can pick the right processor and RAM? What is the recommended software stack for running these LLMs locally on Linux?
Where exactly are we stuck? Is it the Linux kernel, the AMD driver, or a lack of support in inference engines like llama.cpp? Does ROCm support the NPU and iGPU, or a hybrid mode? And why did Framework choose Ryzen AI series processors over the Intel Core Ultra series?
Quite surprised to learn that AMD is favouring only Windows with its AI series processors, using tools like GAIA. I hope they expand support to GNU/Linux systems very soon. They should have done it the other way around, since most of the universities, schools, and enterprises I know of or work with use GNU/Linux for servers and BAU applications.
Linux now ships the amdxdna driver for the XDNA NPU (merged in kernel 6.14). However, popular inference engines like llama.cpp currently lack integration with it. Ideally, these applications and libraries would begin leveraging the amdxdna driver, either directly or through appropriate abstractions.
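In the meantime, the practical path on Linux is to run models on the iGPU (Vulkan or ROCm/HIP backend) or the CPU. Below is a minimal sketch, assuming llama-cpp-python built with a GPU backend and a quantized GGUF model you have already downloaded; the model path is a placeholder, and the /dev/accel check only confirms that the amdxdna driver loaded, not that any inference engine can actually use the NPU.

```python
import glob

from llama_cpp import Llama

# Sanity check: amdxdna exposes the NPU through the DRM accel subsystem,
# so a node like /dev/accel/accel0 should appear once the driver loads.
# This only proves the kernel driver is present; llama.cpp will not use it.
npu_nodes = glob.glob("/dev/accel/accel*")
print(f"NPU accel nodes: {npu_nodes or 'none found'}")

# Run the model on the iGPU instead. n_gpu_layers=-1 offloads every layer,
# provided llama-cpp-python was built with the Vulkan or ROCm/HIP backend.
llm = Llama(
    model_path="models/qwen2.5-14b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,
    n_ctx=4096,
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```

Whether the Vulkan or HIP backend performs better on the Ryzen AI iGPU is something you would have to benchmark yourself on your distro of choice.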
I don’t know about ROCm.
AMD SoCs are years ahead of the Intel equivalents. I don’t know about Framework’s choice.
To help strengthen Linux support for XDNA, consider upvoting relevant GitHub issues. This will signal demand and could encourage AMD to prioritize support more quickly.