I’m only listing the distros that are currently listed as supported by Framework for the Desktop, and only Linux. If you have further info about why you chose what you did and/or pros & cons of each, that’d be great. Feel free to comment with other OSes, but I’m mostly interested in comparing these three.
Why the poll? I was reading the blog article “Getting started with the basics of LLMs”, which said “Note that as of December 2025, inference runs about 20% faster on Fedora 43 than on Windows 11.” I’ve usually preferred Debian-based distros, and this had me considering Fedora again, so I looked at the “Fedora 43 Installation Guide” for the Desktop.
There it reminded me that Fedora “follows a fairly aggressive update policy on new kernels. This means that if you have the most recent generation of hardware, there is a higher risk that a kernel update could have a driver regression.” I then remembered that can be a pain. It said “To avoid this risk altogether, you can use a more conservative distro like Ubuntu LTS.” However, the only Ubuntu version listed as supported (for the desktop) is 25.10, which is not LTS…
Not being a gamer, I wasn’t familiar with Bazzite, but after looking into it, its image-based updates and easy rollbacks seem like they might mitigate those risks of Fedora (even though Bazzite is based on Fedora), and it might be better for local LLMs and other AI.
So… what do you think? Which of these 3 is best for local LLMs and other AI on the FW Desktop AMD Ryzen™ AI Max+ 395 - 128GB (or whichever desktop you’re using) while minimizing the potential breakage factor of the aggressive updates (or having an easy way to roll back in case of said breakage)?
The best tool is the one you know how to use, to get the desired result.
Ubuntu, Fedora, and Bazzite are all “fine”, if they let you do your work. The asterisk is whether you want to squeeze every last token/second of performance out. If AI is your work, and tokens/second equals your paycheck, then a more stripped-down, user-centric, advanced Linux is likely warranted.
With the default model in LM Studio I get 65 tokens/second on Bazzite (which I use because I get home from work, play games, and tinker in LM Studio); I’ve read of people getting 70 on CachyOS or Arch. Does an extra ~8% performance per query actually matter to you?
I’ll add some info here too, but not vote, since my OS isn’t in the list. I went with EndeavourOS; it’s Arch-based, BTW. I chose that because I use it as my daily driver on my other computers, and the AUR is indispensable to me.
So far I have not had any issues with the OS itself or the hardware. Everything boots fine, all hardware is detected, and the full RAM pool shows up in tools like btop and free. I do have issues with ROCm and Ollama specifically: when trying to run Ollama with ROCm support, the GPU is not seen and Ollama falls back to CPU-only mode. It works fine when running with Vulkan support, though. I think this is more of a failure with Ollama and ROCm than with the hardware or OS.
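For anyone else hitting this, it’s worth checking whether the ROCm userspace even enumerates the GPU before blaming Ollama. Here’s a minimal Python sketch of that check (it assumes the `rocminfo` tool from the ROCm packages is installed; the exact gfx target string for this APU is something you’d confirm from your own output):

```python
import shutil
import subprocess

def rocm_sees_gpu() -> bool:
    """Run rocminfo and report whether any gfx (GPU) agent is listed."""
    if shutil.which("rocminfo") is None:
        print("rocminfo not found -- is the ROCm userspace installed?")
        return False
    result = subprocess.run(["rocminfo"], capture_output=True, text=True)
    if result.returncode != 0:
        print("rocminfo failed:", result.stderr.strip())
        return False
    # GPU agents appear with a Name line such as "gfx1151" (the exact
    # target for this APU is an assumption -- check your own listing).
    gfx_lines = [line.strip() for line in result.stdout.splitlines() if "gfx" in line]
    print("\n".join(gfx_lines) or "no gfx agents listed")
    return bool(gfx_lines)

if __name__ == "__main__":
    rocm_sees_gpu()
```

If a gfx agent shows up there but Ollama still drops to CPU, that points at Ollama’s ROCm support rather than the driver stack or the hardware; if nothing shows up, the ROCm install itself is the problem.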
Tactical Finesse has the right idea too, use what you know. Practically every Linux will work with this machine and AI in general. Do you really need the most top tier experience, or can you handle a good enough experience and be familiar with the environment?
So this is day 2 of my Framework and I took the plunge with Fedora 43 after only ever having used Ubuntu. Happy so far, but installing Torch as I write this, so let’s see how it goes!
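Once Torch finishes installing, I’ll run a quick check like this to confirm it actually sees the GPU (a minimal sketch, assuming the ROCm build of PyTorch rather than the default CPU-only wheel):

```python
import torch

# On the ROCm build of PyTorch, the HIP backend is exposed through the
# torch.cuda API, so these calls report the AMD GPU.
print("HIP/ROCm version:", torch.version.hip)      # None on a CPU-only or CUDA build
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Tiny smoke test: run a matmul on the GPU.
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul result shape:", (x @ x).shape)
```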