Can anyone help me with this? I would like to run Ollama on my Framework Laptop 13 with an AMD Ryzen AI 9 HX 370 and AMD Radeon 890M graphics. The OS is Bazzite 42. The installation command from the Ollama website doesn't seem to work, so I'm obviously missing some details.
You’d need to give some more information than that.
This is likely what you’re looking for:
Ollama is in the Fedora 42 repos, although it's an older version that can't run newer models such as gemma3.
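If you do want that repo version anyway, on an rpm-ostree distro like Bazzite you'd layer it, something like this (assuming the package is just called `ollama`; verify before running):

```bash
# Hedged sketch: layering the Fedora-packaged Ollama on Bazzite.
# Assumes the Fedora package is named "ollama"; layered packages
# only take effect after the next boot.
rpm-ostree install ollama
systemctl reboot
```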
The Alpaca flatpak has its own Ollama add-on, which seems to be more up to date. The application crashes sometimes on my Framework 13, but that's what I went with for now.
Alpaca works well on my Framework 13. Did you have to install the Alpaca AMD support extension? I couldn’t find it in Bazzite’s app store (Bazaar). I am guessing that this is not needed with the AMD Ryzen AI 9 HX 370 CPU with integrated graphics, since Alpaca seems to work fine and my system monitor shows anywhere from 40-70 GB of RAM usage when running Gemma2 27B and Llama3 70B. Both of these models work, but they are a little too slow to be practical. I will try some smaller models and see how they work.
Thanks for your suggestion.
I have the Ryzen AI 5 340, so I only tried the smallest models available (1-8B range), but they run at satisfactory speeds. With the 9 HX 370 you will probably do well in the 8-20B range. I hope NPU support actually happens so I don't get stuck with an extra chiplet whose only possible application is MS Recall.
I did install the AMD support extension, but saw neither a noticeable improvement nor a spike in iGPU utilization. So I'm guessing for that you'd need to go with @ehsanj's answer.
The command-line install won't work for you because you are running an immutable distro; you need to run Ollama in a container. Either pull a container image from Docker Hub, or use toolbox or distrobox to create a container and then run the command-line install instructions you're following inside it. You can also create a Docker Compose file that combines Open WebUI with Ollama, which lets you interact with it in a browser; if the terminal is fine, just pull the Ollama container and go to town. See the sketch below for both routes.
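Something like this should work. The compose file below is only a sketch: the port mapping and volume name are my own choices, and the images are the official `ollama/ollama` and `ghcr.io/open-webui/open-webui` ones. Whether the 890M iGPU actually gets used depends on ROCm support for it, which is still hit or miss; on CPU alone it will still run.

```bash
# Route 1 (hedged sketch): distrobox + the standard Ollama install script.
distrobox create --name ollama-box --image fedora:42
distrobox enter ollama-box
curl -fsSL https://ollama.com/install.sh | sh   # the install command from ollama.com

# Route 2 (hedged sketch): Docker Compose with Open WebUI in front of Ollama.
# Ports and volume name here are illustrative assumptions.
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama            # official image; a :rocm tag exists if you want to try the iGPU
    volumes:
      - ollama:/root/.ollama        # persist downloaded models across restarts
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # browser UI at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama:
EOF
docker compose up -d
```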
Thanks to everyone who contributed. Alpaca works well for me. It can be installed as a flatpak from Bazaar in Bazzite. Ollama, the foundation for the Alpaca UI, is installed with a simple command in the terminal; instructions are provided by Alpaca. The AMD support extension doesn't seem to be needed with the AMD Ryzen AI CPU with integrated graphics. Per the system monitor, Alpaca uses up to 70 GB of RAM with some of the larger models. If you have enough RAM, you can run models in the 70B range; they are slow, but if you don't mind waiting a bit for a response, they work. Models under 20B work well with much faster response times. It's nice not to need a discrete graphics card to run these AI models.
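If you'd rather use the terminal than Bazaar, something like this should work (app ID from memory, so verify it first):

```bash
# Hedged sketch: installing Alpaca directly from Flathub.
# App ID is my assumption; confirm it with: flatpak search alpaca
flatpak install flathub com.jeffser.Alpaca
```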
I think Alpaca is a good choice for a newbie to Linux/Bazzite. My total experience using Linux/Bazzite is about 2 weeks. I'm still going to keep my Windows laptop, but I rarely need to use it anymore. I prefer my Linux/Bazzite laptop.
I have a Framework desktop pre-ordered (7th batch). I’m looking forward to installing Linux/Bazzite on that PC as well.
This is an up-and-coming alternative: Newelle - Your Ultimate Virtual Assistant (https://github.com/qwersyk/Newelle)
It's more feature-rich than Alpaca and has extensions for even more functionality. I'm still running small models entirely on CPU here, but it's fun for what it is.
The easiest solution is to use RamaLama. You can install it from brew or pip, then just run `ramalama serve <modelyouwant>` and it will sort out the rest, making the model available in a browser chat window and to anything that speaks the OpenAI API.
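Roughly like this (the model name and port below are just examples, not gospel; check `ramalama serve --help` on your version):

```bash
# Hedged sketch: install RamaLama and serve a model.
# "llama3.2" and port 8080 are assumptions for illustration.
pip install ramalama              # or: brew install ramalama

# Pulls the model on first run, then starts an OpenAI-compatible server
ramalama serve llama3.2

# From another terminal, anything that speaks the OpenAI API can talk to it:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello"}]}'
```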