Ubuntu 24.10 (Kernel 6.11)
Intel® Core™ Ultra Series 1
For those who are interested in running open-source AI models with Ollama on an Intel Core Ultra Series 1 Framework 13, I've put together a simplified guide. The original instructions are a little tricky to navigate, and there's one setting that degrades performance on the Series 1. So I've condensed everything into simple commands you can paste into your terminal.
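If you're curious what the guide is doing under the hood, the usual route for Ollama on Intel Arc graphics is Intel's IPEX-LLM build of Ollama running through oneAPI. Here's a rough sketch of that flow, assuming the pip-based ipex-llm[cpp] install and a standard oneAPI setup under /opt/intel/oneapi; the exact commands in the guide are the ones to actually paste:

```bash
# Rough sketch only: install the IPEX-LLM backend for Ollama in a clean environment
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

# Create a working directory (name is arbitrary) and link in the IPEX-LLM ollama binaries
mkdir ollama-arc && cd ollama-arc
init-ollama

# Point Ollama at the GPU and start the server
export OLLAMA_NUM_GPU=999            # offload all layers to the Arc GPU
export ZES_ENABLE_SYSMAN=1
export SYCL_CACHE_PERSISTENT=1
source /opt/intel/oneapi/setvars.sh  # assumes oneAPI at the default install path
./ollama serve
```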
In the screenshot, I'm running the model on Arc graphics and getting about 30 t/s with the llama3.2 model. CPU-only yields 20 t/s.
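If you want to sanity-check those numbers on your own machine, `ollama run` has a `--verbose` flag that prints timing stats (the "eval rate" line is the t/s figure), and `intel_gpu_top` from the intel-gpu-tools package shows whether the Arc GPU is actually doing the work:

```bash
# With the server running, generate a response and print timing stats;
# the "eval rate" line at the end is the tokens-per-second number
./ollama run llama3.2 --verbose

# In a second terminal, watch GPU utilization to confirm it isn't falling back to CPU
sudo apt install intel-gpu-tools
sudo intel_gpu_top
```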
I hope this helps someone.