Ollama on the Framework Desktop thread

What models are you running? Share your configurations! Discuss!

I managed to get it to recognize the Framework Desktop GPU and load models properly with @Pattrick_Hueper's fork here

Running ollama in docker on our Framework Desktop using the GPU

I’m having fun experimenting!
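For anyone else trying the Docker route, here's a rough sketch of the kind of `docker run` invocation involved. The image tag, volume name, and the `HSA_OVERRIDE_GFX_VERSION` value are assumptions on my part (the linked fork may do things differently), so treat it as a starting point rather than the exact setup from the post:

```shell
# Hypothetical example: run an Ollama ROCm image with the AMD iGPU passed through.
# /dev/kfd and /dev/dri are the device nodes ROCm containers generally need.
# The gfx override for Strix Halo is an assumption -- check the fork's docs.
docker run -d \
  --device /dev/kfd --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=11.5.1 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```

The `-v ollama:/root/.ollama` volume keeps downloaded models around across container restarts, and port 11434 is Ollama's default API port.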


Not really Ollama-related, but over the last few days I've had lots of fun using ComfyUI to generate images with different models and settings :slight_smile:


I've been mostly running Gemma 27B. Really happy with it.

ministral-3-14b, straight from the Ollama library (ministral-3), with no mods to the prompt. I've been hitting it from my phone for the past week (using JHubi1/ollama-app on GitHub, an easy-to-use client for Ollama) and getting phenomenal results for question answering and design dialogs. It works well enough that I may have to convert some of these convos into Open-WebUI chats for long-term interaction.
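Clients like ollama-app just talk to Ollama's HTTP API, so anything on the network can hit the box the same way. A minimal sketch of that kind of request, assuming the server is reachable on the default port 11434 (the model name is copied from the post above):

```python
import json
import urllib.request

def build_chat_payload(prompt, model="ministral-3"):
    """Build the JSON body for a single non-streaming /api/chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON object instead of a token stream
    }

def chat(prompt, model="ministral-3", host="http://localhost:11434"):
    """Send one chat turn to an Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Point `host` at the desktop's LAN address instead of localhost and the same call works from any machine on the network.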