What models are you running? Share your configurations! Discuss!
I managed to get it to recognize the Framework Desktop GPU and load models properly using @Pattrick_Hueper's fork here.
We're running Ollama in Docker on our Framework Desktop, using the GPU.
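For anyone who wants to try something similar, here's a minimal Docker Compose sketch based on the upstream Ollama ROCm container setup. This is an assumption on my part, not our exact config: the image tag will differ if you're using the fork linked above, and the Framework Desktop's GPU may need extra tweaks on top of this.

```yaml
services:
  ollama:
    image: ollama/ollama:rocm   # swap in the fork's image if you need its GPU fixes
    devices:
      - /dev/kfd                # ROCm compute interface
      - /dev/dri                # GPU render nodes
    volumes:
      - ollama:/root/.ollama    # persist downloaded models across restarts
    ports:
      - "11434:11434"           # Ollama HTTP API
    restart: unless-stopped

volumes:
  ollama:
```

Then `docker compose up -d` and `docker exec -it <container> ollama run <model>` to test, as with any Ollama container.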
I’m having fun experimenting!