What's the most user-friendly distro for easy AI install?

Ok so, a lot of issues here. But first: I'm using the 128GB 395 version, currently running Bazzite off a 250GB expansion drive, with a 4TB drive incoming fast.

So far Bazzite is cool :smiling_face_with_sunglasses:. I'm still new to Linux, so I'm having trouble with the AI side. LM Studio is working mostly OK, but I'm having trouble understanding basic things about RAM vs. VRAM and what the best BIOS setting is. Some say 96GB, others say 512MB.

My issue with LM Studio is that some of the chat bots run slow. It seems like it doesn't matter where my RAM is set to, and I've learned that not all models give me the CPU offload option. AI chats claim my 128GB is more than enough, but I'm having doubts, as I've already had a few failures.

Then there's the AI image and video generation software. I'm getting absolutely nowhere here. On my last attempt the AI told me Stable Diffusion with the AUTOMATIC1111 web UI would be the easiest, but I kept getting held back by an error with PyTorch. Not the right Python version, I think: I have 3.14 but needed 3.10 or 3.11. No matter what the AI chat gave me, I kept running into the same error and felt like I was in a loop. YouTube doesn't help for Linux, at least not as far as I've seen.
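(For anyone hitting the same wall: a minimal sketch of the usual workaround, assuming you're in the `stable-diffusion-webui` checkout and have a `python3.11` package available from your distro or a container. The AUTOMATIC1111 launcher respects a `python_cmd` override in `webui-user.sh`, or you can pre-create the venv with the interpreter you want:)

```shell
# Option A: tell the launcher which interpreter to use in webui-user.sh:
#   python_cmd="python3.11"

# Option B: create the venv yourself with a supported Python before first run
python3.11 -m venv venv        # 3.10 or 3.11, not 3.14
source venv/bin/activate
python --version               # confirm the venv's interpreter
./webui.sh                     # launcher reuses the existing venv
```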

My current nail in the coffin was the bot telling me that Distrobox was likely needed for me to install A1111 on my Bazzite OS. From my point of view, this was very annoying to read. The way I read it was: the Bazzite terminal is no good and you should have picked a different distro.
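(Edit: to show what the bot was suggesting, here's a rough sketch. Distrobox just gives you a mutable container on top of an immutable distro like Bazzite; an Ubuntu 22.04 image is handy because it ships Python 3.10. The container name `sd` is arbitrary, and GPU driver bits inside the container are a separate problem this doesn't cover:)

```shell
# create and enter an Ubuntu 22.04 container that shares your home dir
distrobox create --name sd --image ubuntu:22.04
distrobox enter sd

# inside the container: normal apt-based install, no layering on the host
sudo apt update && sudo apt install -y git python3.10-venv
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui && ./webui.sh
```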

So, my original question: what's the most user-friendly distro for an easy AI generation install? Or would it be easier for me to just try Distrobox? I just did some research on it, and from what I gathered, my IQ dropped by 10%.

On Linux you don’t need to set the VRAM reserve. Linux is smart enough to manage it automatically.

LLM performance entirely depends on:

  1. How you configured your LLM client: e.g. you need to use Vulkan to use the GPU

  2. What exact model you are running. Some perform faster than others, some give better output than others for certain tasks.

Go read about configuring LM Studio on their help pages – it's a popular enough client that there's lots of material on most topics. I suspect your issue is, like you said, that you're running on the CPU, not the GPU – which isn't a Linux or Framework problem, but an LM Studio configuration problem.
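A quick way to sanity-check this (assuming the vulkan-tools package is installed – names of the LM Studio menus below are from memory, so treat them as approximate):

```shell
# if your iGPU shows up here, a Vulkan-based runtime can see it
vulkaninfo --summary | grep -i deviceName
```

Then in LM Studio, make sure the Vulkan llama.cpp runtime is selected and raise the GPU offload layer count when loading the model; if offload is at 0, everything runs on the CPU no matter how much memory you have.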

I'm using Fedora (which Bazzite is based on) currently, and everything worked easily out of the box. I set the VRAM amount to 512MB in the BIOS and changed how much VRAM the system sees based on Jeff Geerling's instructions (I have it set to 108GB of VRAM). LM Studio worked completely fine, but I switched to plain llama.cpp because I needed RPC functionality, which LM Studio doesn't provide.
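For reference, a sketch of that approach: the guides raise the cap on GPU-addressable system memory (GTT) via kernel arguments to the ttm module. The value is in 4 KiB pages, so the number below (~105GiB) is just an example – size it for your own machine. On an atomic distro like Bazzite you'd set kernel args with rpm-ostree instead of editing GRUB directly:

```shell
# raise the GTT page limit and pool size (value = pages of 4 KiB)
sudo rpm-ostree kargs --append="ttm.pages_limit=27648000" \
                      --append="ttm.page_pool_size=27648000"

# reboot, then confirm the args took effect
cat /proc/cmdline
```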

As far as the slow model performance goes, I noticed that Llama-based models run extremely slowly, no matter the size, quantization, or llama.cpp settings I use. By contrast, Qwen-based models as well as GLM models work great. I'm guessing whatever architecture the Llama models use internally is more geared toward Nvidia-based systems (not sure if this is correct, but I don't have an Nvidia GPU to confirm, unfortunately).
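If you want to compare models apples-to-apples rather than eyeballing chat speed, llama.cpp ships a benchmark tool. The model filenames here are placeholders for whatever GGUFs you have; `-ngl 99` offloads all layers to the GPU so you're measuring GPU speed, not CPU fallback:

```shell
llama-bench -m llama-3.1-8b-q4_k_m.gguf -ngl 99
llama-bench -m qwen2.5-7b-q4_k_m.gguf   -ngl 99
# the pp512 (prompt processing) and tg128 (token generation) rows
# report tokens/sec for each model
```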