LLMs under Linux (LM Studio) not using GPU?

@Kai_Fricke
That's done like this (-ngl 999 offloads all model layers to the GPU, and --no-mmap loads the model fully into memory instead of memory-mapping it):
[maintenance@toolbx ~]$ llama-server -m ./models/qwen3-coder-30B-A3B/BF16/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf -ngl 999 --no-mmap --port 8080 --host 0.0.0.0
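Once the server is up, --host 0.0.0.0 exposes it on the network, and you can talk to it from any client that speaks the OpenAI-style chat API. A minimal Python sketch, assuming the default /v1/chat/completions endpoint and port 8080 from the command above:

```python
import json
from urllib import request

def build_chat_request(prompt, host="127.0.0.1", port=8080):
    """Build the URL and JSON body for llama-server's OpenAI-compatible
    chat endpoint (assumed default path: /v1/chat/completions)."""
    url = f"http://{host}:{port}/v1/chat/completions"
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, json.dumps(body).encode("utf-8")

def send_chat(prompt, host="127.0.0.1", port=8080):
    """Send a prompt to a running llama-server and return the reply text."""
    url, data = build_chat_request(prompt, host, port)
    req = request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

This is just a sketch against the standard endpoint; adjust host/port to wherever your server is listening.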

What is Toolbox?

Toolbox is a tool that creates containerized command-line environments using Podman. It allows you to have isolated development environments without affecting your host system.
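As a setup fragment, creating and entering such an environment looks roughly like this (container name and release are just examples, adjust to your system):

```shell
# Create a Fedora 42 toolbox container named "llm" (name is an example)
toolbox create --distro fedora --release 42 llm

# Enter the container; your home directory is shared with the host
toolbox enter llm
```

Inside the container you can install build dependencies and llama.cpp without touching the host system.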

Do you need a full guide on how to set up this environment?

PS:
I just created a new topic:
AMD Strix Halo Llama.cpp Installation Guide for Fedora 42
