LLM Benchmark (AMD 7840u)

From Framework 16:

After discovering this post, I went ahead and ran the benchmark tool: GitHub - aidatatools/ollama-benchmark: LLM Benchmark for Throughput via Ollama (Local LLMs)

But I’ve been down a rabbit hole…

I set the VRAM allocation for the GPU to “Gaming”, but otherwise followed this post’s recommendations for Ollama settings: Quickstart Guide: Ollama With GPU Support (No ROCM Needed)

Are y’all using ROCm? If so, is there a good guide for configuring it? I can’t get any benchmark above 10 tokens/s.
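In case it helps anyone compare notes: the workaround I keep seeing suggested for RDNA3 iGPUs like the 780M is overriding the reported GFX target before launching Ollama. I haven’t verified this myself, and the exact override value (11.0.2 for the 780M’s gfx1103) is an assumption based on what other posts report:

```shell
# Assumption: the Radeon 780M reports gfx1103, which stock ROCm builds
# don't ship kernels for, so the usual workaround is spoofing a supported
# target (11.0.2, sometimes 11.0.0) before starting the Ollama server.
export HSA_OVERRIDE_GFX_VERSION=11.0.2

# Optionally pin the iGPU if multiple ROCm devices are present.
export ROCR_VISIBLE_DEVICES=0

ollama serve
```

If Ollama runs as a systemd service, these would need to go in the service’s environment instead of a shell session.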

Also, I get this message: error! when retrieving os_version, cpu, or gpu !

-------Linux----------

error! when retrieving os_version, cpu, or gpu !
{
  "mistral:7b": "9.01",
  "phi4:14b": "4.40",
  "gemma2:9b": "6.69",
  "llava:7b": "9.82",
  "llava:13b": "5.46",
  "deepseek-r1:8b": "7.74",
  "deepseek-r1:14b": "4.52",
  "uuid": "b1bd0b6b-4534-55a4-a4be-e219afe4fa76",
  "ollama_version": "0.13.5"
}
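For easier comparison, here’s a throwaway snippet (mine, not part of ollama-benchmark) that pulls the per-model numbers out of the results above and sorts them by throughput:

```python
import json

# Raw results pasted from the benchmark run above.
raw = """{
  "mistral:7b": "9.01",
  "phi4:14b": "4.40",
  "gemma2:9b": "6.69",
  "llava:7b": "9.82",
  "llava:13b": "5.46",
  "deepseek-r1:8b": "7.74",
  "deepseek-r1:14b": "4.52",
  "uuid": "b1bd0b6b-4534-55a4-a4be-e219afe4fa76",
  "ollama_version": "0.13.5"
}"""

results = json.loads(raw)

# Keep only the model entries, i.e. values that parse as numbers
# (skips "uuid" and "ollama_version").
speeds = {}
for key, value in results.items():
    try:
        speeds[key] = float(value)
    except ValueError:
        pass

# Fastest first.
for model, tps in sorted(speeds.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model:>18}  {tps:5.2f} tok/s")
```

Sorting makes it obvious that everything here tops out just under 10 tok/s, with the 13B/14B models roughly halving the 7B–9B numbers.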

====================

-------Linux----------
error! when retrieving os_version, cpu, or gpu !
{
  "system": "Linux",
  "memory": 27.200977325439453,
  "cpu": "AMD Ryzen 7 7840HS w/ Radeon 780M Graphics",
  "gpu": "unknown",
  "os_version": "\"Fedora Linux 43 (Workstation Edition)\"",
  "system_name": "Linux",
  "uuid": "b1bd0b6b-4534-55a4-a4be-e219afe4fa76"
}
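Side note on the doubled quotes in os_version: my guess (an assumption about the tool, which I haven’t read) is that it reads PRETTY_NAME straight out of /etc/os-release without stripping the surrounding quotes that the os-release file format uses. A minimal parse that handles that would look like:

```python
# Hypothetical helper (not from ollama-benchmark): read PRETTY_NAME from
# os-release-style text, stripping the quotes the file format wraps it in.
def pretty_name(text: str) -> str:
    for line in text.splitlines():
        if line.startswith("PRETTY_NAME="):
            return line.split("=", 1)[1].strip().strip('"')
    return "unknown"

sample = 'NAME="Fedora Linux"\nPRETTY_NAME="Fedora Linux 43 (Workstation Edition)"\n'
print(pretty_name(sample))  # Fedora Linux 43 (Workstation Edition)
```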