Ollama Error: 500 Internal Server Error

I did a clean install of Fedora 43 on my Framework Desktop, then installed ollama using the command listed on their website.

  • Linux framework 6.18.6-200.fc43.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Jan 18 18:57:00 UTC 2026 x86_64 GNU/Linux
  • ollama version is 0.15.1
  • 96 GB allocated to the GPU (via the BIOS, not kernel parameters); a quick way to double-check this is shown below.
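
For what it's worth, the allocation can be double-checked with the ROCm userspace tools, if installed (on these unified-memory APUs the GTT pool can matter too):

  rocm-smi --showmeminfo vram gtt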

But when I run ollama, even with the smallest model, I get the following error:

>ollama run llama3.2:3b
Error: 500 Internal Server Error: do load request: Post "http://127.0.0.1:44067/load": EOF

When I look at the journal output, it seems that ROCm reports an out-of-memory error.
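
For reference, the log below is the ollama unit's journal, captured with something along these lines (assuming the ollama.service unit name the install script sets up, which matches the entries):

  journalctl -u ollama.service -b --no-pager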

Jan 26 15:48:12 framework systemd[1]: Stopping ollama.service - Ollama Service...
Jan 26 15:48:12 framework systemd[1]: ollama.service: Deactivated successfully.
Jan 26 15:48:12 framework systemd[1]: Stopped ollama.service - Ollama Service.
Jan 26 15:48:12 framework systemd[1]: ollama.service: Consumed 7.880s CPU time, 2.3G memory peak.
Jan 26 15:48:12 framework systemd[1]: Started ollama.service - Ollama Service.
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.996+01:00 level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISI>
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.997+01:00 level=INFO source=images.go:473 msg="total blobs: 6"
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.997+01:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.997+01:00 level=INFO source=routes.go:1684 msg="Listening on 127.0.0.1:11434 (version 0.15.1)"
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.997+01:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
Jan 26 15:48:12 framework ollama[30932]: time=2026-01-26T15:48:12.998+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 44607"
Jan 26 15:48:13 framework ollama[30932]: time=2026-01-26T15:48:13.013+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 34633"
Jan 26 15:48:13 framework ollama[30932]: time=2026-01-26T15:48:13.028+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 42465"
Jan 26 15:48:13 framework ollama[30932]: time=2026-01-26T15:48:13.863+01:00 level=INFO source=runner.go:106 msg="experimental Vulkan support disabled. To enable, set OLLAMA_VULKAN=1"
Jan 26 15:48:13 framework ollama[30932]: time=2026-01-26T15:48:13.864+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33171"
Jan 26 15:48:14 framework ollama[30932]: time=2026-01-26T15:48:14.503+01:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=ROCm compute=gfx1151 name=ROCm0 description="AMD Radeon Gr>
Jan 26 15:48:16 framework ollama[30932]: [GIN] 2026/01/26 - 15:48:16 | 200 | 40.587µs | 127.0.0.1 | HEAD "/"
Jan 26 15:48:17 framework ollama[30932]: [GIN] 2026/01/26 - 15:48:17 | 200 | 96.062742ms | 127.0.0.1 | POST "/api/show"
Jan 26 15:48:17 framework ollama[30932]: [GIN] 2026/01/26 - 15:48:17 | 200 | 91.478991ms | 127.0.0.1 | POST "/api/show"
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.226+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --port 33157"
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.897+01:00 level=INFO source=server.go:429 msg="starting runner" cmd="/usr/local/bin/ollama runner --ollama-engine --model /usr/share/ollama/.ollama>
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.897+01:00 level=INFO source=sched.go:452 msg="system memory" total="31.1 GiB" free="26.2 GiB" free_swap="8.0 GiB"
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.897+01:00 level=INFO source=sched.go:459 msg="gpu memory" id=0 library=ROCm available="109.9 GiB" free="110.3 GiB" minimum="457.0 MiB" overhead="0 >
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.897+01:00 level=INFO source=server.go:755 msg="loading model" "model layers"=29 requested=-1
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.905+01:00 level=INFO source=runner.go:1405 msg="starting ollama engine"
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.906+01:00 level=INFO source=runner.go:1440 msg="Server listening on 127.0.0.1:33447"
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.910+01:00 level=INFO source=runner.go:1278 msg=load request="{Operation:fit LoraPath: Parallel:1 BatchSize:512 FlashAttention:Disabled KvSize:409>
Jan 26 15:48:17 framework ollama[30932]: time=2026-01-26T15:48:17.930+01:00 level=INFO source=ggml.go:136 msg="" architecture=llama file_type=Q4_K_M name="Llama 3.2 3B Instruct" description="" num_tensors=255 num_>
Jan 26 15:48:17 framework ollama[30932]: load_backend: loaded CPU backend from /usr/local/lib/ollama/libggml-cpu-icelake.so
Jan 26 15:48:17 framework ollama[30932]: /opt/amdgpu/share/libdrm/amdgpu.ids: No such file or directory
Jan 26 15:48:18 framework ollama[30932]: ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
Jan 26 15:48:18 framework ollama[30932]: ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
Jan 26 15:48:18 framework ollama[30932]: ggml_cuda_init: found 1 ROCm devices:
Jan 26 15:48:18 framework ollama[30932]: Device 0: AMD Radeon Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32, ID: 0
Jan 26 15:48:18 framework ollama[30932]: load_backend: loaded ROCm backend from /usr/local/lib/ollama/rocm/libggml-hip.so
Jan 26 15:48:18 framework ollama[30932]: time=2026-01-26T15:48:18.498+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.>
Jan 26 15:48:18 framework ollama[30932]: ROCm error: out of memory
Jan 26 15:48:18 framework ollama[30932]: current device: 0, in function stream at //ml/backend/ggml/ggml/src/ggml-cuda/common.cuh:1248
Jan 26 15:48:18 framework ollama[30932]: hipStreamCreateWithFlags(&streams[device][stream], 0x01)
Jan 26 15:48:18 framework ollama[30932]: //ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu:94: ROCm error

Before the reinstall of Fedora 43 this was working perfectly. Does anybody have a clue what is going on?

Install ollama from the Fedora repos instead.
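
In case it helps, the switch looks roughly like this, assuming the Fedora package is simply named ollama and that the upstream script installed under /usr/local, as the log paths above suggest:

  # stop and remove the upstream install (unit path per Ollama's Linux docs)
  sudo systemctl disable --now ollama
  sudo rm -f /etc/systemd/system/ollama.service
  sudo rm -rf /usr/local/bin/ollama /usr/local/lib/ollama
  sudo systemctl daemon-reload
  # install and start the Fedora-packaged build
  sudo dnf install ollama
  sudo systemctl enable --now ollama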

I will try that and let you know if it solves it.

@Mario_Limonciello: That fixed it! :) Thanks for the tip.