| Topic | Replies | Views | Last activity |
|---|---|---|---|
| Compiling VLLM from source on Strix Halo | 7 | 214 | November 1, 2025 |
| Anyone using Proxmox VE? | 15 | 1167 | October 28, 2025 |
| Will the AI Max+ 395 (128GB) be able to run gpt-oss-120b? | 31 | 7520 | October 25, 2025 |
| Oss-gpt 120b large context stalls during llama.cpp checkpoints | 20 | 265 | October 23, 2025 |
| AMD ROCm does not support the AMD Ryzen AI 300 Series GPUs | 56 | 9032 | October 21, 2025 |
| Nanochat on strix halo | 0 | 125 | October 21, 2025 |
| Framework Laptop 13 eGPU Recommendations | 13 | 664 | October 20, 2025 |
| Ollama - Model Runner Unexpectedly Stopped (GPU Hang) | 8 | 321 | October 15, 2025 |
| Any small inconveniences with the Framework Desktop | 27 | 1356 | October 14, 2025 |
| AMD Strix Halo Llama.cpp Installation Guide for Fedora 42 | 15 | 1169 | October 8, 2025 |
| [SOLVED] ROCM and crashes | 0 | 110 | October 7, 2025 |
| ComfyUI in Windows | 1 | 257 | October 4, 2025 |
| AMD Strix Halo (Ryzen AI Max+ 395) GPU LLM Performance Tests | 17 | 9022 | September 29, 2025 |
| Setup with the desktop is done | 0 | 136 | September 28, 2025 |
| CashyOS (Arch) ollama / docker iGPU recognition | 3 | 269 | September 17, 2025 |
| Is the Framework Desktop suitable for this workload? | 3 | 369 | September 17, 2025 |
| AMD AI Max+ 395 128GB with cline | 14 | 1018 | September 5, 2025 |
| PyTorch w/ Flash Attention + vLLM for Strix Halo | 1 | 1067 | August 31, 2025 |
| What is the situation with Copilot+ on Frameworks with the Ryzen AI chips? | 6 | 427 | August 2, 2025 |
| Help Me Make Up My Mind (FW13 Ryzen AI 9 HX 370) | 18 | 2762 | July 11, 2025 |
| Cooling solution & some benchmarks | 6 | 662 | June 13, 2025 |
| LLM Performance | 26 | 5601 | June 11, 2025 |
| Framework Laptop 13 Ryzen 7040 - Ryzen AI NPU use cases | 7 | 701 | June 3, 2025 |
| Why is there no ryzen AI on the 16 | 16 | 1814 | October 21, 2025 |
| AI Performance | 4 | 1376 | April 13, 2025 |
| Ollama 0.6.2 Released With Support For AMD Strix Halo | 0 | 880 | March 20, 2025 |