I am aware; I dabble in localllama, which is what prompted me to make this post. More VRAM will at least let me load a 70B model (although with some quantizations I can already load one on my 24GB desktop GPU today), but in theory this lets me test larger models, even if the tokens/second are a lot slower. My hope was that it would at least be faster than using the CPU + RAM alone, if this is possible.
With unreleased products in general, no one knows the implemented limitations until the product is in hand. With that, I’d say, hope for the best, plan for the worst.
There actually are two settings, but they're not clearly labeled.
I’m not at my Framework laptop right now, but there was something along the lines of UMA_AUTO set as the default, with UMA_GAME_OPTIMIZED as the alternative option. Auto dedicates 512MB to VRAM (out of the 64GB I had installed); Game Optimized dedicates 4GB out of the 64GB. Not sure whether this option changes anything else besides the VRAM split, though.
I’ve been wondering the same thing. It would be nice to have that support in the BIOS.
It does seem to be true that it will dynamically allocate RAM for the GPU as needed (up to half of system RAM, maybe?), but a lot of the tools query the available memory up front and fail.
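To illustrate what I mean by "query up front" (this is my own sketch, not taken from any particular tool), the check usually boils down to something like this HIP call, which on an APU with the default BIOS setting only reports the small dedicated carve-out rather than what the driver could later spill into GTT:

```cpp
// check_vram.cpp -- the kind of up-front memory probe many tools do before
// loading a model. Build with: hipcc check_vram.cpp -o check_vram
// (assuming a ROCm/HIP toolchain is installed).
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    if (hipMemGetInfo(&free_bytes, &total_bytes) != hipSuccess) {
        std::fprintf(stderr, "hipMemGetInfo failed\n");
        return 1;
    }
    // To my understanding, this reflects the dedicated VRAM carve-out
    // (e.g. 512 MB on "Auto"), not the GTT memory the driver can borrow,
    // so a tool that compares this number against the model size just gives up.
    std::printf("free: %.2f GiB, total: %.2f GiB\n",
                free_bytes / 1073741824.0, total_bytes / 1073741824.0);
    return 0;
}
```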
On the plus side, enough people are interested in APUs that they're working on shared RAM issues. I saw this the other day:
The code is tiny, and I saw one report that it worked on a 6800HS, but it's definitely a bit of a hack, and it's specific to PyTorch.
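If it's the LD_PRELOAD-style shim I think it is, the general idea (a minimal sketch of that approach from me, not the project's actual code; the file name and build/run commands are made up) is to intercept hipMalloc and hand back pinned host memory instead, which on an APU is ordinary system RAM the GPU can reach:

```cpp
// gtt_shim.cpp -- illustrative sketch of the LD_PRELOAD approach, not the
// actual project. Hypothetical build/run commands:
//   hipcc -shared -fPIC gtt_shim.cpp -o libgttshim.so
//   LD_PRELOAD=./libgttshim.so python my_pytorch_script.py
#include <cstddef>

// Declare only what we need instead of pulling in the HIP headers, which
// would clash with our replacement definition of hipMalloc below.
extern "C" int hipHostMalloc(void** ptr, size_t size, unsigned int flags);

// Redirect device allocations to pinned host (GTT) memory. On an APU the
// GPU can access this directly, so PyTorch never hits the tiny VRAM limit.
extern "C" int hipMalloc(void** ptr, size_t size) {
    return hipHostMalloc(ptr, size, 0 /* hipHostMallocDefault */);
}
```

The obvious trade-off is that every allocation lands in host memory, even buffers that would have fit in the carve-out, so I'd expect some performance cost on top of the compatibility win.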
I wouldn’t want to count on Framework adding these options to the BIOS, so we're probably going to have to cross our fingers that shared-RAM support gets more attention (and more code) soon.
I’m still unsure whether the max dynamic allocation is half of system RAM, or whether that also varies between individual laptops, and I already feel like I'm making a lot of tradeoffs to support Framework, haha.
Max dynamic allocation is half, which IIRC is a limitation of either the drivers or the OS (can’t remember which). I’ve seen people mention there’s a workaround to allow more, although I haven’t found it.
Edit: Here’s some discussion about this (including how to override it on Linux).
GitHub - DavidS95/Smokeless_UMAF
With 64 or 96 GB of RAM, it would be nice if we could have 32/48/64 GB available to run some demanding AI models, like Mistral LLMs such as open-mixtral-8x7b, which needs ~30 GB (Model Selection | Mistral AI Large Language Models); a rough back-of-the-envelope check is sketched at the end of this post. I tested it on the CPU (16-core Ryzen 5950X) with the AMD build of LM Studio (https://lmstudio.ai/): a bit slow, but good results. I'd like to use the GPU, which could be noticeably faster, but right now it can only use VRAM, not GTT memory.
An A6000 or H100 is good for training and/or running AI models (LLMs in this case) at large batch sizes, but for local inference (i.e. batch size 1) of an LLM, memory capacity is all that's really needed.
open-mixtral-8x7b works on a 16-core Ryzen 5950X (slowly); I can't test it, but I think it would run at a decent speed on Zen 4 (an AMD Ryzen 9 7950X3D). So I'm pretty sure it could be even faster on the RDNA3 iGPU of the 7840 (U/HS).
(And maybe even better once AMD lets us use the NPU…)
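The back-of-the-envelope check for the ~30 GB figure (my assumptions: ~46.7B total parameters for Mixtral 8x7B, roughly 5 bits per weight for a Q5-class quantization, and a guessed 2 GiB for KV cache and runtime buffers):

```cpp
// Rough estimate of the memory needed to run open-mixtral-8x7b locally.
// Parameter count, bits/weight and overhead are my assumptions, not
// official numbers.
#include <cstdio>

int main() {
    const double params          = 46.7e9; // Mixtral 8x7B total parameters (approx.)
    const double bits_per_weight = 5.0;    // assumed quantization level
    const double overhead_gib    = 2.0;    // KV cache + buffers (rough guess)
    const double weights_gib = params * bits_per_weight / 8.0 / 1073741824.0;
    std::printf("~%.1f GiB weights + ~%.1f GiB overhead ~= %.0f GiB total\n",
                weights_gib, overhead_gib, weights_gib + overhead_gib);
    return 0;
}
```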
Really? I haven't tried this llama.cpp version. Do you mind sharing the results you've seen?
In my experience, StableDiffusion performed much better on the iGPU than on the CPU alone. But I could only use it within the UMA memory limits, of course.
It was a while ago and I didn't save the results. I was playing with Llama 2 70B and got around 2 tokens/s on the CPU and a bit over 1 on the iGPU. I did verify that it was using the GPU: amdgpu_top showed full load and appropriate VRAM usage. I don't really know what I'm doing, though.
I’ve also tried llama.cpp, and yeah, it does seem quite a bit slower on the GPU than on just the CPU on the 7840U.
With really large models on the GPU, I had the GPU crash (I forget which kernel/driver versions I had, though); I only did a short test.
In theory, with 96GB of memory I could run really large models, but they take a long time right now, and I haven't really found a use case to explore this further.
I just ran some tests with a 7B model. The GPU version, compiled with the LLAMA_HIP_UMA=ON option, outperforms the CPU by an order of magnitude: ~172 t/s vs ~15 t/s (this was on battery, on the Power Save profile).
I have two 32GB DIMMs installed and am running Arch Linux with kernel 6.8.7.
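For anyone wondering what the flag changes: my understanding (a simplified sketch, not the actual ggml/llama.cpp source) is that the UMA build swaps the plain device allocation for a managed one, so buffers can be backed by system RAM instead of the BIOS VRAM carve-out, roughly like this:

```cpp
// Simplified sketch of what I understand LLAMA_HIP_UMA to do; NOT the actual
// ggml/llama.cpp code. The point: managed memory can live in system RAM, so
// model weights are no longer limited to the dedicated VRAM carve-out.
#include <hip/hip_runtime.h>
#include <cstdio>

static hipError_t alloc_weights(void** ptr, size_t size, int device) {
#ifdef HIP_UMA_SKETCH // stand-in for the real build option
    hipError_t err = hipMallocManaged(ptr, size);
    if (err == hipSuccess) {
        // Hint coarse-grained GPU access on this buffer (my assumption about
        // why the UMA path stays reasonably fast).
        hipMemAdvise(*ptr, size, hipMemAdviseSetCoarseGrain, device);
    }
    return err;
#else
    (void)device;
    return hipMalloc(ptr, size); // regular path: must fit in dedicated VRAM
#endif
}

int main() {
    void* buf = nullptr;
    size_t size = 8ull * 1024 * 1024 * 1024; // 8 GiB, far above a 512MB/4GB carve-out
    hipError_t err = alloc_weights(&buf, size, /*device=*/0);
    std::printf("8 GiB allocation %s\n", err == hipSuccess ? "succeeded" : "failed");
    if (err == hipSuccess) hipFree(buf);
    return 0;
}
```

hipFree() works for both paths; treat the details (especially the advise hint) as illustrative rather than a description of the real patch.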
Also, I noticed when running phi3 with ollama in the high-performance power mode that a HIP_UMA-patched version I built (~12 t/s) is slower than the CPU version (~20 t/s). The model is also small enough to fit in 4GB of VRAM, where I get around 26 t/s. This was with a simple prompt, not a proper benchmark, though.