Framework Laptop 13 Ryzen 300 - Configuring graphics memory

I was wondering if there is any information on how much RAM can be dedicated to graphics in the new Framework Laptop 13 “Ryzen™ AI 9 HX 370”; specifically:

  • How much RAM can be dedicated to the graphics (I believe those processors can use up to 32 GB)?
  • Is that amount configurable, or is it just dynamically allocated? If configurable, is it set in the BIOS, in an app, or both (I’ll be using Linux, if that’s relevant)?

I’m asking because I’m considering how much RAM I should get for my (soon-to-be) new laptop.

PS: This is probably a question for support; I’ll ask here first in case someone already knows the answer. If not, I’ll post an update if I end up reaching out to Framework.

For most practical purposes it doesn’t matter, as main memory acts as a fallback. Because an APU/iGPU allocates its video memory from the same physical pool as main memory, the performance should be virtually the same. The rare exception is anything that can’t be read directly from main memory, like framebuffers; those resources need to be copied into video memory before use. As far as I know, all modern APUs/iGPUs, as well as AMD dGPUs, support uniform memory mapping, i.e. they use the same memory mapping as the CPU or other connected processor. That means the only additional cost of using the “foreign” memory is the difference in physical layout, and since there is none here, it shouldn’t matter as long as all resources that can only reside in video memory fit in the allocated space.

There do exist reasons to allocate more video memory than the minimum needed to fit the framebuffer. When video memory is backed by main memory, it competes with cached files and anonymous pages under memory pressure to stay in RAM rather than be swapped to disk, and video memory is generally bandwidth sensitive and doesn’t live well on slow drives.
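
If you want to see how this is split on a Linux box, something like the sketch below (rough and untested on this exact hardware; it assumes card0 is the amdgpu device and that the standard amdgpu sysfs counters are exposed) prints the reported VRAM and GTT pool sizes and usage:

```python
#!/usr/bin/env python3
# Rough sketch: read the amdgpu memory-pool counters from sysfs to see how
# much dedicated VRAM vs. GTT (system memory usable by the GPU) the driver
# reports. Assumes card0 is the amdgpu device; adjust if you have several.
from pathlib import Path

dev = Path("/sys/class/drm/card0/device")
for name in ("mem_info_vram_total", "mem_info_vram_used",
             "mem_info_gtt_total", "mem_info_gtt_used"):
    try:
        value = int((dev / name).read_text())
        print(f"{name}: {value / 2**30:.1f} GiB")
    except FileNotFoundError:
        print(f"{name}: not exposed by this kernel/driver")
```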

/Zoe

Fair points - if anyone who stumbles across this thread wants a TL;DR: you don’t get to configure the allocation; it’s allocated dynamically by the system.

edit: doesn’t look like I can mark your answer as the correct one… but it is

Wdym 32 GB? https://www.amd.com/en/products/processors/laptop/ryzen/ai-300-series/amd-ryzen-ai-9-hx-370.html says this CPU can handle up to 256 GB of RAM.

I thought I read somewhere that the dynamic memory allocation for the GPU is limited to 32 GB, but I’ve not been able to find that anywhere, so I think I must have misread something.

Ah yes, GTT etc is a separate thing.

FW Desktop promo talks about 96 GB out of 128 GB being shareable.

Considering we won’t be able to fit more than 2 x 64 GB SO-DIMMs into FW13, it’s likely going to be similar?

I’ve also seen talk of 110 GB GTT allocation possible on Linux, but haven’t dug into details of it yet.
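
For anyone who wants to experiment: the amdgpu kernel module has historically exposed a gttsize parameter (in MiB) that can raise or lower that limit, e.g. via a modprobe.d entry like the one below. Treat the value and its behaviour on this particular chip as unverified, and note that newer kernels have been moving away from this parameter.

```
# /etc/modprobe.d/amdgpu-gtt.conf -- illustrative only, size is in MiB
# (110592 MiB is roughly 108 GiB; pick a value that fits your installed RAM)
options amdgpu gttsize=110592
```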

I believe you’re talking about the Framework Desktop with 128 GB of memory. The AI 300 laptop chip is a different family of chips with a similar name.

/Zoe

Should make no difference to the topic at hand? All these modern chips support up to 256 GB RAM, of which only 128 GB is currently possible on laptops with 2 memory slots.

(Not sure if 128 GB SO-DIMMs will ever emerge?)

It does matter, as they have different GPUs, and the GPU is what maps video memory; the more capable the GPU, the more video memory it can map. Now, it is possible the two GPUs can map the same amount of video memory, in which case it wouldn’t matter, however unlikely that might be.

/Zoe

Some relevant links:

Another relevant link: LVFS: Laptop 13 AMD Ryzen AI 300

It does look like there might be some BIOS option to tweak the GPU memory allocation.

The 96/128 GiB is a Windows/driver limit, because Windows hands out that memory (it cannot use what it lets the GPU driver allocate for GPU/hardware use). If Windows / the GPU driver currently limits this to no more than 3/4 of total memory for the GPU, then that is it. But this is not a hardware limit; if it makes sense to change, Microsoft or AMD can change that. With Linux, you may not even have that limit in the first place, and even if you did, you could change it yourself. Look at any GPU you have under Windows: it will tell you how much “shared GPU memory” Windows is prepared to hand out. This is currently half of my physical memory (96 GiB) for my dGPU and my iGPU each.

But this is also independent of what BIOS options are offered for a static allocation. I don’t know how far AMD’s predefined options go there or how many of those FW will adopt. Since that option runs counter to the concept and benefits of shared memory and is mostly for legacy compatibility, it might not go as high, because there is no legacy game that needs more than 16 GiB of memory. Any newer software that expects more VRAM has no justification for not supporting modern APIs that work on shared memory, dynamically allocated directly.

No. It is shared memory. By its nature, the iGPU can access ALL of the memory the CPU’s memory controller supports. So the max. amount possible in shared memory can be accessed. How much of it you can “map” is much more of a driver/OS thing and not arbitrarily limited by the GPU or hardware, as the underlying architecture (virtual memory) is already scalable.

The hardware can’t access memory that isn’t mapped in the physical address space. How much memory can be mapped depends on the size of the pointers in hardware, typically 48-bit, but some graphics cards have shorter pointers, or at least did at some point in the past.
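
For scale, a quick back-of-the-envelope check (plain arithmetic, not specific to any particular GPU) of how much memory an N-bit physical address can reach:

```python
# How much memory an N-bit address can cover, in TiB
for bits in (40, 44, 48):
    print(f"{bits}-bit: {2**bits / 2**40:.0f} TiB")
# 40-bit: 1 TiB, 44-bit: 16 TiB, 48-bit: 256 TiB
```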

/Zoe

You might be confused about what “physical address space” is.
Physical memory is the actual RAM, where the addresses are basically contiguous.
In a system where you have virtual memory, which you will have if you have significant dynamic allocations and don’t want your memory to fragment, the addresses that are used by 99% of software are virtual addresses.

Hardware translates these on accesses to physical addresses. So that you can have contiguous virtual memory ranges for the software mapped to non-contiguous physical memory, which you can swap out or move around without affecting the virtual addresses.

In such a system, a virtual address can resolve to almost any physical memory address, and the lookup tables used need to fit any physical address, because otherwise you could not dynamically allocate the memory in any order.

What you describe as “mapping” is the OS/driver adding an entry into the respective page table that associates a free physical block of memory with a newly allocated virtual block of memory.

But the page table and address translation hardware were already designed to fit any physical memory address, and the page tables are already designed to scale. And typically you have more virtual addresses than physical ones, so you can actually underprovision the memory; you wouldn’t typically run out of virtual address space before running out of physical address space.
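
As a toy illustration of that translation step (purely illustrative Python with made-up page and frame numbers, not how any real MMU is implemented):

```python
# A page table maps virtual page numbers to arbitrary physical frame numbers,
# so contiguous virtual addresses need not sit in contiguous physical memory.
PAGE_SHIFT = 12                      # 4 KiB pages
page_table = {0: 0x7C3, 1: 0x015}    # virtual page -> physical frame (arbitrary)

def translate(vaddr: int) -> int:
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    return (page_table[vpn] << PAGE_SHIFT) | offset

print(hex(translate(0x0004)))   # 0x7c3004 -- virtual page 0 lives in frame 0x7c3
print(hex(translate(0x1004)))   # 0x15004  -- the next virtual page is somewhere else
```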

Limiting the bit width of physical addresses that can be resolved would run counter to everything. That would lead to a system with supposedly shared memory where you can access 100% from the CPU but only the first 25% from the iGPU. This could only work if the OS tried its very best to never use that first 25% for CPU uses, because once those are used, you starve the GPU of any memory. That would lead to static allocation in practice, having to reserve the memory. So no, that is not how it is done.

Also, don’t forget that we are talking about multiple components sharing a die on the same technology node, along with the core interconnect and memory controller. Why would you want to save a few bits on one of the components attached to such an interconnect when all the other components can make it work on the same technology node? It’s the same interconnect handling all of it. You would strive to reuse the same building blocks for address translation in all the places.