I had the 16GB configuration. The integrated GPU can reserve up to 50% of the RAM capacity, so 8GB. My RAM was limiting which LLMs I could run, so I decided to upgrade to 32GB.
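As a rough back-of-the-envelope check (the 50% cap and a hand-wavy ~0.6 bytes per parameter for a Q4-ish quant plus some KV-cache headroom are my assumptions here, not exact numbers), you can see why 16GB was tight:

```python
# Sketch: does a quantized model fit in the GPU-reservable half of system RAM?
# Assumptions: iGPU can claim up to 50% of RAM, ~0.6 bytes/parameter for a
# Q4-style quant including a bit of overhead. Real footprints vary.

def fits_in_gpu_share(ram_gb: float, model_params_b: float, bytes_per_param: float = 0.6) -> bool:
    gpu_share_gb = ram_gb / 2                     # up to 50% of RAM for the iGPU
    model_gb = model_params_b * bytes_per_param   # very rough quantized footprint
    return model_gb <= gpu_share_gb

for ram in (16, 32):
    for params in (7, 13, 30):
        verdict = "fits" if fits_in_gpu_share(ram, params) else "too big"
        print(f"{ram}GB RAM, {params}B params: {verdict}")
```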
Framework has a great page about RAM compatibility.
Framework sells a kit for 90€. I got a compatible 16GB DDR5-5600 CL46 stick for 54€.
Upgrading the RAM is really easy; it's the easiest laptop to work on. It's just 5 screws and one long flat cable. The upgrade worked without having to fiddle with memory profiles: the stick runs at 5600 out of the box.
Latency improved from 197ns to 152ns and bandwidth roughly doubled, from 22.1GB/s to 43.3GB/s. I was under the false impression that with DDR5 there wasn't a big penalty for running a single stick because there were two ranks; in reality the penalty shows up in bandwidth-starved applications, which for an APU is basically all applications ^^'
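If you want a quick sanity check of your own memory bandwidth without installing a benchmark suite, a few lines of numpy give a crude single-threaded estimate. This is not what produced the numbers above, and it will report lower than a dedicated tool, but it's enough to see the before/after difference:

```python
# Crude single-threaded sequential-read bandwidth estimate with numpy.
# A real benchmark uses multiple threads and tuned access patterns,
# so expect this to undershoot the peak figures.
import time
import numpy as np

size_gb = 2
a = np.ones(int(size_gb * 1024**3 // 8), dtype=np.float64)  # ~2GB of doubles

best = 0.0
for _ in range(5):
    start = time.perf_counter()
    a.sum()                                   # forces a full read of the array
    elapsed = time.perf_counter() - start
    best = max(best, size_gb / elapsed)

print(f"~{best:.1f} GB/s sequential read (single thread)")
```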
I benchmarked some LLMs after the upgrade with LM Studio. I can now run bigger models with larger context! The Framework is about three times as fast when the whole model is loaded on the GPU, and loading on the CPU makes no meaningful difference to the context size I can run.
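For anyone who wants to reproduce a rough tokens-per-second number: LM Studio can expose an OpenAI-compatible local server (http://localhost:1234/v1 is the default address), and a small sketch like the one below times a single completion. The model name is just a placeholder for whatever you have loaded, and the figure includes prompt processing, so treat it as approximate.

```python
# Rough tokens/sec measurement against LM Studio's local OpenAI-compatible server.
# Assumes the server is running on the default port and a model is loaded.
import time
import requests

URL = "http://localhost:1234/v1/chat/completions"
payload = {
    "model": "local-model",   # placeholder; LM Studio uses the currently loaded model
    "messages": [{"role": "user", "content": "Write a short paragraph about laptops."}],
    "max_tokens": 256,
    "temperature": 0.7,
}

start = time.perf_counter()
resp = requests.post(URL, json=payload, timeout=300).json()
elapsed = time.perf_counter() - start

completion_tokens = resp["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```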