Ah yes, getting bleeding-edge tech off AliExpress is always a good experience XD
The backorder period is done and my 96GB will arrive this week. Framework kindly removed the 64GB RAM I had on my preorder. Now I just need to get the laptop!
Every discussion about the new non-power-of-two sized kits immediately veers into value and “how much RAM you really need”. What I have yet to find is: why? Why have we never seen stick density at this size before? Was it only possible with DDR5 or modern controllers? Is it a physical size thing? Is there a different architecture that allows arbitrary die counts? Is this an Nvidia moment where there are six dies of one size and two dies of another for yield purposes?
Having dealt with the “only get Samsung B-Die” days of Ryzen, I want to know whether weird corners are being cut or whether these introduce esoteric considerations, especially if something was snuck in that affects performance, like a triple-rank architecture or something that affects timing/voltage compared to “normal” 16 and 32 gig sticks.
Given that you could previously already use one 32GB DIMM and one 16GB DIMM per channel on standard memory controllers without any performance degradation, this does not seem that crazy (assuming the modules are matched, do not have conflicting latencies, and ignoring that you probably could not drive two DIMMs at the full speed the IMC can drive a single DIMM).
All that means is that you are not getting full use out of the most significant address bit, and that to check whether a certain location exists, you can no longer say “bits 0–20 are valid, everything above bit 20 does not exist”.
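To make that concrete, here is a toy sketch of the decode difference (just the idea, not any real IMC's logic):

```python
# Toy sketch of the decode difference (the idea only, not any real IMC's logic).
# With a power-of-two capacity, "does this address exist?" is a simple bit mask;
# with a non-power-of-two capacity it has to be a magnitude comparison.

POW2_CAP = 64 * 2**30  # 64GB: exactly 2**36, so bit 36 cleanly splits valid/invalid
ODD_CAP = 48 * 2**30   # 48GB: the top of memory falls between bit 35 and bit 36

def valid_pow2(addr: int) -> bool:
    # Valid iff no bit at or above bit 36 is set.
    return (addr & ~(POW2_CAP - 1)) == 0

def valid_odd(addr: int) -> bool:
    # No single-bit shortcut; compare against the actual limit.
    return addr < ODD_CAP

print(valid_pow2(63 * 2**30))  # True  (63GB < 64GB)
print(valid_odd(63 * 2**30))   # False (63GB > 48GB)
print(valid_odd(47 * 2**30))   # True  (47GB < 48GB)
```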
But if 128GB was already supported and you could already have total amounts of RAM that are not a power of 2, then this support was present all along. So my guess is that this mostly comes down to the firmware setting up the IMC correctly. That also looks to be where AMD had a few problems / did not have their firmware ready at launch, whereas Intel did. So I would not even think that this is some new mandatory DDR5 feature; it was theoretically possible for a while, but previously the gap between two memory sizes was not wide enough for anybody to bother with the testing needed to ensure it actually works in the real world.
Edit:
My 48GB DIMMs (not SO-DIMMs) have two ranks. I cannot tell whether one rank is just 32GB and the other 16GB, just like you could have done with two separate DIMMs in the same channel, or whether both ranks simply have a top address that is not a power of 2. But I do not see where that would make a performance difference, as long as the hardware is capable of representing it.
I personally think it’s only a matter of time before 256GB of DRAM is supported, and Micron is preparing for mass production of new memory modules.
I did go for the Mushkin 96GB 5600MHz kit in the end, and it works really well! Part number is MRA5S560LKKD48GX2 and it is currently available on Newegg for quite a bit less than I paid.
I thought there was a problem at first because the display was completely blank, but after taking the memory modules out and swapping them it worked. I’m guessing they just weren’t fully seated despite having clicked into place.
Would still buy 128GB though…
Same for me. But there’s no 128GB kit at the moment, so if you find one, tell us! In the meantime, my 2x 32GB Kingston Fury Impact will do.
If you look at the 48GB SODIMMs, they are pretty much completely filled with the highest-density RAM chips you can currently get.
We are going to need denser RAM chips before we can have bigger SODIMMs, but that will happen eventually. We are still pretty early in the DDR5 cycle.
The DDR4 standard topped out at 32 GB per stick for unbuffered RAM. Larger sticks were possible with registered memory, though that’s normally only seen on the full size DIMMs rather than SODIMMs because the big sticks can hold a lot more RAM chips.
The DDR5 standard is designed to accommodate larger RAM modules. The limiting factor on laptop memory now is not the standard, but how many chips can be squeezed onto the little SODIMM, and the fact that an unbuffered memory module can only support a limited number of RAM chips. Unlike previous standards, DDR5 also allows modules that don’t have a power-of-two size. We’ll certainly see even larger SODIMM sticks in the future; they will probably work in the Ryzen-based Framework laptops, but we’ll have to wait and see.
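For a rough sense of the numbers, here is some napkin math; it assumes the 48GB sticks are dual-rank with x8 chips built on the newer 24Gbit dies, which is my understanding of how these kits are put together:

```python
# Napkin math for a 48GB dual-rank SODIMM (assumes x8 chips and the newer
# 24Gbit dies; with the old 16Gbit dies the same layout tops out at 32GB).

die_gbit = 24              # 24Gbit die = 3GB per chip, the new non-power-of-two density
chips_per_rank = 64 // 8   # 64-bit data bus filled with x8 chips = 8 chips per rank
ranks = 2

module_gb = (die_gbit / 8) * chips_per_rank * ranks
print(module_gb)           # 48.0; swap in die_gbit = 16 and you get 32.0
```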
@bud thank you for reporting back on this! i’m having trouble finding a supplier for that kit right now (newegg and i have had some disagreements in the past that preclude future business), but i’m very glad to hear the board supports that much ram!
Crucial has since started offering a 96GB kit (CT2K48G56C46S5) if that is easier to find. No idea if it works, but it seems that others are having decent luck with Crucial kits in general (5600 MHz anyway).
oh hey that is easy to source. i’m a little more trepidatious about grabbing a kit someone else hasn’t tested, but i’m going to think real hard about this while i arrange funds for my framework.
edit: i notice the knowledge base has this:
Note: We do not currently have XMP memory support on the Framework Laptop. We recommend using DRAM that natively runs at DDR4-3200 speeds. While XMP memory should safely fall back to a slower speed, we have seen customer reports of some XMP memory modules from HyperX and other brands not booting, especially when used in Channel 0.
and looking at a post in the q&a from micron on the amazon listing for the part you linked:
We would like to inform you that Crucial DDR5 RAM’s are JEDEC RAM’s and need XMP profile to be enabled from BIOS to get advertised speed.
that’s concerning. has the xmp situation changed since that kb article, or in particular with the amd board?
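in the meantime, here’s roughly how i’d sanity-check what speed a kit actually trains at on linux (a sketch; the dmidecode field names vary a bit between versions):

```python
# Sketch: check what speed the modules actually trained at on Linux.
# Parses `sudo dmidecode -t memory`; the field is "Configured Memory Speed"
# on newer dmidecode versions, "Configured Clock Speed" on older ones.
import subprocess

out = subprocess.run(
    ["sudo", "dmidecode", "-t", "memory"],
    capture_output=True, text=True, check=True,
).stdout

for raw in out.splitlines():
    line = raw.strip()
    if line.startswith(("Speed:", "Configured Memory Speed:", "Configured Clock Speed:")):
        print(line)  # rated speed vs. the speed it actually runs at
```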
I’m not a RAM expert but that article is definitely out of date because the AMD Framework requires DDR5, not DDR4. Not sure about the XMP stuff though, so I see why you might want to go with the Mushkin.
looking at other threads, it seems like xmp is still a problem. unfortunate. i might just have to shrug and settle for a 2x32 kit. truth be told i don’t NEED 96GB total, as 64GB is enough for my workloads; it would just be nice.
well, now that the thread has gotten rather abbreviated, and since my last post from november is a little outdated, i should revise it: my plans have firmed up to get the 2x48 crucial kit for my upcoming machine, since it’s been demonstrated to work well on both the FW13 and FW16 amd boards.
Yeah, I’m still tossing up between the 2x 48GB kits, which are CL46, and the 2x 32GB kits, which can be had at CL40 if you pick the right brand. But I suspect I wouldn’t notice any significant performance difference in real-world situations. The price difference also seems fairly minimal.
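Doing the napkin math on the latency (assuming both kits run at 5600 MT/s, which is worth double-checking per part number):

```python
# Napkin math: first-word CAS latency in ns = CL * 2000 / (MT/s),
# since DDR's I/O clock runs at half the transfer rate.
# Assumes both kits run at 5600 MT/s (check the exact part numbers).

def cas_ns(cl: int, mt_per_s: int) -> float:
    return cl * 2000 / mt_per_s

print(round(cas_ns(46, 5600), 1))  # ~16.4 ns for a CL46 2x48GB kit
print(round(cas_ns(40, 5600), 1))  # ~14.3 ns for a CL40 2x32GB kit
```

So a difference of roughly 2 ns on first access, which gets buried under everything else in real workloads.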
in my case, i don’t think i’d be able to measure the difference between those two on a raw performance standpoint.
i am, however, able to pretty concretely perceive the overhead of regularly spinning up an entire vm that i’d much rather just leave idling until i need it.
i think as of today, Framework is selling a 2x 48GB (96GB total) configuration directly! (For the Framework 16 Ryzen 9, at least.) Probably thanks in part to bud’s and others’ testing here!