What do we think of this possible new memory standard (CAMM)?

I see it being some time before Framework does anything serious with CAMM modules, as there is currently only a single supplier. That's a recipe for component shortages, and you can almost guarantee those will hit at the worst possible time. So I wouldn't expect any visible use of these until there are multiple vendors of the memory modules.


I don't know how many laptops Framework currently sells per quarter/year, but since their numbers should be comparatively low, wouldn't that make them an ideal testing ground for these LPCAMM2 modules, especially if the early-adoption tax for them is high?


History suggests that Framework will prioritize letting people re-use memory modules when upgrading from older platforms over performance; see the decision to use DDR4 on the 12th and 13th Gen Intel boards when the memory controller supported both DDR4 and DDR5. The iGPU on those platforms could have been much faster with DDR5-5600.
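To put rough numbers on that iGPU point, here's a back-of-the-envelope peak-bandwidth comparison. This is just a sketch: it assumes the usual 128-bit (dual-channel) bus on these mainboards and DDR4 at 3200 MT/s, and real sustained bandwidth is always lower than peak.

```python
# Peak memory bandwidth ~= transfer rate (MT/s) x bus width (bytes).
# Assumes a 128-bit (16-byte) dual-channel bus.
def peak_bandwidth_gbs(mt_per_s: int, bus_bytes: int = 16) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return mt_per_s * bus_bytes / 1000

ddr4_3200 = peak_bandwidth_gbs(3200)  # 51.2 GB/s
ddr5_5600 = peak_bandwidth_gbs(5600)  # 89.6 GB/s
print(f"DDR4-3200: {ddr4_3200} GB/s, DDR5-5600: {ddr5_5600} GB/s")
print(f"DDR5-5600 offers ~{(ddr5_5600 / ddr4_3200 - 1) * 100:.0f}% more peak bandwidth")
```

That's roughly 75% more peak bandwidth left on the table, and iGPUs are usually bandwidth-bound.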

If I were betting, I'd guess we won't see Framework consider this until they're shipping an SoC that doesn't support DDR5.

The performance and battery life improvements are significant enough that the benefits would outweigh the downside of scrapping currently owned DDR5 SODIMMs. I would certainly be in favor of it. Either iGPU performance gets much better at little to no cost to battery life, or performance stays flat with improved efficiency/battery life. Seems great to me!


I’m for it. Especially since when I’m ready to upgrade from my Ryzen 7000 motherboard, it will go into a desktop case or some other format rather than being thrown away, so my existing SODIMMs won’t go to waste. For me, LPCAMM2 will mostly be about improved battery life rather than performance, though some extra iGPU speed would be welcome.


Umph, this is quite the price premium: ~2x the cost of 2x32GB SODIMMs. On the other hand, it’s LPDDR5X; I have no clue how the cost of those chips compares to regular DDR5 chips when they’re soldered down.

On the gripping hand, it’s a brand new form factor without a lot of production yet. I think production will ramp up over the next year or two, so modules won’t stay quite so expensive.

Also, those power usage difference numbers, wow. Nice.

That’s business as usual for brand new stuff. Prices are gonna drop as soon as competitors release their products, and even more once it becomes mainstream.


Sure, not unexpected, but I do wonder whether LPDDR is actually cheaper at the chip/technology level. I hope there’s not much, if any, price premium on the actual chips themselves.

Hopefully module prices will come down a lot in the next year; assuming we get another couple of major players producing modules, I’d expect so. Even if not quite down to current SODIMM prices.

I really hope this is what Framework is waiting for and the reason why they haven’t announced any new models so far this year.

I’m currently still on DDR4 so being able to skip DDR5 SODIMMs altogether would be nice.

On a certain famous website, 4800 MT/s DDR5 cost the same 20 months ago.

I don’t know anything about LPDDR5 vs DDR5 costs. My uninformed guess is that cost depends more on capacity/density and speed than on the memory technology. But I could be totally wrong.

But from what was reported (see the iFixit article and video above), a single LPCAMM2 module should require fewer/cheaper non-memory components than a pair of SODIMM modules.

Still, per-unit material and manufacturing costs can have only a limited effect on the price to the consumer. In this case, fixed costs (more significant at the initial low volume) and the market (no competition right now) probably drive prices up.


On Crucial.com, a normal 64 GB DDR5 SODIMM kit costs $229.99, though Framework asks $320 for the same capacity.
The 64 GB LPDDR5X module costs $329.99.

So if Framework were to price-match the vendor, the total price increase would be only about $10. Though I am not sure what the price increase would be for the other parts. It does look much more space efficient, so the rest of the board would gain some room.
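As a sanity check on the arithmetic above, using the listed prices (which will of course drift over time):

```python
# Price-match scenario from the Crucial listings quoted above.
sodimm_kit_crucial = 229.99    # 64 GB DDR5 SODIMM kit at Crucial.com
sodimm_kit_framework = 320.00  # what Framework charges for the same capacity
lpcamm2_kit = 329.99           # 64 GB LPDDR5X LPCAMM2 module

# Raw premium at Crucial's own pricing:
print(f"Premium over SODIMMs at Crucial: ${lpcamm2_kit - sodimm_kit_crucial:.2f}")
# If Framework matched Crucial's LPCAMM2 price, versus its current DDR5 kit:
print(f"Increase vs Framework's DDR5 kit: ${lpcamm2_kit - sodimm_kit_framework:.2f}")
```

So a ~$100 premium at retail, but only ~$10 over what Framework already charges for its DDR5 kit.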

Hm, the Framework kit is DDR5-5600, while the LPDDR5X LPCAMM2 runs at 7500 MT/s. Actually, I don’t think they make SODIMMs faster than 5600.

So I guess it’s hard to make an apples to apples comparison of pricing.

That’s not actually too terrible of a price premium, at this stage of things.

Kingston offers some 6000 MT/s and 6400 MT/s SODIMMs, however they achieve that by cranking power up by 50% and using non-JEDEC timings.

Currently no CPU officially supports DDR5 SODIMMs faster than 5600 MT/s, although faster ones can work on some systems. But 7500 MT/s LPDDR5X LPCAMM2 is supported by AMD (7467 MT/s on Intel).
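Rough peak-bandwidth math for that jump, as a sketch: both configurations assume the same 128-bit total bus width, and sustained numbers will be lower than these peaks.

```python
# Peak bandwidth = transfer rate (MT/s) x bus width (bytes); 128-bit bus = 16 bytes.
BUS_BYTES = 16

ddr5_5600_gbs = 5600 * BUS_BYTES / 1000     # 89.6 GB/s
lpddr5x_7500_gbs = 7500 * BUS_BYTES / 1000  # 120.0 GB/s

print(f"DDR5-5600 SODIMMs:    {ddr5_5600_gbs:.1f} GB/s peak")
print(f"LPDDR5X-7500 LPCAMM2: {lpddr5x_7500_gbs:.1f} GB/s peak")
print(f"~{(lpddr5x_7500_gbs / ddr5_5600_gbs - 1) * 100:.0f}% more bandwidth")
```

So roughly a third more memory bandwidth at officially supported speeds, before even counting the power savings.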

Gotcha. Wasn’t sure about the SODIMM speeds. And yeah, I figured LPDDR5X would be officially supported, since it’s a JEDEC standard.

So not only is the power usage lower, it’s potentially a decent bit faster for real-world performance. Nice. Whenever we do get it into more systems.


Now we have this!!! The cool thing is that FW could come out with a new mainboard using this new RAM and be faster than ever.

New Laptop Memory Is Here! LPCAMM2 Changes Everything!


Memory just got a LOT better!

I’m sure they’ll be looking at this for next+1 mainboards. The next mainboards have surely already been in layout/development, so they’re probably too far along to add this. So next+1 or next+2 (1–2 years from now-ish) is when I’d expect to see it from FW. By that time, I expect (hope!) there will be enough module volume that there won’t be much of a price premium per module, and they’ll have higher-capacity modules, 96GB or 128GB, by then. Unless my work VM needs change, I need 96GB. Well, really 64GB, but 96GB has made it so much easier to work without worrying about the VM OOM killer sweeping through.


I got 64 GB with my FW16.
My reasoning was double what I had. But I was banging my head against the 16 GB I had (soldered in, of course), so no less than 32, and since I want to run some stuff in VMs, double that. So 64.
I am not sorry about it either.

Ouch, 16GB soldered on is tough with a lot of VMs or big VMs. The reason I actually need so much is that our full stack keeps a ton of needed stuff in memory for speed. It’s normally one component per server, but I’m running all 4 parts, so it’s roughly 4x since each one keeps its own stuff in its own memory space. So when I need to run them all at once for testing and such, it’s a memory hog. I’d regularly have one of the smaller data store bits that they also depend on killed by the OOM killer.

The 50GB I have allocated is more than comfortable. The ~30GB I was able to allocate before was a bit tight. It’s showing ~20GB free, so I might decrease the allocated RAM a bit.
