No new information on the 16 this time

People keep talking about soldered RAM as if it’s the only option for higher-performance memory. Personally, I’d prefer to see LPCAMM used to maintain upgradability while still getting the performance benefits.

4 Likes

Thank you. The big thing would be if Framework announced LPCAMM2 modules with LPDDR6 on them; that would be big, but hey, we’ll see.

low chance on the LPCAMM2/LPDDR6 though

There’s a new thread on the forum with a person working on an MXM adapter for the expansion bay. This could be a cool way to be able to have an Nvidia GPU in the Framework without Nvidia’s consent.

3 Likes

That would be an interesting approach. Are there any numbers on MXM vs external GPU performance? I don’t remember if people have mentioned how well external GPUs work, but if the bandwidth on MXM is better than USB-C external GPUs, it’s likely a very good option.

I mean, when they were talking about soldered RAM in the desktop, it was AMD that said the RAM had to be soldered - https://youtu.be/-lErGZZgUbY?si=ox1ZdKkvULNL7evH&t=450

Hey! That’s me. Looks like we should be able to put a 4060, 4070, 4080, or 4090 mobile in the slot. It’s likely to be pretty bulky but it should work. V0.3 launching this afternoon with some updates. If you’re an electrical engineer I’d love some help - either DM me or send a message on the thread linked above.

3 Likes

It’s less about performance and more about portability. It’s all great to use an eGPU - they’re much faster - but the point of a laptop is for it to be, well, a laptop. With an MXM card, even if it’s a bit bigger, it’s feasible to carry it around and run it from the laptop battery.

2 Likes

Oh boy. A 4090 Mobile? In my Framework 16? Such a dream… Are MXM cards actually available to buy as a private citizen?

1 Like

MXM supports up to PCIe x16, so there’s no bottleneck like with OCuLink x4 or Thunderbolt. Also agree with Joe that it allows you to take the full package around.
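To put rough numbers on that bandwidth difference (back-of-the-envelope only; the per-lane rate assumes PCIe 4.0, and the Thunderbolt figure assumes the usual ~32 Gbps PCIe tunnel of the 40 Gbps link):

```python
# Back-of-the-envelope bandwidth comparison - all figures are approximate assumptions.
PCIE4_GBPS_PER_LANE = 2.0  # ~1.97 GB/s per PCIe 4.0 lane, rounded

links_gb_per_s = {
    "MXM (PCIe 4.0 x16)": 16 * PCIE4_GBPS_PER_LANE,
    "OCuLink x4 (assuming PCIe 4.0)": 4 * PCIE4_GBPS_PER_LANE,
    "Thunderbolt eGPU (~32 Gbps PCIe tunnel)": 32 / 8,  # Gbps -> GB/s
}

for name, bw in links_gb_per_s.items():
    print(f"{name}: ~{bw:.0f} GB/s")
# MXM (PCIe 4.0 x16): ~32 GB/s
# OCuLink x4 (assuming PCIe 4.0): ~8 GB/s
# Thunderbolt eGPU (~32 Gbps PCIe tunnel): ~4 GB/s
```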

The only thing is that MXM has a pretty limited power envelope of up to 120W or 150W, I think, so the performance will be lower than on desktop even for the desktop chips.

1 Like

yup :slight_smile:

1 Like

[DISCLAIMER - not a mechanical engineer]
Dimension-wise, I don’t think there’s huge room for improvement in the thermal department… The 7700S is capped at 100W, and the current cooling system holds up well, but I don’t think that >120W of dissipation is feasible without having the FW16 look like a giant brick :laughing:

1 Grand, oof! :joy:

Yeah I know… It’s not great, but I don’t think it’s insane either. Something like the 4080 Mobile, though, which is sold by the same site in the same form factor and is much more affordable, could make sense for many more people. It’s still a leap from the 7700S, is just as power efficient, and also unlocks the world of DLSS/CUDA for those who really rely on Nvidia. Remember that the whole point of developing an MXM adapter is that (assuming the standard doesn’t die completely) you could upgrade the same module to a 5090 when/if an MXM version releases.

Agree, lotsa caveats for this one. It would most likely need a bigger enclosure and a custom cooling solution instead of reusing the 7700S’s. But with all that it could kinda work and not be too-too big, would unlock Nvidia availability, and reduce the dependency on Framework for upgrades. Honestly, IMO it looks so good that Framework should make this product themselves.

1 Like

Well, what did you expect for a 4090? The desktop one was announced at 1.6 grand. Obviously the 4090 Laptop’s performance isn’t even equal to a desktop 4080(S), more like a 4070(S/Ti), but it’s not that crazy expensive.

The big problem I see with this project is… MXM isn’t really a standard in practice: every MXM card is different (look at the mounting points for coolers, for example), and each different GPU would require substantial effort to integrate into a FW16.

I’d really want this project to succeed though!

Well, I can kinda see the 9060 XT in the FW16; however, 150W is very much the limit of what’s even possible with a 240W charger while still having a CPU :sweat_smile: and charging functionality, and I doubt they’d get it cooled.
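Rough power math behind that (back-of-the-envelope; the CPU/platform figure is just an assumed number, not a measurement):

```python
# Back-of-the-envelope power budget - all numbers are assumptions, not measurements.
charger_w = 240        # max USB-C PD input
gpu_w = 150            # hypothetical 150W dGPU module
cpu_platform_w = 45    # assumed sustained CPU + board/display/fans draw

left_for_charging_w = charger_w - gpu_w - cpu_platform_w
print(f"Left for battery charging under full load: ~{left_for_charging_w} W")
# ~45 W left, and less whenever the CPU boosts higher - hence 150W being the practical ceiling
```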

I do agree on the CPU side; however, neither Strix Point (Ryzen AI) nor Strix Halo (AI Max) is going to cut it for the full functionality required by the FW16. You only have 16 usable lanes, but you need 17 or 18, I think. That comes from 8x for the GPU, 8x for the two possible SSDs (4x each), and 1x or 2x for the WiFi. If we’re looking at launches from 2025 and later, the only CPUs that kinda work are the Fire Range “desktop” parts and Ryzen 200. Fire Range is just a beast but basically requires a dGPU and a nuclear reactor, and Ryzen 200 is just a Ryzen 7000/8000 refresh.
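Spelling out that lane math (the per-device counts are the ones assumed above):

```python
# PCIe lane budget as assumed above - per-device lane counts are assumptions.
lanes_needed = {
    "dGPU in the expansion bay": 8,
    "two NVMe SSDs (x4 each)": 2 * 4,
    "WiFi": 2,  # 1x or 2x depending on the card
}
lanes_available = 16

total = sum(lanes_needed.values())
print(f"Needed: {total}, available: {lanes_available}, short by: {total - lanes_available}")
# Needed: 18, available: 16, short by: 2
```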

I just hope they announce some GPUs, even though AMD would need to make them first.

1 Like

Kind of yes and kind of no to the "different size/interface" thing. The 4080M and 4090M are the same size and have their dies in the same place, so one cooler will cover both. Likewise, the 4070M, 4060M, and 4050M are all the same size, so they only need one cooler between the three of them. I’m not planning to design heatsinks for thousands of cards, but I do think there is just about enough interest, especially for the higher-end cards, that developing it could be worthwhile. As a community we also don’t have much choice - MXM is the last surviving standard small enough to actually fit into the FW16, so it’s MXM or nothing, sadly.

Well, “last surviving standard” is sadly not even gonna cut it. It’s honestly a miracle there are still MXM cards at all, with no reasonably current laptop I know of actually supporting it. That’s why I was so surprised a year ago when I saw that there actually are 4090 MXMs.

AMD worked with Framework to see whether it was possible to use LPCAMM2 in the Desktop. AMD concluded that it could not be done; only soldered RAM would work. So it’s not going to happen in this generation. A future successor to Strix Halo might be able to work with LPCAMM2; we’ll just have to wait and see.