Framework Laptop 16 High-End GPUs?

I will think about it; I have no need to cancel at this moment, as shipping is months away. 0.4 kg probably isn’t a deal breaker anyway, and besides, neuro may very well be correct in suggesting it’s a conservative number.

1 Like

What is the downside of getting an eGPU here?
It’s faster to switch, it doesn’t wear out as much, and you can even plug in other GPUs (an upgrade) and use it with other laptops/computers.

Definitely will not be hot-swappable.

3 Likes

You can’t replace the part which should see the most wear. Half of the connector is literally part of the motherboard PCB and the GPU PCB. On that side, all the datasheets and pictures just show the contacts being gold-plated copper pads which are part of the structure of the PCB, in no way replaceable.

I do think Framework will probably find that the true durability is better than initially stated in the datasheet. But I wouldn’t bet on it being good enough to tolerate everyday swapping for years without issue.

If one accepts that this is mainly for upgrades and infrequent changes, then there is no issue. Remember that removing the Expansion Bay requires removing the keyboard and other input modules, removing several screws, and dealing with what could be a slightly fiddly connector cable within the laptop. This isn’t a pop-off / pop-on procedure. I would think this would naturally limit all but the most stubborn people.

3 Likes

The connector quoted with better-than-expected durability is not the Framework Expansion Bay Module connector, which is still rated for just 50 cycles.

Most eGPUs are heavily bottlenecked by Thunderbolt 3/4 and USB4’s PCIe 3.0 x4 tunneling (~4 GB/s), which is a far cry from the Expansion Bay Module’s PCIe 4.0 x8 connection (~16 GB/s, roughly 4x the bandwidth).
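For a rough sanity check on those figures (this is my own back-of-envelope arithmetic, not numbers from Framework):

```python
# Approximate one-direction PCIe bandwidth, ignoring higher-level protocol overhead.
def pcie_bandwidth_gb_s(transfer_rate_gt_s: float, lanes: int) -> float:
    encoding = 128 / 130  # 128b/130b line coding used by PCIe 3.0 and later
    return transfer_rate_gt_s * encoding * lanes / 8  # bits -> bytes

tb_egpu = pcie_bandwidth_gb_s(8.0, 4)    # Thunderbolt 3/4 tunnels PCIe 3.0 x4
bay_dgpu = pcie_bandwidth_gb_s(16.0, 8)  # Expansion Bay: PCIe 4.0 x8

print(f"eGPU over TB3/4: ~{tb_egpu:.1f} GB/s")      # ~3.9 GB/s
print(f"Expansion Bay:   ~{bay_dgpu:.1f} GB/s")     # ~15.8 GB/s
print(f"Ratio:           ~{bay_dgpu / tb_egpu:.0f}x")  # ~4x
```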

1 Like

I wonder if the dGPU can be disabled in the BIOS or something - leave it in but save power. Or was weight the concern, @Random_Matt?
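Even without a BIOS toggle, the OS can usually runtime-suspend an idle dGPU. A minimal sketch of checking that on Linux, assuming a standard amdgpu/sysfs setup (the PCI address is a placeholder, and I haven’t verified any of this on the FW16):

```python
# Sketch: check whether a PCIe dGPU is runtime-suspended on Linux.
from pathlib import Path

DGPU_PCI_ADDR = "0000:03:00.0"  # placeholder - find yours with `lspci | grep VGA`

power = Path("/sys/bus/pci/devices") / DGPU_PCI_ADDR / "power"
status = (power / "runtime_status").read_text().strip()  # "active" or "suspended"
control = (power / "control").read_text().strip()        # "auto" allows runtime PM
print(f"dGPU runtime PM: status={status}, control={control}")
```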

Back on the performance topic: hoping to see higher-end dGPU modules. But AMD hasn’t even released the full desktop 7800 yet, so I’m fully braced for a pretty basic GPU over the next couple of years. That will give me a great reason to upgrade that bit later!

1 Like

Weight. 0.4 kg is not really a game changer, though, to be honest.

1 Like

I was wondering if it is possible to add solder to the pads on the GPU side to increase their resilience to wear. I wouldn’t touch the motherboard side and would just keep that connected, but a test on an empty shell might be worth it. It would be difficult to get flat, but it sounds like an idea.
For anyone interested, perhaps the GPU module’s top layer will be the same as this SSD expansion test, which is 1.5 mil (a mil is a thousandth of an inch, so about 38 µm, though I’m not sure what that implies for wear):

Yeah, I will likely also only buy a dGPU module later. The Verge video did mention it was only about 2x the performance of the iGPU, which is just… not enough to justify it IMO. But most importantly, there has not been enough information from Framework about it yet:

  1. What does switching between the iGPU and the dGPU look like (also please on Linux, hopefully without rebooting), and how low-power can you get the dGPU when not in use? (A rough sketch of what this might look like on Linux follows this list.)
  2. How is the relative performance compared to both the low- and high-end iGPUs? Either internal or, better, external benchmarks would be great - I know it’s early, but I also can’t be expected to make a decision based on nothing.
  3. More info on the USB-C port on the GPU module - does it support MST for multiple displays? Why did you not just put full-size DP and HDMI ports on the module? I feel the fixation on USB-C here is quite limiting, especially since you’re already not passing a video output through to the expansion cards, so an external dongle is a requirement.
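On the Linux side, the usual model today is per-application offload via Mesa’s DRI_PRIME environment variable rather than a reboot; whether Framework’s module will behave exactly like this is one of the open questions above. A minimal sketch under that assumption:

```python
# Sketch: run a single application on the dGPU via Mesa PRIME offload,
# assuming a standard Linux Mesa/amdgpu stack (untested on the FW16).
import os
import subprocess

def run_on_dgpu(cmd: list[str]) -> int:
    """Launch a command with DRI_PRIME=1 so Mesa renders it on the offload GPU."""
    env = os.environ.copy()
    env["DRI_PRIME"] = "1"  # unset/0 = default (iGPU), 1 = offload (dGPU)
    return subprocess.run(cmd, env=env).returncode

# Everything else keeps rendering on the iGPU, letting the dGPU idle.
run_on_dgpu(["glxinfo", "-B"])  # should report the dGPU as the renderer
```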

I do have hopes of being able to play some light VR on the FW16 in the future (so I don’t have to buy a completely new PC on top of this expensive laptop), but I don’t think this is it yet. I feel the first generation of GPU is not quite living up to the potential of the module bay.
Also note that even though I talk about VR and a faster GPU, I would not buy an Nvidia GPU because of Linux support (though VR is likely to happen in Windows), so I realize I might just have to wait out a potential Nvidia release before a more powerful AMD option could arrive…
Anyway, I’ll have to wait for reviews for real-world performance and then decide - no Starfield :(

1 Like

2x performance is an underestimate in my opinion. It should be around 3x.

1 Like

Unfortunately, I think any faster GPUs from AMD this generation will have to be either Navi 32 or Navi 31 based, both of which are chiplet designs. I know that the chiplet-based cards they’ve released on desktop so far (7900 XTX and 7900 XT) severely underperform in VR applications for various reasons. The 7900 XTX actually runs slower than the previous-generation 6900 XT in most VR applications. The desktop 7600 (based on the same silicon as the 7700S) doesn’t seem to suffer the same issues and performs about where you would expect it to relative to its standard gaming performance uplift (see some benchmarks here: AMD Radeon RX 7600 Review: Affordable RDNA3 For 1080p Gamers - Page 3 | HotHardware).

I wouldn’t be surprised if a theoretical 7800S based on Navi 32 were actually slower than the current lower-end GPU in VR. You might be better off waiting for next-gen RDNA 3+ or RDNA 4 parts and hoping that AMD figures out the issues with their new architecture in regard to VR.

1 Like

The desktop 7600, based on the same silicon, is about 2.5x faster than the 780M iGPU depending on memory configuration, according to the rough figures from TechPowerUp’s relative performance charts (TechPowerUp). With the 7700S being power-limited quite a bit more than the desktop 7600, I don’t think 2x performance is unrealistic. I’m sure in some memory-bandwidth-sensitive applications the 7700S will pull way ahead, though.

Edit: I just realized that TechPowerUp has some data for the 7700S directly (TechPowerUp), and it should actually be very close in performance to the desktop part according to them. That would put performance at about 2.2-2.5x of the 780M, but it will obviously depend a lot on the workload, since there are so many differences between them.
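To make that chain of estimates explicit (the multipliers below are my rough readings of TechPowerUp’s charts, not measured results):

```python
# Back-of-envelope: chain the relative-performance factors.
rx7600_vs_780m = 2.5      # desktop RX 7600 vs 780M iGPU (memory-config dependent)
rx7700s_vs_rx7600 = 0.95  # assumed: 7700S lands close to the desktop 7600

estimate = rx7600_vs_780m * rx7700s_vs_rx7600
print(f"Estimated 7700S vs 780M: ~{estimate:.1f}x")  # ~2.4x
```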

4 Likes

TechPowerUp’s performance estimations (when they haven’t directly reviewed a GPU) are based on theoretical compute and overall architecture scaling. I would take them with a grain of salt; they can be better or worse.

5 Likes

That’s definitely true. I think the numbers seem reasonable, though (at least comparing the 7700S to the 7600), if AMD’s quoted “game clocks” can actually be sustained at 100 W.
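For what it’s worth, the paper math backs that up: both parts are Navi 33 with 32 CUs (2048 shaders), so at their quoted game clocks the theoretical FP32 throughput is nearly identical. The clocks below are AMD’s approximate quoted figures; whether they hold at 100 W is exactly the open question:

```python
# Theoretical FP32 throughput for two Navi 33 parts (32 CUs / 2048 shaders each).
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    # FMA counts as 2 flops; RDNA 3 can dual-issue FP32 for another factor of 2.
    return shaders * 2 * 2 * clock_ghz / 1000

rx7600 = fp32_tflops(2048, 2.25)   # desktop RX 7600 game clock (~165 W board)
rx7700s = fp32_tflops(2048, 2.20)  # mobile RX 7700S game clock (~100 W)

print(f"RX 7600:  ~{rx7600:.1f} TFLOPS")     # ~18.4
print(f"RX 7700S: ~{rx7700s:.1f} TFLOPS")    # ~18.0
print(f"Ratio:    ~{rx7700s / rx7600:.2f}")  # ~0.98
```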

1 Like

Not sure higher-end AMD options would necessarily be worse; I only found one YouTuber with stutters (apart from Oculus software issues), and he didn’t mention those in his follow-up, so I don’t know.
But I’m still very much on the fence. If the performance is about on the level of a desktop 3060, that would be plenty for me, and I’d probably get it even if it’s only ~2.5 times faster for so much more TDP. But I definitely need more info from Framework.

There have definitely been issues with the Navi 31 cards in VR, and I haven’t seen any confirmation of them being ironed out. If you check out any of the results for VR games in this review (The Hellhound RX 7900 XTX Takes on the RTX 4080 with 50 VR & PC Games – BabelTechReviews), you can see that the 7900 XTX is not only significantly slower than the 4080, which it would normally tie or beat in flat-screen games, but its frame-time plots are also much less consistent.

As for whether the card is “worth it” over the iGPU, I think that even if it isn’t a sensational performance uplift, it does cross an important threshold: 1080p gaming in modern titles becomes easy with just a bit of tweaking, versus being greatly compromised or locked to 30 fps on the iGPU.

2 Likes

Nice, thank you. Good to know. It also seems it’s not necessarily just memory latency from the larger package with the chiplet design - at least if the last SiSoft Sandra benchmark is to be believed, which puts it a bit higher than the RTX cards but slightly lower than the previous-gen 6900 XT (though higher than the 6800 XT). So there’s still hope it’s a driver bug. Also, only the OpenVR-based synthetic VR benchmark picked up on that, which could be a clue.

True, though I’m really only interested in VR and maybe older flat-screen games. But I’m strongly considering it.
Sadly, I’d likely have to buy both the shell and the GPU. I do want to experiment with the shell as well; it seems relatively easy to make custom PCBs for it.

Hah, just 10 hours ago the YouTuber I mentioned said AMD fixed the frame-time issues.

So it really seems it was a driver issue, and I wouldn’t doubt that higher-end AMD mobile GPUs will do just fine in VR.
Most likely I’ll wait for a higher-end AMD dGPU option, since the 780M is really just fine for flat-screen games (especially with FSR to cover the high internal screen resolution - see the quick calculation below).
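To illustrate why FSR helps so much on the FW16’s 2560x1600 panel, here are the internal render resolutions at FSR 2’s standard per-axis scale factors (taken from AMD’s documentation):

```python
# Internal render resolution per FSR 2 quality mode on a 2560x1600 panel.
NATIVE_W, NATIVE_H = 2560, 1600

FSR_MODES = {  # per-axis scale factors from AMD's FSR 2 documentation
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

for mode, scale in FSR_MODES.items():
    w, h = round(NATIVE_W / scale), round(NATIVE_H / scale)
    print(f"{mode:>17}: {w}x{h} (~{1 / scale**2:.0%} of native pixels)")
```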

2 Likes

Oh, well, that’s wonderful news! I guess I was just being too pessimistic about the possibility of a driver fix. Sorry for the FUD about RDNA 3, then.

If it took this long to fix, it is a genuine concern. Glad to see it resolved, though.

1 Like