I will think about it; I have no need to cancel at this moment as shipping is months away. 0.4 kg probably is not a deal breaker anyway, and besides, neuro may very well be correct in suggesting it is a conservative number.
What is the downside of getting an eGPU here?
It's faster to switch, it doesn't wear out as much, and you can even plug in other GPUs (upgrade) and use it with other laptops/computers.
Definitely will not be hot-swappable.
You can't replace the part that should see the most wear. Half of the connector is literally part of the motherboard PCB and the GPU PCB. On that side, all the datasheets and pictures just show the contacts as gold-plated copper pads that are part of the structure of the PCB, in no way replaceable.
I do think Framework will probably find that the true durability is better than initially stated in the datasheet. But I wouldn't bet on it being good enough to tolerate everyday swapping for years without issue.
If one accepts that this is mainly for upgrades and infrequent changes, then there is no issue. Remember that removing the expansion bay requires removing the keyboard & other input modules, removing several screws, and dealing with what could be a slightly fiddly connector cable within the laptop. This isn't a pop-off/pop-on procedure. I would think this would naturally limit all but the most stubborn people.
The connector quoted with better-than-expected durability is not the Framework Expansion Bay Module connector, which is still rated for just 50 cycles.
Most eGPUs are heavily bottlenecked by Thunderbolt 3/4 and USB4's PCIe 3.0 x4 tunneling, which is a far cry from the Expansion Bay Module's PCIe 4.0 x8 connection (4x the raw bandwidth, and more in practice, since Thunderbolt caps actual PCIe payload at roughly 22 Gbps).
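A back-of-the-envelope sketch of that gap (nominal link figures only; real-world throughput is lower on both sides, and Thunderbolt's PCIe payload cap widens the difference further):

```python
# Nominal PCIe link bandwidth: transfer rate x lanes x encoding efficiency.
# (Figures from the PCIe spec; protocol overhead is ignored.)
def pcie_gbps(transfer_gtps: float, lanes: int) -> float:
    return transfer_gtps * lanes * (128 / 130)  # 128b/130b line encoding

gen3_x4 = pcie_gbps(8.0, 4)    # Thunderbolt/USB4 tunnelled link: ~31.5 Gb/s
gen4_x8 = pcie_gbps(16.0, 8)   # Expansion Bay Module link:       ~126 Gb/s

print(f"PCIe 3.0 x4: {gen3_x4 / 8:.1f} GB/s")       # ~3.9 GB/s
print(f"PCIe 4.0 x8: {gen4_x8 / 8:.1f} GB/s")       # ~15.8 GB/s
print(f"Ratio: {gen4_x8 / gen3_x4:.0f}x nominal")   # 4x, before Thunderbolt's cap
```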
I wonder if the dGPU can be disabled in BIOS or something. Leave it in but save power. Or was weight the concern, @Random_Matt?
Back on the performance topic, I'm hoping to see higher-end dGPU modules. But AMD haven't even released the full desktop 7800 yet, so I'm fully braced for a pretty basic GPU over the next couple of years. It will give me a great reason to upgrade that bit later!
Weight. 0.4 kg is not really a game changer though, to be honest.
I was wondering if it is possible to add solder to the pads on the GPU side to increase their resilience to wear. I wouldn't touch the motherboard side and would just keep that connected, but a test on an empty shell might be worth it. It would be difficult to get flat, but it sounds like an idea.
For anyone interested, perhaps the GPU module top layer will be the same as this SSD expansion test, which is 1.5mil (not sure how to interpret that though):
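For what it's worth, a quick unit conversion might narrow it down (my assumptions only: "mil" normally means a thousandth of an inch in PCB specs, while plating thickness is more often quoted in microinches):

```python
# Two possible readings of "1.5mil" (assumptions, not confirmed by Framework).
UM_PER_INCH = 25400.0

as_mil = 1.5e-3 * UM_PER_INCH  # 1.5 mil -> 38.1 um: close to 1 oz copper (~35 um),
                               # so plausible as top-layer copper thickness
as_uin = 1.5e-6 * UM_PER_INCH  # 1.5 uin -> 0.038 um: thinner than typical hard
                               # gold plating (~30-50 uin), so less likely

print(f"1.5 mil = {as_mil:.1f} um; 1.5 uin = {as_uin:.3f} um")
```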
Yeah, I will likely also only buy a dGPU module later. The Verge video did mention it was only about 2x the performance of the iGPU, which is just… not enough to justify it IMO. But most importantly, there has not been enough information from Framework about it yet:
- What does switching between the iGPU and dGPU look like (also on Linux please, hopefully without rebooting), and how low-power can you get the dGPU when not in use? (See the sketch after this list for what I hope the Linux side looks like.)
- How is the relative performance compared to both the low- and high-end iGPUs? Either internal or, better, external benchmarks would be great - I know it's early, but I also can't be expected to make a decision based on nothing.
- More info on the USB-C port on the GPU module - does it support MST for multiple displays? Why did you not just put full-size DP and HDMI ports on the module? I feel the fixation on USB-C here is quite limiting, especially since you're already not passing a video output through to the expansion cards, so an external dongle is a requirement.
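On the Linux switching question: my hope is that the module just enumerates as an ordinary secondary GPU, in which case Mesa's standard PRIME render offload should route individual applications to it without a reboot. A minimal sketch under that assumption (whether this applies to the Expansion Bay module is my guess; DRI_PRIME itself is Mesa's real offload variable):

```python
# Sketch: run one program on the dGPU via Mesa PRIME render offload,
# assuming the Expansion Bay GPU shows up as a normal secondary GPU.
import os
import subprocess

env = dict(os.environ, DRI_PRIME="1")  # "1" selects the first non-default GPU

# Confirm which renderer actually gets used (glxinfo is from mesa-utils):
subprocess.run(["glxinfo", "-B"], env=env, check=True)

# The same environment would launch a game on the dGPU:
# subprocess.run(["/path/to/game"], env=env)  # hypothetical path
```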
I do have hopes of being able to play some light VR on the FW16 in the future (so I don't have to buy a completely new PC on top of this expensive laptop), but I don't think this is it yet. I feel the first generation of GPU is not quite living up to the potential of the module bay.
Also note that even though I talk about VR and a faster GPU, I would not buy an Nvidia GPU because of Linux support (though VR is likely to happen in Windows), so I realize I might just have to wait out a potential Nvidia release before a more powerful AMD option could release…
Anyway, I'll have to wait for reviews for real-world performance and then decide - no Starfield :(
2x performance is an underestimate in my opinion. It should be around 3x.
Unfortunately, I think any faster GPUs from AMD this generation will have to be either Navi 32 or Navi 31 based, both of which are chiplet designs. I know that the chiplet-based cards they've released on desktop so far (7900 XTX and 7900 XT) severely underperform in VR applications for various reasons. The 7900 XTX actually runs slower than the previous-generation 6900 XT in most VR applications. The desktop 7600 (based on the same silicon as the 7700S) doesn't seem to suffer the same issues, and performs about where you would expect it to compared to its standard gaming performance uplift (see some benchmarks here: AMD Radeon RX 7600 Review: Affordable RDNA3 For 1080p Gamers - Page 3 | HotHardware).
I wouldn't be surprised if a theoretical 7800S based on Navi 32 is actually slower than the current lower-end GPU in VR. You might be better off waiting for next-gen RDNA3+ or RDNA4 parts and hoping that AMD figures out the issues with their new architecture in regards to VR.
The desktop 7600, based on the same silicon, is about 2.5x faster than the 780M iGPU depending on memory configuration, according to the rough figures from TechPowerUp's relative performance charts (TechPowerUp). With the 7700S being power-limited quite a bit more than the desktop 7600, I don't think 2x performance is unrealistic. I'm sure in some memory-bandwidth-sensitive applications, the 7700S will pull way ahead though.
edit: actually, I just realized that TechPowerUp has some data for the 7700S directly (TechPowerUp), and it should actually be very close in performance to the desktop part according to them. That would put performance at about 2.2-2.5x the 780M, but it will obviously depend a lot on the workload since there are so many differences between them.
TechPowerUp's performance estimates (when they haven't directly reviewed a GPU) are based on theoretical compute and overall architecture scaling. I would take them with a grain of salt. The real figure can be better or worse.
That's definitely true. I think the numbers seem reasonable though (at least comparing the 7700S to the 7600), if AMD's quoted "game clocks" can actually be sustained at 100 W.
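As a sanity check on that, here is the naive FP32 throughput ratio from published shader counts and clocks (assumed figures: 2048 shaders at ~2.2 GHz for the 7700S, 768 shaders at ~2.7 GHz for the 780M; this ignores RDNA3 dual-issue, which applies to both, and the dGPU's dedicated-memory-bandwidth advantage):

```python
# Naive FP32 throughput: 2 ops per FMA x shader count x clock (GHz) -> TFLOPS.
def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

rx_7700s = tflops(2048, 2.2)  # ~9.0 TFLOPS, assuming ~2.2 GHz holds at 100 W
r_780m   = tflops(768, 2.7)   # ~4.1 TFLOPS, assuming ~2.7 GHz sustained

print(f"7700S: {rx_7700s:.1f} TFLOPS, 780M: {r_780m:.1f} TFLOPS, "
      f"ratio: {rx_7700s / r_780m:.1f}x")  # ~2.2x, in line with the estimates above
```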
I'm not sure higher-end AMD options would necessarily be worse; I only found one YouTuber with stutters (apart from Oculus software issues), and he didn't mention those in his follow-up, so I don't know.
But I'm still very much on the fence. If the performance is about on the level of a desktop 3060, that would be plenty for me, and I'd probably get it even if it's only ~2.5x faster for so much more TDP. But I definitely need more info from Framework.
There have definitely been issues with the Navi 31 cards in VR, and I haven't seen any confirmation of them being ironed out. If you check out any of the results for VR games in this review (The Hellhound RX 7900 XTX Takes on the RTX 4080 with 50 VR & PC Games - BabelTechReviews), you can see how the 7900 XTX is not only significantly slower than the RTX 4080, which it would normally tie or beat in flat-screen games, but the frame time plots are also much less consistent.
As for whether the card is "worth it" over the iGPU, I think that even if it isn't a sensational performance uplift, it does cross an important threshold where 1080p gaming in modern titles is easy with just a bit of tweaking, versus being greatly compromised or locked to 30 fps on the iGPU.
Nice, thank you. Good to know. It also seems it's not necessarily just memory latency from the larger package with the chiplet design. At least if the last SiSoft Sandra benchmark is to be believed, which puts it a bit higher than the RTX cards, but even slightly lower than the previous-gen 6900 XT (though higher than the 6800 XT). So there's still hope it's a driver bug. Also, only the OpenVR-based synthetic VR benchmark picked up on that, which could be a clue.
True, though I'm really only interested in VR and maybe older flat-screen games. But I'm strongly considering it.
Sadly, I'd likely have to buy both the shell and the GPU. I do want to experiment with the shell as well; it seems relatively easy to make custom PCBs for it.
Hah, just 10 hours ago the YouTuber I mentioned said AMD fixed the frame time issues.
So it really seems those were driver issues, and I wouldn't doubt higher-end AMD mobile GPUs will do just fine in VR.
Most likely I'll wait for a higher-end AMD dGPU option, since the 780M is really just fine for flat-screen games (especially with FSR to cover the high internal screen resolution).
Oh, well that's wonderful news! I guess I was just being too pessimistic about the possibility of a driver fix. Sorry for the FUD about RDNA3 then.
If it took this long to fix, it was a genuine concern. Glad to see it fixed, though.