Yeah, obviously everyone has their own ideal use cases, but it seems like an OCuLink bay would be more generally useful than a (literally) high-power GPU that requires its own power brick.
- The current RX 7700S has a TDP of 100W. The cooling solution is excellent, though: it holds under 70 °C at max load, so there is some thermal headroom in the existing design.
- The interposer is specced for 20V at over 10A. My guess is that the practical limit for a cooling solution and power delivery (once 240W USB-C chargers exist) lands around 125-150W for the GPU.
- The 7900M has a rated TGP of 180W; cutting it to 150W would still deliver more performance than a 7700S.
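As a quick sanity check on those numbers: the interposer figure is from this thread, and the USB-PD 3.1 EPR ceiling (48V at 5A) is the spec's maximum, so treat this as a back-of-envelope sketch rather than official data.

```python
# Back-of-envelope: P = V * I for the two limits discussed above.
# The interposer figure is from this thread, not an official spec.

def power_w(volts: float, amps: float) -> float:
    """Electrical power in watts."""
    return volts * amps

interposer_w = power_w(20, 10)  # interposer: 20V at (over) 10A
epr_max_w = power_w(48, 5)      # USB-PD 3.1 EPR ceiling: 48V, 5A

print(f"Interposer: >{interposer_w:.0f} W")  # >200 W
print(f"USB-PD EPR max: {epr_max_w:.0f} W")  # 240 W
```

So the interposer itself allows somewhat more than today's 180W chargers can deliver, and the hard wall is the 240W PD ceiling.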
We'll see how things roll out, but my guess is that Framework is more focused on future-gen support than on actually upgrading the 7700S… that would mean redesigning the cooling solution and the dGPU shell. My bet is that they're "waiting" for a hypothetical 8700S with a TGP around 100W so they can offer that as the upgrade.
Regardless, if they offered a 7900M I'd buy it on the spot. Modularity means big GPU + shell for my use cases: the bigger the GPU, the better. I can always swap it out in 120 seconds.
Even if lowering the wattage works out, with the current cooling solution barely able to keep up with 100W, that's going to be a huge hurdle and will probably take quite some time to overcome.
I share that view.
Yep, that's what I meant by "redesigning the cooling solution". They chose the best possible GPU for the constraints they had, and my guess is they'll stick to this performance class.
One thing I'd LOVE to see from them is an external GPU dock for Framework dGPUs. All it takes is someone smarter than me to design a PCIe-to-interposer adapter, but we're already seeing the first small steps.
Ignoring everything else: a 180W GPU plus a 45W CPU on a 180W charger would drain the battery faster than "just slowly", even in a power mode where the CPU draws less. I would expect two hours at most unless you use a 240W charger.
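A rough worked version of that estimate. The 85 Wh battery capacity is my assumption, and the sketch ignores conversion losses and the rest of the system's draw:

```python
# Sketch of the "about 2 hours" estimate above. Assumes an 85 Wh
# battery; ignores conversion losses and the rest of the system.
gpu_w, cpu_w, charger_w = 180, 45, 180
deficit_w = gpu_w + cpu_w - charger_w   # shortfall covered by the battery
battery_wh = 85
hours = battery_wh / deficit_w
print(f"{deficit_w} W from the battery -> flat in ~{hours:.1f} h")
```

A 45W deficit drains an 85 Wh pack in just under two hours, which lines up with the estimate.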
Technically a 180W GPU could be supported: 180W + 45W (CPU) = 225W for CPU+GPU, leaving a little for the rest of the system, so it could be doable with a 240W charger. But I don't think they'll take that route. I doubt they'll release a GPU over 150W, since 150W is what high-end laptop GPUs usually ship at these days. It varies, I know, but many RTX 4090 Mobile parts run at a 150W TGP.
So unless one overclocks it, 150W is my guess. That leaves more energy for the rest of the system and makes a 16-core CPU feasible.
My bet is on the RDNA 4 "top die". RDNA 4 seems to be targeting the midrange (no big chips), landing around an RTX 4080 on desktop.
If you want high end on desktop, RDNA 4 may not interest you, but in laptops it's another story: the RTX 4090 Mobile uses the AD103 die, the same one inside the desktop RTX 4080. It doesn't perform like a 4080, more like a 4070/3090 due to power constraints, but it's the same chip.
If the biggest RDNA 4 die is planned to be around an AD103 (the 4080 die), we could see the best RDNA 4 die in laptops fighting in the high end, potentially even beating a mobile RTX 4090.
So I would like to see other vendors, but for the next round of FW 16 GPUs, what I expect is two AMD offerings:
One for the midrange (100W TGP), an 8700S or something;
And a high-end option (150W), an 8000-something, whatever they call it.
And maybe, just maybe, after that generation I see it as possible that Nvidia gets on board.
I don't see Intel here for a while, even if (hopefully) they stay in the GPU market.
I think there would be too little power headroom for the system. PSUs aren't 100% efficient, so 240W is the best-case scenario. Furthermore, the 240 - 225 = 15W of headroom could itself be eaten by two SSDs and RAM. Using a 150W GPU might be possible, though, with a better chance of success.
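The arithmetic, with an illustrative lump sum for the non-CPU/GPU draws (the 15W for SSDs, RAM and the rest is a guess, not a measurement):

```python
# 240 W budget check; the non-CPU/GPU draw is a rough guess.
charger_w = 240
loads = {"GPU": 180, "CPU": 45, "SSDs + RAM + rest": 15}
headroom_w = charger_w - sum(loads.values())
print(f"180 W GPU: {headroom_w} W headroom")  # 0 W, nothing to spare

loads["GPU"] = 150  # the more conservative option discussed
print(f"150 W GPU: {charger_w - sum(loads.values())} W headroom")  # 30 W
```

With a 180W GPU the budget is exactly spent before any margin for losses; at 150W there is a plausible cushion.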
That said, the Framework Laptop 16 has some great days ahead!
(and a lot of us are hoping for higher-end GPUs ^^)
I'm not sure about laptop PSUs, but for standard ATX PSUs the specified wattage is for the secondary side: a 600W PSU will put out 600W (even a bit more for a short duration) while drawing about 675W from the socket at 89% efficiency. Those ~75W are converted to heat.
I think the thermal design is already close to its limit.
My bet is that the next GPU will be in the 100-125W range to avoid overwhelming the bay's heatsinks and the PSU. We'll still get a mid-range card, IMHO.
But let’s keep in mind that mid-range 2023 <<< mid-range 2025/6
All of the PD power supplies I have ever used work like that too.
What hasn't been mentioned yet, however, is VRM losses: the CPU and GPU power limits are based on their respective power rails, not on the battery/PSU rail.
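To put numbers on both loss stages: the 89% PSU efficiency is the ATX example from earlier in the thread, while the 90% VRM efficiency is purely an illustrative assumption.

```python
# Two conversion stages: wall -> PSU/charger output, and
# system rail -> CPU/GPU rails via the VRMs.
# 89% PSU efficiency is from the ATX example above; 90% VRM
# efficiency is an illustrative assumption.

def input_power(output_w: float, efficiency: float) -> float:
    """Upstream power needed to deliver output_w downstream."""
    return output_w / efficiency

wall_w = input_power(600, 0.89)
print(f"600 W out -> {wall_w:.0f} W from the socket "
      f"({wall_w - 600:.0f} W as heat)")

rail_w = input_power(45, 0.90)  # a 45 W CPU limit, measured at its rail
print(f"45 W CPU limit -> {rail_w:.1f} W from the system rail")
```

So a "45W" CPU limit can cost the battery or charger noticeably more than 45W, which tightens the headroom math further.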
The GPU thermal design is also completely replaced when swapping expansion bays.
Very true! But I think that re-engineering the whole thermal solution every generation would be… Expensive.
I really want to see a high-end GPU in this machine, but my hopes are low. If I were Framework, I'd commit to the design of the physical parts and only change the PCB, to maximise profit.
Hope they won't, though.
I think so, too. According to rumors from last year ( https://twitter.com/All_The_Watts/status/1665360849584934912 ), that would include a 7800S (120W) or maybe a 7900S (135W). And then we'll be buying new bags, because we bought ours with the 7700S in mind, while it's quite possible that the expansion bay will grow with bigger chips.
Completely re-engineering everything every generation would be expensive, yes. However, I wouldn't be surprised if they introduced a larger module with capacity for 150W of cooling and continued to use that (in parallel with the current 100W module).
Also, the portion of the thermal solution that touches the GPU die does need minor re-engineering in some generations, since dies differ in shape, size, and thickness. For example, Framework engineered a slightly tweaked thermal solution for the AMD Framework 13 motherboards because it makes slightly better contact (and therefore better cooling) with the AMD CPUs, but worse with the Intel CPUs.
I’m no field expert, but your reasoning is sound to me.
Anyway, no cost beats some cost: even small changes to the heatsink module would require diversifying manufacturing and/or opening new lines to produce the new variant, etc.
I'm excited about a potential 7900S, though. Crossing my fingers!
Under that premise, I at least hope they'll introduce a low-power profile with the next-gen dGPU module that needs less than 80W, in order to address the two main issues with the current dGPU module:

- The power adapter can't supply the whole system under heavy load, so power is drawn from the battery even when plugged in. Under those circumstances the battery doesn't last long once unplugged, which makes mobile gaming a distant dream for the FL16 right now.
- The thermal issues result in loud fans. With more and more of the system management handled automagically, manual undervolting seems to have become complicated.
Ideally, a future-gen dGPU for mobile gaming, drawing less power than the 7700S while delivering better performance, would be the best answer to this problem, but sadly that doesn't seem realistic for at least the next few years.
A built-in undervolting feature / low-power profile would be one way to address these issues.
At least I’d be willing to accept this tradeoff in performance to address the power and noise issues.
Buy a laptop for 2.5-3k and then suffocate it? Don't you think that defeats the point? The company needs to make a 240W power supply unit as soon as possible.
I'd rather call it power optimization than suffocation: most consumer electronics nowadays run more efficiently below their stock power levels, because they usually ship pushed past their efficiency sweet spot to squeeze out the last bit of performance.
There is a growing number of people who prioritize their health and the environment over their laptop's peak performance. It's hardly any fun to use a laptop when you're sick or out of power. You'd apparently rather suffocate yourself than your laptop, just because of its price tag.
Framework wants to be good for the environment, so they have to think about noise pollution and wasted power in the long run, too.
I’d run mine at 80W if I could.
I already limit the framerate and run at a lower resolution to bring noise and thermals down for personal comfort.
If I could just set a BIOS setting to power-throttle the GPU to any hard limit I wanted, it would lessen my need to manually fiddle with game settings, AMD Adrenalin, and Windows to accomplish the same thing.
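On Linux there is, in fact, a software knob for this: the amdgpu driver exposes a power cap (in microwatts) through the hwmon sysfs interface. A hedged sketch follows: the card index and hwmon path vary per machine, and I haven't verified this on an FL16 specifically.

```shell
# Convert a desired cap in watts to the microwatt value amdgpu's
# hwmon interface expects. Paths are machine-specific; untested
# on the FL16 itself.
CAP_W=80
CAP_UW=$((CAP_W * 1000000))
echo "$CAP_UW"   # 80000000

# Then (as root) something like:
# echo "$CAP_UW" > /sys/class/drm/card1/device/hwmon/hwmon*/power1_cap
```

On Windows, Adrenalin's manual tuning offers a power-limit slider that gets at the same idea, though only within the range the vendor allows.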
At this point, massive expansion-bay GPUs should take external power from a second USB-C cable. There's no way GPUs can keep getting bigger while USB-C as a standard is still capped at 240W, and I honestly cannot conceive of a reason why USB-PD would go beyond 240W.