You could use the extra PCIe lanes of the 9x5xHX series for a TB/USB controller. The bigger problem is that the cTDP can’t be lowered to fit the FW16’s 45W CPU power profile; at best you’d be limited to the chip’s 55W minimum TDP. It would work in theory, but it would be loud.
Yes, agreed: TDP is a constraint for the 9x5xHX which FW likely would not be able to overcome.
Edit: Seems like this is true for any chip with the HX suffix. E.g. the 7945HX has the same minimum TDP of 55W.
Well, at least being able to do 15W output on more than one port at a time would be good, so you could plug in two SSDs at once and copy between them without them randomly being disconnected.
The current FW16 can only output 15W to one port at a time.
(sorry to hog the thread, everyone, I promise this is my last message on this)
Just learned that the 7945HX was released in Feb 2023. Since the Laptop 16 doesn’t have an SKU with that chip, it’s probably safe to say that 9xxxHX chips are also out of the question.
AMD announcement May 21st at Computex?
AMD Radeon RX 9000M mobile RDNA4 rumored specs: Radeon RX 9080M with 4096 cores and 16GB memory
https://videocardz.com/newz/amd-radeon-rx-9000m-mobile-rdna4-rumored-specs-radeon-rx-9080m-with-4096-cores-and-16gb-memory
I hope this GPU generation makes it into the FW16!
Ah come on AMD, just because Nvidia does this stuff doesn’t mean you have to too. That 9080M looks like a desktop 9070 XT, probably just with a massively limited power target. This “just slap the lower SKU in a laptop and call it the higher one with an M” is just sad. This time it’s even worse, since there isn’t even a desktop 9080.
If these “leaks” are true, my guess is the Framework would get the 9070M/S at most.
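For what it’s worth, the rebadge math checks out; here’s a quick sketch (the 64-shaders-per-CU figure is standard for RDNA, the core count is from the leak above):

```python
# Why the rumored 9080M looks like a rebadged desktop 9070 XT:
shaders = 4096                      # from the leak above
shaders_per_cu = 64                 # standard for RDNA compute units
print(shaders // shaders_per_cu)    # -> 64 CUs, matching the desktop RX 9070 XT
```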
Yep. If we haven’t seen the 7945HX as a CPU option in two years, most likely due to the thermal envelope, it’s probably the same story for why the 7700S is the card we got. In which case, the GPU with the comparable TDP is indeed the 9070S.
I think there are much bigger reasons there were no xx45 versions in the 16 than just thermals: garbage iGPU, lack of onboard USB4, and much worse idle power.
9070S would still be pretty baller though.
Is RDNA 3→4 a big upgrade? I know they increased the AI/RT side of the chip 2x, but what about traditional rasterization?
Looks like it has much better performance per watt, especially at lower TDPs, which sounds like a pretty good thing for a mobile GPU.
As an owner of a 7900 XTX, I know power efficiency was not the main driver for dedicated RDNA3. XD
I’m judging by what I’ve seen of the efficiency improvements of N4P over the 6nm FinFET process that the 7x00M series was based on, as well as the power draw of the Navi 44 die compared to the Navi 33 die (same number of cores, ROPs, and cache). The stated improvement from N6 to N4P is 40%, but those are TSMC’s numbers, which I don’t really trust, so I cut it in half, since that’s what aligns with the difference between the Navi 44 and Navi 33 max TDPs. Note that all of this assumes AMD won’t be pushing clocks any harder going from 7000M to 9000M than they did on desktop.
My best guess is that the 9080M would draw about 150-180W, which puts it out of the question for the FW16. Although the theoretical maximum GPU wattage is 219W, it’s an unreasonable ask in terms of battery life and charging unless a 240W charger and a complete heatsink redesign come out with it.
I estimate the 9070M XT would draw somewhere around 120-150W, which could theoretically make it possible if FW wanted to do it, but again, it would need a 240W charger.
I’d guess the 9070M would be 100-120W, which could make it a drop-in replacement, and the 9070S would be 75-100W.
I think it would be nice to see some lower-end SKUs to lower the cost of entry as well as improve battery life without sacrificing graphics completely.
I’d guess the 9060M would have a TDP of 60-75W and the 9060S 40-60W.
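For anyone who wants to poke at these numbers, here’s the back-of-envelope arithmetic as a sketch. The ~20% efficiency gain is the halved TSMC figure from above; the 100W / 32 CU baseline for the 7700S and every CU count except the 9080M’s (64 CUs per the leak) are my own guesses, not confirmed specs:

```python
# Back-of-envelope TDP sketch for the rumored 9000M parts.
# Assumption: ~20% effective power reduction (TSMC's claimed 40% for
# N6 -> N4P, cut in half as argued above), scaled from a 7700S
# baseline of 100 W for 32 CUs.
EFF_GAIN = 0.40 * 0.5                     # haircut on TSMC's number
W_PER_CU = (100 / 32) * (1 - EFF_GAIN)    # ~2.5 W per RDNA4 CU

# CU counts are hypothetical except the 9080M (64 CUs per the leak).
sku_cus = {"9080M": 64, "9070M XT": 56, "9070M": 48, "9070S": 40}

for sku, cus in sku_cus.items():
    print(f"{sku}: ~{cus * W_PER_CU:.0f} W before clock/voltage differences")
```

That lands on roughly 160 / 140 / 120 / 100W, in the same ballpark as the ranges above.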
I’m hoping for a lighter 9050M/9050S with something close to a 35W TDP and 6GB, to not hit the battery too hard for video editing. Granted, I know that 8GB is seen as the bare minimum these days, but for the right price it would be a real shot in the arm for people working on battery.
At least AMD adds an M to the name to indicate it’s a mobile GPU and therefore less powerful than the desktop ones. Ngreedia always uses the exact same name as the desktop part to try and pretend it has the same power as an 80- or 90-class desktop GPU.
I doubt a framework dGPU with less than 16GB will be well received.
8GB was acceptable in 2019 / 2020 and 12GB was fine in 2021 / 2022. Today if you want a GPU to still be usable 3 years from now, you need 16GB.
I think it’s a given that there will be a 240W charger offered by framework when they announce the new FW16.
I would be happy with a 150W dGPU… I don’t need more than that. However, I would not purchase a card that maxes out at 120W.
What I want is a dGPU that has been optimized for efficiency at 150W, and that means the RX 9070 cut down to the right number of CUs to be power efficient. And of course it needs to have 16GB of memory.
The RX 9060 would be acceptable in terms of power level, but it is designed to be pushed to the max in the desktop card, so this would never work in a laptop. 32 CUs is just not enough.
It depends on the price. I think a 12GB card around $500 ($400 for the GPU/heatsink, $100 for the expansion bay module), an 8GB card around $300 ($200 + $100), and a 6GB card around $200 ($100 + $100) would work.
No, this would be a battery life nightmare and framework wouldn’t do it.
Do you want bad battery life? Wattage isn’t important, performance is.
It’s cool that you want that, but the enclosure just doesn’t have the thermal headroom to make it happen. Maybe 120W could work, but it’s just not going to happen with Nvidia being extremely controlling and Intel not releasing Arc Battlemage mobile cards.
Objectively untrue. 32 CUs is not enough at a price point over $200 (on desktop; mobile is a little different).
With the current module size, sure. How about they make it bigger, in depth and thickness? The 7700S module is already bigger than the standard fan module, so the 9070M XT module could be bigger than the 7700S’s. We don’t have the same size boundaries as with the USB modules, and nobody said the new GPU has to fit inside the old GPU chassis.
The FW16 without dGPU works very well with a 90W charger. This means that with a 240W charger it is possible to have a dGPU with a 150W power level.
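The arithmetic, as a quick sketch (assuming the 90W figure really covers everything except the dGPU):

```python
# Power budget sanity check (numbers from this thread, not official specs).
charger_w = 240   # hypothetical next-gen charger
system_w = 90     # FW16 without dGPU runs fine on a 90 W charger
print(charger_w - system_w)   # -> 150 W of headroom for a dGPU
```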
I agree that performance is what is needed. I guess it all depends on the needs and use cases.
I don’t need a powerful GPU when I am running on battery… the iGPU is enough. I don’t care about the battery when I am using a dGPU; I am always plugged in when I do that, so having a dGPU that uses 150W is not an issue.
The current enclosure was designed for the current dGPU. If Framework offers a new dGPU, I am sure they can design an enclosure compatible with the FW16 and with enough thermal headroom to make it work. That’s precisely why they designed the enclosure as a separate part and not just a chassis to contain everything.
The rest of your answers are all about the price. I agree that there are different prices that would work and also different needs.
Ideally they could have several options: a 120W dGPU with 12GB at a price level comparable to the current option, and then a 150W dGPU with 16GB at a higher price.
That’s the beauty of the solution: it offers flexibility to address different needs.
I am just not sure Framework is ready to go that far right now.
It’s more about battery life. It’s bad for the battery to discharge so quickly, and with a GPU there isn’t much space for another battery.
Please consider that many people are using this as a portable workstation and that your use case is not the be-all end-all usage of this laptop.
Uh, no. They are intentionally trying to standardize the expansion cards so you can upgrade the GPU without buying a new expansion bay.
I don’t care about battery life when pegging the dGPU. I want maximum powah while plugged in. (With reasonable fan noise as a bonus.) If you want a 35W dGPU, why can’t I get a 150W one?
Edit: disabling or heavily throttling the dGPU when on battery is fine by me. I never launch a game on battery anyway, so my 7700S stays dormant. And a downclocked bigger chip will still give better performance for the same watts, because clocks don’t scale linearly with power.
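A toy model of that last point, with made-up numbers: dynamic power scales roughly with f·V², and voltage has to rise with frequency, so a wider chip at lower clocks can match throughput at noticeably lower power:

```python
# Toy model (illustrative numbers only): throughput ~ CUs * f,
# dynamic power ~ CUs * f * V^2. Voltage must rise with frequency,
# so pushing clocks costs disproportionately more power than adding width.

def power(cus, f, v):   # relative units
    return cus * f * v ** 2

def perf(cus, f):
    return cus * f

chips = [("small chip, high clocks", 32, 1.00, 1.00),
         ("bigger chip, downclocked", 48, 0.67, 0.85)]

for name, cus, f, v in chips:
    print(f"{name}: perf {perf(cus, f):.1f}, perf/W {perf(cus, f) / power(cus, f, v):.2f}")
```

Same performance, roughly 40% better perf per watt for the wider, slower chip in this made-up example.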
This is true. What I think Framework tries to (and succeeds in) avoiding is starving chips of power (or at the very least cooling) like many laptop manufacturers do, where a mobile 4060 can be better than a mobile 4080 if the 4060 has more cooling/power available to it.