It would be nice to have the option.
Do you think the 5090 would like to be limited to 100W?
The odds are near 0%. The PCIe interface is limited to PCIe 4.0 x8 on the FW16 interposer, and the maximum power with the Framework charger is below 100W sustained.
Also, the odds of Nvidia parts showing up in an upgradeable laptop are, well, near zero. As the Framework is an AMD Advantage design, the next-generation GPU will be an AMD GPU again, but we have to wait and see what uplift AMD provides with the next generation.
edit:
And Framework wants to stay open-source/Linux friendly, so no, not a chance for Nvidia.
I heard that Nvidia licensing prevents a company like Framework from making their own PCB form factor with an Nvidia GPU chip on it.
I think Nirav, Framework’s CEO, mentioned it in a video some time ago.
But please don’t take this as fact, because I cannot find a link to the video in question that would back up my comment above.
If anyone else can find the video, please post a URL here.
Where is this cited? Since it’s a 180W charger, I’ve been assuming it’s 180W sustained. Is this not true?
Well, you do need to save some power for the rest of the laptop.
But now that 240W PD power supplies are available, we are no longer limited to the 180W one. I think the bigger issue in a laptop is cooling. Every watt of power needs cooling. If you want to run at 180W, you need to be able to get rid of 180W of heat fast enough. People sometimes overlook the cooling side.
But on the topic of the 5090 or any Nvidia GPU: not a chance, as others have said. Nvidia won’t allow it. They view it as a threat that would cannibalize desktop GPU sales.
It would be cool to have an oculink adapter module so we could use that card externally.
If you haven’t seen, a forum member has been working on making one.
The FW16 is not able to sustain its battery charge with the stock charger when the GPU is pulling 100W. In max performance mode, with the system fully utilized, it pulls 15-25W from the battery. That’s the issue. In balanced mode the GPU pulls at most 70W, and then it holds its charge.
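A rough back-of-envelope shows where that 15-25W can come from. The 180W charger, 100W GPU and 65W CPU peak figures are from this thread; the rest-of-system draw and conversion efficiency are just my assumptions:

```python
# Why the battery drains under full load on the stock charger (rough sketch).
# Charger, GPU and CPU figures are from this thread; rest-of-system power
# and conversion efficiency are assumptions for illustration only.

charger_w = 180        # stock charger rating
cpu_w = 65             # CPU peak/boost in max performance mode
gpu_w = 100            # dGPU sustained power
rest_w = 25            # screen, RAM, SSD, fans, USB (assumed)
efficiency = 0.95      # assumed loss in the charging/power path

available_w = charger_w * efficiency
demand_w = cpu_w + gpu_w + rest_w

print(f"System demand ~{demand_w} W vs ~{available_w:.0f} W usable from the charger")
print(f"Battery covers roughly {demand_w - available_w:.0f} W")  # in the ballpark of the 15-25 W reported
```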
@MJ1 Well, there is the 240W Delta charger, I have one, but you have to do the math with the parts Framework actually provides.
Sorry, not sure I follow. Why must you?
Does the stock 180W charger overheat and is unable to provide 180W sustained, or is the FW 16 currently unable to dissipate 180W of heat fast enough to sustainably pull it from the charger?
No, it’s pulling the full 180W, but 180W into the laptop is not 180W of performance for the CPU and GPU. You also need to power the screen, RAM, SSD and every peripheral. The CPU can pull 65W peak burst and 54W sustained, and the current GPU 120W peak burst and 100W sustained. The GPU never exceeds 75°C at 100W sustained, and there is enough space to put a better cooling solution into the dGPU module, but it is sufficient for the current GPU.

The interposer can withstand up to 10A at 20V, so a 200W peak-burst GPU would be possible, but it is limited by the PCIe 4.0 x8 interface (see the Framework 16 Connectors Deepdive). The 7700S is currently the best available AMD GPU for this interface. There is a 7800M circulating in the OneXGPU 2 (on paper it’s the same GPU as in the PS5 Pro) with a 180W TGP. It is a possible upgrade, but for now it’s a paper tiger. We will see what the next AMD GPU generation holds in the future.
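To put those figures into numbers: the 10A/20V interposer rating and the CPU/GPU power limits are from the post above, while the rest-of-system draw is an assumption on my part.

```python
# Interposer power ceiling and the sustained power budget on the stock charger.
# 10 A / 20 V, 54 W CPU sustained, 100 W GPU sustained and the 180 W charger
# are figures from this thread; the "rest of system" number is assumed.

interposer_limit_w = 20 * 10   # 200 W ceiling for a future dGPU module

charger_w = 180
cpu_sustained_w = 54
gpu_sustained_w = 100
rest_w = 20                    # display, RAM, SSD, fans, peripherals (assumed)

slack_w = charger_w - (cpu_sustained_w + gpu_sustained_w + rest_w)
print(f"Interposer can deliver up to {interposer_limit_w} W to the GPU module")
print(f"Sustained budget leaves only ~{slack_w} W of slack on the 180 W charger")

# A hypothetical 180 W TGP part (like the 7800M mentioned above) would exceed
# even a 240 W charger once the CPU and the rest of the system are added:
print(f"180 W GPU + {cpu_sustained_w} W CPU + {rest_w} W rest = {180 + cpu_sustained_w + rest_w} W")
```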
That makes a lot more sense, thanks for the clarification. That’s how I had been assuming it worked.
Once you have a dGPU, the PCIe speed is not so important, because so much of the gameplay physics etc. tends to be done on the GPU now, leaving not so much for the CPU to do.
See this example of a Raspberry Pi with a dGPU:
A better CPU obviously helps, and having only 8 PCIe lanes on the FW16 dGPU instead of the usual 16 might not cause as much degradation as one might think.
For example, when diagnosing a crash problem recently, I dumped the shaders off the dGPU; there were 30 million different shaders. They were doing all sorts of things in addition to processing image data.
Bandwidth does make a measurable difference, but I agree that many people overestimate how large this difference is.
Well, I run an RX 6800 XT over a PCIe 4.0 x4 / USB4 40Gbit eGPU interface on my Framework. That’s about a 10-15% hit on performance, and a bigger hit on 1% lows when it runs into bandwidth limitations.
But a manufacturer will not pair a GPU with an interface narrower than what the GPU maker specifies for it. Framework will only produce dGPU modules where the GPU manufacturer defines a PCIe 4.0 x8 interface.
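For a rough sense of the raw link bandwidths being compared here: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, and the USB4 number below is the headline link rate, with real eGPU tunnel throughput being lower. This is just a sketch of the theoretical figures, not measured numbers.

```python
# Theoretical bandwidth of the interfaces mentioned in this thread.
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding. The USB4 value is the raw
# 40 Gbit/s link rate; actual PCIe tunneling over USB4 delivers less.

PCIE4_LANE_GBIT = 16 * 128 / 130   # ~15.75 Gbit/s usable per PCIe 4.0 lane

links = {
    "PCIe 4.0 x16 (desktop card)": 16 * PCIE4_LANE_GBIT,
    "PCIe 4.0 x8  (FW16 dGPU interposer)": 8 * PCIE4_LANE_GBIT,
    "PCIe 4.0 x4  (OCuLink-style eGPU)": 4 * PCIE4_LANE_GBIT,
    "USB4 40 Gbit (eGPU, before overhead)": 40.0,
}

for name, gbit in links.items():
    print(f"{name}: ~{gbit / 8:.1f} GB/s")
```

Which lines up with the experience above: x8 still has plenty of headroom once assets live in VRAM, while the USB4 path is where the 1% lows start to suffer.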
Edit: I am rank 7 worldwide on Time Spy with my R7 7840HS and RX 6800 XT combo.
Nice! Best I’ve done is hitting #105 back when the 3090 came out.