No new information on the 16 this time

Btw, an interesting fact is that sometimes configurations with more cores are more efficient than those with fewer cores at the same performance, because the power saved by running at lower clocks (and thus lower voltage) can outweigh the cost of feeding the extra cores.

This mainly applies to high-performance tasks and is usually the other way around at low utilisation. However, combined with something like a MUX switch - i.e. completely deactivating the dGPU in low-power scenarios (or, with this laptop, simply taking the GPU module out) - higher-core-count silicon, e.g. a 9070 powered down to 9060 XT wattage, or both capped at 120W, might be more power efficient under load.
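A toy back-of-the-envelope sketch of why this can work (illustrative Python with made-up constants, not measurements of any real chip), assuming dynamic power scales roughly with cores × V² × f and that voltage has to rise with clock speed:

```python
# Toy model of dynamic power: P ≈ cores * C * V^2 * f, with V rising with f.
# All constants are illustrative assumptions, not data for any real CPU/GPU.

def power_w(cores, freq_ghz, v_at_1ghz=0.70, v_per_ghz=0.15, c_per_core=2.0):
    """Very rough dynamic-power estimate for a given core count and clock."""
    voltage = v_at_1ghz + v_per_ghz * (freq_ghz - 1.0)  # voltage must rise with clock
    return cores * c_per_core * voltage**2 * freq_ghz   # P ~ N * C * V^2 * f

# Same aggregate throughput (cores * GHz), reached two different ways:
few_fast  = power_w(cores=8,  freq_ghz=4.0)   # 8 cores at 4.0 GHz
many_slow = power_w(cores=16, freq_ghz=2.0)   # 16 cores at 2.0 GHz

print(f"8 cores  @ 4.0 GHz: {few_fast:5.1f} W")   # ~85 W in this toy model
print(f"16 cores @ 2.0 GHz: {many_slow:5.1f} W")  # ~46 W in this toy model
# The wider, slower configuration wins because V^2 dominates
# (idle/static power is ignored here, which is why this mostly holds under load).
```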

Well, I guess we do have new information on the Framework 16 :). A couple of thoughts and questions from my side.

Firstly, one has to say it, huge congratulations to the company for actually making upgradeable graphics happen - wow, what a milestone in the laptop industry.

Then, second point: I guess it just is what it is regarding gaming performance. Framework is not Razer - Razer customers will always have a 5090 option in the same form factor - while Framework doesn’t seem to have a high-end objective, which is fair. But they actually are in touch with Nvidia, which means there is “hope” ;). I maintain that 8 GB of VRAM is not helpful for “sustainability”, but that’s another debate.

Love the other innovations, 240w charger - wow.

A couple of questions:

If I purchase the new mainboard but keep gaming with an external GPU via OCuLink - how much would the new CPU (the higher-end one) increase my fps? Any ideas? (I'm using an RTX 3090 via OCuLink 4i.)

How is the 240w charger useful? Can someone explain to me? If the GPU is 100W and the CPU 45W, who needs it? I am certainly missing something, but I am just not sure why Framework delivers a 240w charger, but not a 175w graphics module ;). What is it for? Just to charge the laptop a bit faster? Who would carry around a brick just to charge a bit faster?


I had the possibly-mistaken impression that that was a Nvidia requirement. If Nvidia allowed more VRAM, they might not sell as many more-expensive desktop cards. :roll_eyes:


Right now, if you play a demanding game on Balanced or Performance mode, the battery drains. A 180W charger doesn’t quite deliver 180W - it’s a bit lower - and then there are efficiency losses, and where Nirav was saying “sustained”, that’s not to say that the parts don’t pull a bit more. And until now, the computer market at large hasn’t supplied a 240W charger that actually plays nice with the Framework 16. Some power-related issue causes stuttering in games, as far as I’ve read (and I’ve seen it first-hand with most of the 140W adapters I’ve tried personally).

So a first-party 240W charger will be great.


180W is what it pulls from the wall; the laptop only gets about 145-150W, if I recall.


A 180W adapter should deliver 180W to the laptop - a good one will allow a few % over before cutoff, like 185W for example. But voltage may “droop” a bit at high current draw, over the cable or just in the power conversion circuits of the adapter. Here’s some low-level testing of the Framework 180W adapter showing it hitting the full 180W on the output: https://www.youtube.com/watch?v=w10htntCKow (it also introduces a bunch of other ideas about how to think about the performance and behavior of power adapters).
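As a rough illustration of how much cable “droop” can matter (the supply profile and resistance below are assumptions for illustration, not measurements of the Framework adapter):

```python
# Toy voltage-droop estimate over a USB-C cable at high current draw.
# The 36 V / 5 A profile and the resistance value are assumptions, not measurements.

supply_voltage_v     = 36.0   # e.g. a 36 V / 5 A EPR profile for a 180 W adapter
load_current_a       = 5.0
cable_resistance_ohm = 0.10   # assumed round-trip resistance of cable + connectors

droop_v     = load_current_a * cable_resistance_ohm       # V = I * R
lost_w      = load_current_a ** 2 * cable_resistance_ohm  # P = I^2 * R
delivered_w = (supply_voltage_v - droop_v) * load_current_a

print(f"Voltage droop over the cable:  {droop_v:.2f} V")
print(f"Power dissipated in the cable: {lost_w:.1f} W")
print(f"Power reaching the laptop:     {delivered_w:.1f} W")
# ~0.5 V of droop and ~2.5 W of cable loss: the laptop sees ~177.5 W of the 180 W.
```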

But consider that the power adapter needs to potentially supply the peak possible GPU power, the peak possible CPU power, and charge the battery, all at the same time. The GPU and CPU power may spike very quickly, too fast for the system to adapt the budgets for the different components. Also, the firmware needs to account for power adapters that don’t quite meet their specs (which are very common, and users expect them to work) by slowly ramping up power usage and backing off if voltage droop is detected.

And finally, the laptop firmware may just be a bit buggy and not very smart. @James3 has published his own fork of the EC firmware which fixes stuttering on 140W adapters and similar issues. It’s not particularly easy to install, but see A call on 240w adapter - #396 by James3
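For a feel of what that ramp-up / back-off behaviour could look like, here is a hypothetical sketch in Python - not the actual EC firmware; the names, thresholds, and step sizes are all invented:

```python
# Hypothetical power-limit control loop, illustrating "ramp up slowly,
# back off quickly on voltage droop". This is NOT Framework's EC firmware.

ADAPTER_RATED_W = 180.0   # what the adapter claims it can deliver
RAMP_STEP_W     = 5.0     # extra load allowed per control tick
BACKOFF_STEP_W  = 20.0    # how hard to back off when the adapter sags
MIN_VOLTAGE_V   = 34.5    # below this (on a nominal 36 V rail) assume the adapter is struggling

def next_power_limit(current_limit_w, measured_voltage_v):
    """Return the power limit for the next tick."""
    if measured_voltage_v < MIN_VOLTAGE_V:
        # Adapter can't sustain this load: back off quickly.
        return max(current_limit_w - BACKOFF_STEP_W, 0.0)
    # Adapter looks healthy: creep upward toward its rated output.
    return min(current_limit_w + RAMP_STEP_W, ADAPTER_RATED_W)

# Example: an out-of-spec adapter whose voltage sags once asked for more than ~150 W.
limit = 100.0
for tick in range(12):
    voltage = 36.0 if limit <= 150.0 else 33.8   # toy adapter model
    limit = next_power_limit(limit, voltage)
    print(f"tick {tick:2d}: limit {limit:5.1f} W, adapter voltage {voltage:.1f} V")
```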


Thanks for the detailed correction. I thought someone had measured it and, from the “laptop” end, it was only pushing 150W. But, lots of threads - I am most likely wrong :slight_smile:

The FW 180W PSU advertises (via USB PD CC messages) that it can peak to 240W for 1ms.

Even though the CPU says it uses 45W and the GPU 100W, they can peak to far higher than that.

For example, a FW16 has been measured peaking at 450W for less than 1ms. The FW16 takes a sip of power from the battery during these very short peaks.
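A quick sanity check on those numbers (assuming the FW16’s 85Wh battery and that the adapter keeps covering around 180W during the spike):

```python
# How much energy does a ~1 ms, 450 W spike actually pull from the battery
# if the adapter is covering ~180 W? (85 Wh battery assumed.)

spike_power_w    = 450.0    # measured peak mentioned above
adapter_power_w  = 180.0    # what the adapter keeps supplying
spike_duration_s = 0.001    # "less than 1 ms"

energy_from_battery_j = (spike_power_w - adapter_power_w) * spike_duration_s
battery_capacity_j    = 85.0 * 3600  # 85 Wh expressed in joules

print(f"Energy per spike:    {energy_from_battery_j:.2f} J")
print(f"Fraction of battery: {energy_from_battery_j / battery_capacity_j:.1e}")
# ~0.27 J per spike, i.e. under a millionth of the battery - genuinely just a sip.
```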


I’ve never seen 180W in any of the USB-C hardware power monitor things I’ve plugged into the laptop… chalking that up to conservative behavior laptop-side is probably right I guess, though. (I wouldn’t know.)

If you were wrong, I don’t think you were very wrong.

I love that guy.


You aren’t wrong. I can only get the 180W PSU to max out at around 163W when trying to push it. And if I launch a demanding game (Helldivers 2, for example) which pushes the GPU to 100% and the CPU to around 75%, then launch stress to max out the CPU as well, it will just throttle everything down to about 60W, which is pretty ridiculous.

And when playing heavy games for hours, even with the PSU “maxing” out at 164W, the battery slowly drains and the PSU becomes too hot to touch.

Some here may tell you it is “designed” for this or that, but unless they have the benchmarks and testing to prove it, take it with a grain of salt.

Time to update the title :relieved_face:


Guys, the context is here. :slight_smile:


Nvidia would need to support 24 Gbit (in other words, 3 GB) modules, or clamshell/double-stacked modules, with the 5070, and then it’s mostly about drivers.

The problem can also be “solved” on the hardware side (but would still need driver support), but that would mean a complete redesign, or using a better chip in the first place, which hurts their margins: the 5070M has a “physical” memory interface of 128 bits, and each memory module uses 32 bits, so the maximum is 4 chips. If someone wanted to use more chips, they could only stack another 4 chips on the other side of the PCB to share the same bus, which is both expensive and bad for compact devices. Otherwise Nvidia would have to give the card a wider memory interface, for example 192 bits (6 chips) or even 256 bits (8 chips), but as stated before, that’d mean a costly (millions of dollars in R&D) redesign.
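The arithmetic behind that, as a small sketch (chip sizes and bus widths taken from the post above; these are just the possible combinations, not a product list):

```python
# Possible VRAM totals given the bus width, the 32-bit interface of each
# GDDR module, the chip density, and whether chips are clamshell-mounted.

CHIP_BUS_BITS = 32  # each GDDR module occupies 32 bits of the memory bus

def vram_options(bus_width_bits, chip_gb_options=(2, 3), clamshell=False):
    chips = bus_width_bits // CHIP_BUS_BITS
    if clamshell:
        chips *= 2  # a second set of chips shares the bus from the other side of the PCB
    return {f"{gb} GB chips": chips * gb for gb in chip_gb_options}

print("128-bit bus:           ", vram_options(128))                  # 8 or 12 GB
print("128-bit bus, clamshell:", vram_options(128, clamshell=True))  # 16 or 24 GB
print("192-bit bus:           ", vram_options(192))                  # 12 or 18 GB
print("256-bit bus:           ", vram_options(256))                  # 16 or 24 GB
```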

Anyone else having USB issues? For example, if you 3D print and swap the same SD card back and forth, or do embedded development where a device goes on, off, on, off…

With AMD stuff it doesn’t work well. It gets slower and slower with each detachment, regardless of software or hardware.

Not impressed.

welp, that aged well


Never trust a random person on the internet. :stuck_out_tongue:


Regarding how Framework adapted the design for the reduced number of PCIe lanes available:

Apparently in the first generation Framework Laptop 16, the m.2 2280 and m.2 2230 SSD slots both provide PCIe 4.0 x4 NVMe (x4 = 4 lanes each).

In the second generation Framework Laptop 16, the m.2 2230 slot provides PCIe 4.0 x2 NVMe (2 lanes).

Note: “The secondary storage interface supports x2 PCIe 4.0. SSDs with x4 PCIe 4.0 will run at slower speed.” [https://frame.work/laptop16?tab=specs]
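To put that in throughput terms (rough math; PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding, before protocol overhead):

```python
# Rough usable bandwidth for the secondary SSD slot at x2 vs x4.

GT_PER_S = 16            # PCIe 4.0 transfer rate per lane
ENCODING = 128 / 130     # 128b/130b line-coding efficiency

per_lane_gb_s = GT_PER_S * ENCODING / 8  # gigabytes per second per lane (~1.97)

for lanes in (2, 4):
    print(f"PCIe 4.0 x{lanes}: ~{lanes * per_lane_gb_s:.1f} GB/s before protocol overhead")
# x2 ~ 3.9 GB/s, x4 ~ 7.9 GB/s - the x2 slot still outruns most 2230 drives.
```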


Unfortunately, yes. I posted this somewhere else on these forums for a couple of other folks as well. I was a bit disappointed to learn that, as I use both the 2280 and 2230 slots on my current 7940, plus the expansion bay with an additional 2280. I plan to swap to the AI 370 when it drops (pre-ordered) and have already moved to one larger 2280 and consolidated the drives, since the 2230 slot will be reduced. Good thing I have the expansion bay, so I can come to terms with still having the additional storage. But I feel for the folks who will be using the dGPU and are limited (internally, anyway) because of it.


I mean, PCIe 4.0 x2 is still almost 4 GB/s; I would not call that the end of the world, especially for a 2230, which tends to be on the slower side since it has to use fewer, denser flash chips. Still, having the full width would have been nicer.


Yeah, I have had similar things told to me here…