Why Framework is the best laptop manufacturing company - and why more VRAM should come next

Considering the latest product update on the Framework 16, I have to say that I am truly impressed. They have addressed basically all of the important issues: Nvidia is on board, communication and transparency have improved massively, and there were even price reductions, which makes sense for a new iteration of a product. I am now entirely convinced by the company, and I don’t say that lightly.

So this thread is about proposing some priorities for the future, and I would like to start with one which is about the GPU.

Now, I believe Framework staff are extremely tired of hearing this, but I wanted to discuss sustainability as an objective of Framework, because I have to admit that I don’t understand it. Why is Framework integrating mid-tier GPUs in their notebook, limited to 8GB of VRAM, if they are about sustainability? The one thing that is going to be useless soon and headed for a landfill is an 8GB GPU. It does not make sense.

I know that Framework did explain the reason for it in the Q&A video: “we didn’t want the module to be bigger.” That is fair enough. But in terms of consumer feedback, I would like to stress in all honesty that Framework should please not underestimate this issue. It has been brought up a LOT, and it was basically the first substantive question in the Q&A. People were burning to know the answer.

So the point of this thread is to emphasise this issue as a priority for the future. People would like to have sustainable GPUs - GPUs that are useful in 5 years.

I think more VRAM is the last major piece missing in the puzzle.

8 Likes

Couldn’t agree more. 8GB is beginning to be tight even for gaming, and for any serious ML/AI work it’s just pitifully inadequate.

After my initial enthusiasm and preordering the updated FW16, I’m now considering pulling out and going back to “regular” brands. What’s the point of sustainability and upgradability if there are no suitable upgrades?

1 Like

I could be making false assumptions here, but I believe they’re limited to a total system draw of 240W. My guess is that the GPUs that fit within that power budget are the mid-tier ones, which come with a manufacturer-prescribed VRAM amount.

From a financial standpoint, if you only have the resources to develop one GPU module, the mid-tier is probably the safest bet as well.

Just my guesses.

1 Like

Yeah, no.
The Asus Zephyrus has a 5090 as an option, and it comes throttled to well below 240W.
Same with the Razer 16, the Acer 16, and many others.

1 Like

Are you talking about GPU power or total system power? I’m talking about total system power and what the USB-C connection is rated to provide.

1 Like

You could always supply the GPU separately, via USB-C or even a dedicated barrel connector. But then thermal management and heat dissipation become a real challenge. An external GPU over OCuLink or similar is the way to go for that kind of power-hungry computing.

1 Like

What’s the distinction?
They limit both GPU power and CPU power to stay under the limit.

You were talking about GPU power limit though, so let’s focus on that.
Asus Zephyrus G16 is limited to 120W across all GPU models (5060-5090).
Razer 16 is limited to 165W on 5090.
Acer Helios 16 is limited to 175W on 5090.

Each of these manufacturers has configured their VBIOS to limit the TDP of the GPU so the total system power consumption stays under the spec for their laptop. The range is fairly wide.

And manufacturers are well aware that VRAM is the commodity here, not the actual number of tensor cores. Hence, in a lot of cases, a move from 5070 to 5090, or even moreso 5080 to 5090 is not about framerates, it’s about available VRAM.
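
To make that concrete, here’s a quick sketch of the mobile RTX 50-series VRAM tiers (figures from public spec listings; treat them as approximate and double-check current SKUs):

```python
# Sketch: VRAM per tier in Nvidia's mobile RTX 50 series (public spec
# listings). The TGP each laptop actually runs at is set by the OEM in
# VBIOS, as the 120/165/175W examples above show.
VRAM_GB = {
    "RTX 5070 Laptop": 8,
    "RTX 5070 Ti Laptop": 12,
    "RTX 5080 Laptop": 16,
    "RTX 5090 Laptop": 24,
}

for model, vram in VRAM_GB.items():
    print(f"{model}: {vram}GB VRAM")
```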

Finally, I’d like to draw attention to the fact that the Asus Zephyrus G16 is comparable to a MacBook Air in thickness and weight, yet can be configured with a 5090 with 24GB of VRAM. Technically, I’d get more mileage out of that laptop - despite it being completely NON-upgradeable, with soldered RAM and only the two M.2 slots being user configurable - than out of Framework’s choice of GPU at the moment.

8GB of VRAM is not ideal, but if I can take Nirav’s word for it (and I have no reason to distrust him on this), they couldn’t feasibly release a GPU with more than 8GB of VRAM, at least with current Nvidia/AMD offerings. The expansion bay limits Framework to GPUs with four VRAM chips (due to size constraints). Both AMD and Nvidia currently use only 2GB modules (besides the mobile 5090, which is still out of the question for the Framework 16, again due to size and TDP), and they strictly limit the available memory configurations. So even though 3GB memory modules are available on the market, Framework (or anybody else, for that matter) can’t use them as a custom solution. Nvidia is expected to use 3GB modules on its mid-gen refresh of the 5000 series, which would allow 12GB of VRAM on a GPU with just four memory chips, but I wouldn’t hold my breath for those GPUs to be available on the Framework 16 any time soon, if at all.
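
To put numbers on that four-chip constraint (a sketch; the chip count and module densities are as described above):

```python
# Sketch: VRAM capacity = memory chip count x per-chip density.
# Four chips is the expansion bay limit described above; 2GB and 3GB
# are the module densities discussed.
CHIPS = 4

for density_gb in (2, 3):
    print(f"{CHIPS} chips x {density_gb}GB = {CHIPS * density_gb}GB VRAM")

# 4 x 2GB = 8GB  -> today's configuration
# 4 x 3GB = 12GB -> possible with a mid-gen refresh on 3GB modules
```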

The distinction is that, with a 100W dGPU (the RX 7700S), the FW16 is already maxed out on its 240W total system power draw budget.

It appears I have not made clear what I was trying to say. The Framework 16 is limited to a 240W total system draw power budget by the USB-C power connection (assuming you exclude temporary battery-boosted power draw). Even the original FW16 with a Ryzen 7 7840HS and RX 7700S was capable of fully utilizing that 240W budget, which is why people have been greatly looking forward to the 240W PSU rather than the 180W adapter the device launched with.
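
For reference, here’s where the 240W figure comes from (a sketch; the 48V × 5A maths is the USB PD 3.1 EPR spec, and the 180W figure is the launch adapter mentioned above):

```python
# Sketch: the USB-C ceiling behind the 240W figure.
# USB PD 3.1 EPR tops out at 48V x 5A = 240W of total system draw
# (excluding temporary battery-boosted draw, as noted above).
VOLTS, AMPS = 48, 5
epr_ceiling_w = VOLTS * AMPS   # 240W hard ceiling over the cable
launch_psu_w = 180             # the adapter the FW16 originally shipped with

print(f"EPR ceiling: {epr_ceiling_w}W; launch PSU: {launch_psu_w}W; "
      f"gap: {epr_ceiling_w - launch_psu_w}W")
```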

Obviously, I would love to see a GPU with more VRAM made available. I’m a data scientist by day and would certainly be able to make use of as much VRAM as they could fit. Lacking the power budget to run a more powerful GPU, though, I’m hesitant to believe it would net a performance gain beyond the benefits of the additional VRAM alone. Such a card would risk entering the same territory as the RTX 3080 laptops that were outperformed in framerate by RTX 3070 laptops because of insufficient heat dissipation (which is why I bought a Razer Blade 15 with an RTX 3070 instead of an RTX 3080 at the time).

You may come to a different conclusion with that same information; I just wanted to make sure that what I was trying to say was understood.

1 Like

Wholeheartedly agree. Even a step up to 12GB of VRAM would have made investing $700 in a GPU upgrade feel a LOT better. I think the 5070 will provide a lot of tangible benefits over the 7700S, which at times can feel like it’s more trouble than it’s worth, but I also appreciate the challenges the team has expressed. It’s hard to imagine that a 5070 Ti constrained to the same power budget wouldn’t have been doable. I know Nirav mentioned a physical limitation and the expansion bay getting too tall, but like… that’s the whole point of this system, right? If it’s always going to be limited to lower-mid-tier cards without a willingness to push the boundaries, it seems far less interesting.

I also wonder if cost was a factor. If a 5070 is $700, I’d imagine a 5090 (with severely limited power/performance) could be upwards of $1,600+. I’m sure a lot of us would be willing to shell out $1,000 for a 5070 Ti, but I bet it would still be a relatively low percentage of customers.

Every laptop is basically a list of compromises, and the GPU bottleneck continues to be near the top of the FW16’s list (at least for how I use mine). I’ve had a lot of other machines in this class (including a G16) and, make no mistake, they all have their list. I still think the FW16 is the most compelling option. We just have to hope that the platform continues to evolve and more hardware choices arrive.

Edit: I definitely don’t want to seem like an ingrate; I am very happy that we now have a 5070 option. We totally could have been served a 5060, and VRAM limitations aside, I think the 5070 is going to make this machine much more capable and well balanced.

What you’ve said makes sense to me. It’s also possible that Framework is trying to set themselves up for success on something bigger in the future. Their approach so far has been a lot of “do a simpler, smaller-scale version of what we really want to do first, and then do the full thing” - it’s why they say they released the Framework 13 before the 16. With the RTX 5070 they added support for bidirectional power, which could in theory open the door to a separate GPU power budget in the future. For a first-gen Nvidia GPU, though, I totally understand why they went the direction they did. If we’re lucky, they’ll take the learnings from that and come out with a real big boy at some point; it’s just not in the cards right now.

1 Like

IMHO the biggest problem is cooling. More power with an extra power supply would be possible, but you need to cool the beast. :flexed_biceps:t2:

Not quite. With the GPU and CPU at full tilt, you can still charge the battery and a couple of USB-C devices. The power budget in the 16 is actually overprovisioned vs. the preset TDPs. There’s room.

Even if there wasn’t - just limit the 5090 to 100W. Problem solved.
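
For what it’s worth, on cards that allow it you can cap the power limit yourself in software. A minimal sketch (laptop VBIOSes usually lock this, which is exactly why OEMs bake the limit into VBIOS as discussed above):

```python
# Sketch: capping an Nvidia GPU's power limit in software. nvidia-smi's
# -pl flag sets the limit in watts (needs admin rights, and only works
# within the range the VBIOS allows - laptop VBIOSes usually lock it).
import subprocess

subprocess.run(["nvidia-smi", "-pl", "100"], check=True)
```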

I don’t really care about the performance gain. And if you look at other laptop makers’ offerings, they don’t either. It’s a VRAM game, plain and simple, with minimal performance gains from a 5070 to a 5080 and nearly no gains from a 5080 to a 5090. Guess what - 5090 models still sell.

Don’t get me wrong, I’d be happy with a 5080 too; 16GB is still better than 8 :slight_smile:

No, I get you, sorry if I sounded aggressive.
It’s just that I don’t really care about the performance of higher-tier laptop GPUs as much as the VRAM - and unfortunately, Nvidia decided to bundle them together.

Again - most slim 16” 5090 laptops have a VBIOS with a hard power limit set, some as low as 120W. You can’t convince me that this somehow works for everyone except Framework users. Also, I’m fairly sure there’s 20W of margin in the FW16’s power budget.

Or let me put it this way - if it works in a laptop with the thickness and weight of a MacBook Air, it’s not self-combusting, it’s running at 120W, and people are buying it, there’s nothing you can say that will convince me this is somehow impossible to implement in a Framework.

I mean, if you’re not concerned about raw GPU performance gains, and thus the power budget, then yeah, I think a GPU with more VRAM definitely makes sense.

I never said it’s impossible. I just said that cooling will be the biggest problem.

1 Like