-
See edit. I wish the mods hadn’t moved my post over in its entirety without asking me first :/
-
Fair enough about most people getting the laptop later. I just added that date to contextualize things a bit. Those receiving their devices later should probably save their expansion bay for later graphics card offerings.
-
I’m sorry, but I don’t see your point when it comes to high-end and professional use. With laptops, you pay for the portability and convenience. If you need performance for 2000 [currency units], you’re much better served by a desktop. Let me be clear: if the FW16 doesn’t have enough performance for you, don’t buy it. Buy a workstation or an eGPU instead.
-
Yeah, Nvidia gonna Nvidia. Corporate greed is a hell of a drug. Gamers seem to love supporting it, though!
-
Again, there are plenty of games that can run into an insufficient VRAM scenario without crashing. Far Cry is owned by Ubisoft, which is known for its incompetence. Even then, I wouldn’t give them too much crap for it – you’re modifying game files in a way the developers did not intend. Are you really so surprised to encounter crashes? The fact that it involved VRAM is almost irrelevant considering the heart of the issue here.
Please see my edit. I don’t want the FW16 to be designed for ML stuff, and I don’t think the people who have already sold out 12 batches of it care for ML that much, either!
Yeah, it’s pretty disappointing to see how far we’ve fallen in comparison to the early days of computing. Apollo engineers got us to the moon and back with what amounted to a fucking pocket calculator, and modern gamedevs can’t draw pixels to a screen without 17 layers of bloated game engine cruft.
Best take in the thread.
There is no Navi 33-based GPU with more than 8 GB of VRAM, not even among the workstation cards. You can’t have what AMD doesn’t have.
If you cannot connect those two dots, then you have never been involved in the design and manufacture of an electronics product.
What was available back then would have been on the selection list for a new product, not the parts that come to market a month or two before delivery. Decisions have to be made about what to use, manufacturing and delivery contracts arranged (batch sizes, deliveries per month), and the chosen part designed into the end product, with a considerable amount of regression testing over the development and testing period.
Hahaha… the truth.
Why 100W?
The expansion bay connector is designed for 210 W on the 20 V line (plus 28.1 W on the other lines), and the expansion bay itself was designed to let a GPU module be as big as it needs to be for adequate cooling. So Framework could potentially have offered the AMD RX 6800M (which has 12 GB of VRAM and a TDP of 145 W).
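To put rough numbers on that (a quick sketch; the 210 W and 28.1 W figures come from the connector spec cited above, and the module TDPs are assumptions based on published specs):

```python
# Expansion bay power budget vs. candidate GPU TDPs (rough sketch).
BAY_20V_W = 210.0    # 20 V line, per the expansion bay connector spec
BAY_OTHER_W = 28.1   # the remaining lines
BAY_TOTAL_W = BAY_20V_W + BAY_OTHER_W  # 238.1 W ceiling

modules = {
    "RX 7700S (the shipping module)": 100,  # assumed 100 W TGP
    "RX 6800M (hypothetical module)": 145,
}

for name, tdp_w in modules.items():
    print(f"{name}: {tdp_w} W TDP, {BAY_TOTAL_W - tdp_w:.1f} W of headroom")
```

So even a 145 W part would sit comfortably inside the connector’s envelope on paper; cooling and the power adapter are the real constraints.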
Interesting perspective.
So, if a card with more VRAM were also offered - we see many video card options available for the desktop, for example - would you be “stuck paying” for that option even if you didn’t want it?
For me - and I really didn’t get the impression I was an exception here - the major appeal of the FW16’s design is its modularity and the potential for different options for components like the dGPU. How does asking for more options put those who don’t need them at a disadvantage?
It’s all right that you personally don’t want the FW16 to be designed for ML (although I’m not quite sure which part of such a design you would specifically object to - arguably, the FW16’s design is fully compatible with ML already). But, as someone who is part of those 12 sold-out batches, I do want the option of doing not only ML but any other work that can benefit from GPU-based computation and large VRAM. And I want to be able to do that untethered from either an eGPU or the cloud. Is that not all right as well?
The main point here was simply that the discussion shouldn’t be limited to gaming needs, whether one, as an individual, shares those other needs or not.
My apologies; I have been unclear. I’m completely happy that FW16 is modular, and I completely support the concept of having different expansion bay options for different use cases.
My responses were mainly addressing this idea that a card with an entire 8 GB of VRAM is a waste of time/money for a laptop. Some in this thread are acting as if any graphics card that comes out with less than 16 GB of VRAM is worthless, despite the fact that 8 GB of VRAM is more than enough for plenty of use cases.
With that in mind, I don’t want to be stuck paying for a single 16 GB offering, but I’m more than happy if they come out with an additional 16 GB offering. This response sums up my feelings quite nicely.
If AMD or Nvidia were to announce tomorrow that they’re working on a 48 GB expansion bay option, I’d be 1) very confused as to why they’d cannibalize their enterprise options, but 2) very happy for those in this thread that are so interested.
At the end of the day, everyone in this thread has (or should have) the same underlying goal: a repairable, modular laptop that can cover different use cases and stay out of the landfill for as long as possible.
Well said.
Hi, thanks for the replies.
On 3: yes, desktops are better; I’m still pursuing the dream of a laptop plus eGPU.
On 4: I opted out and won’t upgrade at all until Nvidia somehow wakes up, or AMD saves us ;).
On 5: I agree; in fact, I was surprised I could run it at all. I’m not surprised it crashed; I only brought it up to make the link with the missing VRAM, which I think was the cause.
This matches my opinion to the letter. 8 GB of VRAM is solid for a lot of applications and games, a lot. But some enthusiasts (like me) wish for more in laptops ;). The reasons: turning on HD textures, upscaling the in-game resolution to 200%, and yes, ray tracing (though I’m more sceptical about that, as some here have said).
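To illustrate why a 200% resolution scale in particular chews through VRAM, here’s a toy calculation (assuming the FW16’s 2560x1600 panel; real engines allocate far more than one buffer, so treat this as a lower bound):

```python
# 200% resolution scale doubles each axis, so the pixel count -- and
# every render target that scales with it -- roughly quadruples.
def render_pixels(width: int, height: int, scale_pct: int) -> int:
    factor = scale_pct / 100
    return int(width * factor) * int(height * factor)

native = render_pixels(2560, 1600, 100)
scaled = render_pixels(2560, 1600, 200)
print(scaled / native)              # 4.0x the pixels per frame
print(scaled * 4 / 2**20, "MiB")    # one RGBA8 buffer at 200% scale
```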
Also, there is the RX 6850M XT from 2022; it shipped in the Legion 7 (2022) and was quite competitive with the mobile RTX 3080 in some games, like Red Dead Redemption 2, where it was the fastest laptop GPU until the mobile RTX 4080 and 4090 overtook it.
It had 12 GB of VRAM; still not amazing, but far more future-proof than 8 GB.
Given that there’s only one mobile Navi 31-based model so far, odds are that a 6850M successor with more than 8 GB of VRAM could be on the way.
I don’t think we really need 16 GB, but 12 GB would already have been amazing…
Not directly addressing anyone in the thread, but just adding another POV: I ordered the FW16 not planning to run any ML models on the laptop itself, but rather on the workstation I have specifically for that. I’m planning on running the FW16 mainly with the spacer/fan board in place, but I ordered both because the 7700S can probably manage some work-trip light gaming a lot better than the APU. There’s no technical reason FW can’t ship an RX 7700 module at 200 W with 12 GB of VRAM post-launch, so if you’re really hard up for VRAM, that may still be an option down the road.
Or even better: with RDNA 4.0/Navi 4X coming next year, we might even get 16 GB+ in the same thermal/power envelope the 7700S is in now. Personally, I’d love to drop ~$600-700 on a Navi 43 module from FW with 16 GB, or, if I’m really huffing the good stuff, some HBM.
The RX 7700S is based on the Navi 33 die with only a 128-bit memory bus. This only allows for 8 GB of memory if using a single side of the PCB or 16 GB if using both sides of the PCB in a clamshell configuration, which may be impossible to achieve in an expansion bay module.
If you want 12 GB of VRAM, you will need a GPU based on the Navi 32 die, which has a 256-bit bus that can optionally be cut down to 192 bits.
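For anyone wondering where those capacities come from, the math is just the bus width divided into 32-bit channels, times the per-chip density (a minimal sketch; the 2 GB-per-chip figure assumes the 16 Gbit GDDR6 parts that are common today):

```python
# VRAM capacity from memory bus width (GDDR6 sketch, not a spec).
CHANNEL_BITS = 32     # each GDDR6 chip occupies a 32-bit channel
CHIP_GB = 2           # assumption: 16 Gbit (2 GB) GDDR6 chips

def vram_gb(bus_bits: int, clamshell: bool = False) -> int:
    chips = bus_bits // CHANNEL_BITS
    if clamshell:
        chips *= 2    # a second chip per channel on the PCB's back side
    return chips * CHIP_GB

print(vram_gb(128))                  # Navi 33, single-sided: 8 GB
print(vram_gb(128, clamshell=True))  # Navi 33, clamshell: 16 GB
print(vram_gb(192))                  # Navi 32 cut to 192-bit: 12 GB
print(vram_gb(256))                  # Navi 32, full bus: 16 GB
```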
I know all of that. That’s why I said RX 7700 (Navi 32) and not RX 7700S (Navi 33). Now, if you’d pointed out that AMD doesn’t have any mobile Navi 32 parts, and only one token sorta-mobile Navi 31, I’d have to agree.
But there’s no reason AMD couldn’t package up Navi 32 in a mobile RX 7750 or RX 7800 and ship that with the full memory path.
Would be a perfect solution.
A 200 W or 240 W expansion bay module would require a second power adapter just for the expansion bay, and if Framework sticks with USB-PD, it would require a 240 W USB-PD supply, which nobody appears to be making yet. That GPU also won’t be able to function at a meaningful level when you unplug the laptop and it has to get by on a power budget of 50 W or so drawn from the laptop’s battery, so realistically it would have to be turned off whenever its dedicated supply isn’t plugged in.
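For context on those numbers (a back-of-the-envelope sketch; the 85 Wh battery figure is Framework’s published spec, and the rest follows from the USB-PD voltage/current tables):

```python
# USB-PD envelopes and battery runtime, back of the envelope.
def pd_watts(volts: float, amps: float) -> float:
    return volts * amps

print(pd_watts(20, 5))   # 100 W: ceiling of the standard power range
print(pd_watts(48, 5))   # 240 W: PD 3.1 extended power range maximum

# Running a hypothetical 200 W module from the 85 Wh battery alone
# would be grim, even before the rest of the laptop draws anything:
battery_wh = 85
print(battery_wh / 200 * 60, "minutes")  # ~25 min, GPU alone
```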
That said, I think there is a market for that GPU if Framework decides to make it. It won’t happen until they have the laptop itself and the 7700S GPU in full production, but it’s a product we could see sometime in 2024.
Another possible path would be a partnership with a company that makes GPUs, similar to the deal with Cooler Master for the mainboard case. (We found out in one of the recent blog posts that Framework was already working with Cooler Master on the CPU coolers in the laptops, so that deal didn’t come out of the blue.) MSI and Gigabyte probably won’t be interested because they also make laptops, but a company like PowerColor might be.