Desperately Missing

@lhl

Thanks for the info and correction, nice to have some more insight into this topic!

I think it’s pretty obvious that, now that AMD CPU (aka Ryzen) mindshare is basically no longer an issue, they’re trying to solve their AMD GPU (aka Radeon) mindshare issue. Their CPU mindshare works as a sort of “trojan horse” here: the aforementioned CPU+dGPU bundling gets their foot in the door with dGPUs, especially while they currently hold an efficiency advantage on both the CPU and GPU side vs Intel and Nvidia (and of course, efficiency matters far more in laptops than on the desktop).

I mean, even in the more enthusiast market and not just among “normies”, Nvidia mindshare is kind of insane. As people like Tom of Moore’s Law is Dead have noted, you’ll have people in real life asking him about the GeForce RTX 4000 series (which Nvidia hasn’t even publicly acknowledged), yet when Tom asked the same person whether they had any interest in the Radeon RX 7000 series, they didn’t even know what he was referring to.

And for those that haven’t been paying attention to the rumor mill, there’s a very real chance that the Radeon RX 7000 series will basically do a clean sweep against Nvidia, since AMD will be using chiplets for Navi 31 and 32 while Nvidia will still be using monolithic dies.

…and if Nvidia doesn’t figure out chiplets by the time the Radeon RX 8000 series launches, I think we can safely say they’ll be in big trouble: much like server CPUs, GPUs also scale easily with “moar cores!”, and we know how much of an advantage that has given AMD over Intel in the server market.

DISCLAIMER: it’s possible that AMD wanting to solve their GPU mindshare issue isn’t actually obvious, and I simply can’t remember what my stance was before Tom of Moore’s Law is Dead put out a video stating that solving the GPU mindshare issue is one of AMD’s next big goals, beginning with the Radeon RX 7000 series. Per that video, they’re currently laying the groundwork on the software side with the likes of the recently released FSR 2.0, improved H.264 hardware encoding quality, and AMD Noise Suppression, as counters to the Nvidia software technologies that people like to cite as an advantage for Nvidia (the latter two being particularly relevant to the game-streaming crowd).

And for those that are wondering, CUDA-based and/or GPU-accelerated PhysX has been basically dead in the gaming world for a while now, so CUDA’s entrenchment isn’t really a concern in that market. Also, my personal theory is that AMD is looking to use CPU-based accelerators on future CPUs to counteract the likes of CUDA in the long term (see also: their Xilinx acquisition), especially since Intel is also going that route.

BTW, for those interested in a bit more color on the topic, apparently there was an extended discussion on this very topic (I haven’t heard it but will queue it up) on a recent Broken Silicon podcast. Here are some excerpts highlighted by Tom from XMG (who has been refreshingly transparent in communications w/ their community, linked because transcripts): https://www.reddit.com/r/XMG_gg/comments/wc0fhh/comment/iijebtu/


I’ll just tack this on here since it’s related, but I recently stumbled upon this extended discussion w/ Robert Hallock, long-time Technical Marketing guy at AMD (who refreshingly, really knows his stuff), where he talked at length about Ryzen 6000 mobile: AMD tell KitGuru why they beat Intel's Hybrid Approach 💻 - YouTube

A few tidbits that caught my interest:

  • At about 13:00, a discussion on some of the partner/co-design process and timeline required for laptops (1y+, this is similar to some of the things Frank Azor has talked about in the past) and the market release cadence.
  • A mention of AMD’s new Platform Management Framework (PMF), which is AMD’s version of Intel’s DPTF. A bit more info here: AMD Developing "PMF" Linux Driver For Better Desktop/Laptop User Experience - Phoronix
  • 20:40 - interestingly, despite the assumption that LPDDR is faster due to higher MT/s rates, Hallock stated that DDR5 (wide-I/O slotted SODIMMs) is better for certain use cases like gaming due to lower latency (note, this would presumably be for the CPU, not the GPU, where the extra bandwidth matters more: AMD Radeon 680M iGPU: DDR5-4800 vs LPDDR5-6400 Gaming Performance Comparison - YouTube)
  • 93% of the market buys a notebook that is 18mm or less (ultrathin/thin and light)

Dear all,
thanks for contributing to such a lively discussion here.
In the meantime I compared notes with my friends and interviewed them, and I also compared against my old equipment and my expectations.
Results:

  1. An external GPU connected to whatever interface is definitely not(!) an option for a mobile device.
  2. Any additional non-external (dedicated) GPU outmatches CPUs with internal/‘embedded’ GPUs such as e.g. Intel Iris Xe, especially for tasks like rendering, video processing, etc.
  3. I don’t care (for now) about the CPU manufacturer. Intel or AMD, I don’t care, but I need the option to order with an additional dedicated GPU.
    That’s it.
    So, as I’m old and wise ( :slight_smile: ) I will sit back and wait for this. If it’s not available within the next 2 or 3 years, I will buy a non-Framework product (even if this will ‘hurt’ me). But, from my point of view, sustainability doesn’t and shouldn’t mean being happy with what you can get at the moment; it means that currently(!) available products can be replaced with a better approach.
    Good Night, and Good Luck
    Gerhard

Dear all,
I bow my head. Now they’ve done it. Even if, at the moment, the GPU expansion card is only planned for the 16-inch model, all my requirements are met.
My next notebook, for sure, will be a Framework DIY model.
And… if the GPU expansion card ends up working in the 13-inch model, I will sing Hallelujah.
Now I will start saving money to buy a Framework laptop by the end of 2023.
I’m not an early adopter anymore, so I will wait for the feedback…
regards
Gerhard from Germany
