Desperately Missing

Haha sorry, it wasn’t my intention to make a dig at AMD. I’m just saying that Framework probably can’t afford a complete mainboard redesign right now (not to mention that AMD is generally less helpful with these things, to my understanding), and there’s really not much reason to demand an AMD board; the demand seems to come largely from blind brand preference (which is fine, it just doesn’t make much sense). Ryzen 6000 chips are also facing severe shortages, to my understanding, so however good the chips themselves are, that sadly doesn’t make them a great choice right now.

If it’s true that they are in contact with AMD, that’s great news though! (Although I haven’t seen anything of the sort.)

2 Likes

Framework is listed as an Intel partner. Considering their size, I don’t think they’d be able to handle that many SKUs.

I would really like a Framework with better iGPUs (though I must say Iris Xe is very good compared to UHD Graphics), but I guess give them time. Otherwise, even an Arc dGPU inside would be nice.

2 Likes

I’m not sure about that. There were quite a few reports on the internet about why specific manufacturers, like Sony with the PlayStation, went with AMD.

And yes, I recognize that in those instances the volume was much higher and the question revolved around custom chip design, but still, to me, it seems to indicate a fundamental willingness to work with their customers and provide support / custom solutions.

I might be incorrect, and things might have also changed since then.

If my memory serves, AMD does custom designs (e.g. the PS5) like NVIDIA does, except AMD is more responsive.

If we’re talking about just getting the chip, Intel would probably be easier. Intel has much better documentation than AMD for drivers and such.

Interestingly, Apple got CPUs from Intel but GPUs (custom ones, if I remember correctly) from AMD, yet never AMD CPUs. That could be because only Intel had Thunderbolt + Quick Sync at the time.

1 Like

Yep, AMD has a track record there: PlayStation, Xbox, Tesla’s MCU-4. Intel, meanwhile, has a track record of turning these down (there was an interview with an ex-Intel CEO stating that he did turn down Sony, which is why they went to AMD). It just feels like AMD’s corporate philosophy is to be open to players that seem small at the time.

As I said, I may be completely off the mark.

1 Like

I don’t think it is based on fanatical preference.

AMD continues to hold the efficiency crown and was dominant in iGPU performance, where Intel has now caught up.

Better efficiency/battery life and a better iGPU were at least my interests; a laptop that goes further without charging is the better laptop, IMO.

1 Like

I disagree; Intel is still massively behind in iGPUs. Iris is a step from completely unacceptable to barely acceptable.

1 Like

In this class (for example, 1165G7 vs. 5700U) the performance is similar from what I have seen, but it still tips toward AMD. Intel is still worse overall but has become more competitive and “caught up” in that sense. The new RDNA 2 iGPU is another leap Intel would need to chase. I just hope we get the best CPUs/iGPUs to choose from in future boards, whoever makes them.

What is your definition of very good? It basically caught up to mobile Ryzen 5000 graphics… Ryzen 6000 made a big leap forward because AMD finally changed the architecture, and Intel essentially forced their hand. Iris Xe is by NO means a bad iGPU if you compare it to others…

1 Like

~15 fps in a 13-year-old graphics benchmark is NOT good by my standards, put it that way.

Make sure that’s due to the actual hardware and not the drivers. Intel’s Windows graphics drivers are a bit notorious (see also: Arc driver issues), and it basically requires using Linux to actually get the most out of what the hardware can do (…which, keep in mind, really isn’t that much, since we’re talking about integrated graphics, whether Intel or AMD).
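If you want to verify which driver/renderer is actually in use on Linux before blaming the hardware, the OpenGL renderer string is the quickest tell. A minimal sketch, assuming Mesa’s `glxinfo` tool (from your distro’s mesa-utils/mesa-demos package) is installed:

```python
#!/usr/bin/env python3
"""Print the active OpenGL renderer/driver; assumes Mesa's `glxinfo` is installed."""
import shutil
import subprocess

if shutil.which("glxinfo") is None:
    raise SystemExit("glxinfo not found - install your distro's mesa-utils package")

# `glxinfo -B` prints just the basic info, including the renderer string.
out = subprocess.run(["glxinfo", "-B"], capture_output=True, text=True, check=True)

for line in out.stdout.splitlines():
    # The renderer/version strings identify the GPU and the driver stack in use.
    if "OpenGL renderer string" in line or "OpenGL version string" in line:
        print(line.strip())
```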

But also remember that dual-channel memory is basically a requirement for any integrated graphics, whether Intel or AMD (with the exception of the lowest SKUs, which tend to be bottlenecked by their minimal shader counts anyway); the rough math below shows why.
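A back-of-the-envelope sketch with example DDR4-3200 figures (exact numbers vary by platform):

```python
# Back-of-the-envelope peak memory bandwidth: why dual-channel matters for iGPUs.
# Each DDR4 channel is 64 bits (8 bytes) wide; DDR4-3200 does 3200 MT/s.

def peak_bandwidth_gbps(mt_per_s: int, channels: int, channel_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return mt_per_s * channels * channel_bytes / 1000

single = peak_bandwidth_gbps(3200, channels=1)  # 25.6 GB/s
dual = peak_bandwidth_gbps(3200, channels=2)    # 51.2 GB/s
print(f"DDR4-3200 single-channel: {single:.1f} GB/s")
print(f"DDR4-3200 dual-channel:   {dual:.1f} GB/s")
# The iGPU shares this bandwidth with the CPU, so a single stick starves it.
```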

I’ve mentioned it elsewhere on this forum, but my friend has a Sandy Bridge laptop, and its integrated graphics was giving him literally low single-digit frame rates in an eduke32 game mod, yet running the exact same thing on Linux Mint 19.x gave around 40 fps.

Also, fun fact: Vulkan on Linux is available for Ivy Bridge and Haswell, but Vulkan on Windows requires Broadwell at a minimum.

Lastly, AnandTech actually just recently benched the iGPU of the 12900KS, which has only 40% of the total shaders of the i5-1240P, though it’s clocked 19% higher and uses DDR5 RAM (I’d argue that the settings used were a bit overly low, however!).
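Rough arithmetic on that comparison (my own sanity check, assuming throughput scales linearly with shader count and clock, which ignores memory bandwidth and driver effects):

```python
# Relative theoretical shader throughput of the two Intel iGPUs mentioned above.
shader_ratio = 0.40  # 12900KS iGPU has 40% of the i5-1240P's shaders
clock_ratio = 1.19   # ...but runs at a 19% higher clock

relative = shader_ratio * clock_ratio
print(f"12900KS iGPU ~ {relative:.0%} of the 1240P's theoretical throughput")
# ~48%, so any parity in the benchmarks would have to come from the DDR5
# bandwidth advantage or from driver/power differences, not raw shader math.
```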

Disclaimer: I’m typing this from a Ryzen 4800U.

Some benchmarks (averages across the laptops tested with each chip) to illustrate how similar the two vendors’ iGPUs are. AMD’s newest iGPU is another step up.

AMD last gen: [benchmark chart]

Intel last and current gen: [benchmark chart]

AMD current gen: [benchmark chart]

I was using Linux and dual-channel RAM.

All of what you listed comes out of AMD’s Semi-Custom/Embedded teams, which originally got traction because AMD was willing to (and needed to) trade margins for wafer volume on console chips, but which has ended up as one of their strong suits/strategic pillars and has borne lots of fruit - the Steam Deck is another awesome recent result there.

PC/laptop is a totally different part of the company, and sadly, it’s also a different story. Not necessarily because AMD doesn’t want to support smaller companies, I think, but because Intel historically has just had a lot more resources (business, engineering, supply, and just straight-up money) to do board support and co-design. Intel Evo (née Project Athena) is one big public example of this (AMD has recently launched a competing “AMD Advantage” program, mostly focused on gaming laptops that use AMD CPUs + GPUs). Also, AMD has (rightly) focused a lot on a couple of Tier 1s (Lenovo, Asus, HP) for supply and support, at the expense of smaller partners. I think this difference is best illustrated by a couple of experiences that I know XMG has had in the past and talked about:

  • With their original VIA 15 (Tong Fang PF5NU15 chassis), they tried for over 2 months to get a 4K OLED panel working with a Ryzen 4800H board but ran into unsolvable issues with the AMD iGPU (TCON-related, if I recall) and just couldn’t get it to work. This would have been one of the only HiDPI (and OLED) Renoir laptops with upgradable RAM and no dGPU, but it simply didn’t happen.
  • More recently, Tuxedo just released their refresh of the Tong Fang PF5NU15, the Pulse 15 Gen 2, but it’s running a 5700U (not Ryzen 6000, not even a Zen 3 5800U/H). Why? According to them:

First of all: Supply! Ryzen 6000 supply is very limited globally and our hardware producers only receive a very limited amount of Ryzen 6000 CPUs, which are not enough for different products. So they had to decide which products will get this Ryzen-H-supply and they decided to go for gaming laptops because it’s a bigger market compared to the rather niche category of non-dGPU laptops with high-end CPU.

Now, one could argue that Tong Fang is simply not a big enough ODM, but it’s worth noting that they’re not a tiny player: they’ve co-designed all of Intel’s gaming-laptop chassis and supply laptops to a huge number of boutique laptop OEMs around the world. There are Ryzen 5800U/H and Ryzen 6000 mini-PCs coming out in China now (not to mention all the Ryzen 6800U handhelds like the Ayas, GPDs, etc.), so I have no idea how AMD has been prioritizing things there. At this point, when AMD is swimming in cash, has passed Intel in market cap, and has excess wafer capacity, I really don’t know WTH they are doing. I would have bought a Ryzen 6000 machine if, 8 months post-launch, there were anything for me as a power user/developer (really, not the longest list of requirements: an upgrade from my 1st-gen PF5NU15, decent battery life and display, dual SO-DIMM slots, works with Linux), but there simply isn’t.

Maybe they’ll get serious with Ryzen 7000, but from Linus’s callout in his update video, it sounds like AMD just doesn’t care enough, which is too bad.

4 Likes

@lhl

Thanks for the info and correction, nice to have some more insight into this topic!

I think it’s pretty obvious that, now that AMD’s CPU (aka Ryzen) mindshare is basically no longer an issue, they’re trying to solve their GPU (aka Radeon) mindshare issue. This is especially useful when they can use their CPU mindshare as a sort of “Trojan horse” for the aforementioned CPU+dGPU bundling and get their foot in the door with dGPUs, especially while they currently hold an efficiency advantage on both the CPU and GPU side vs. Intel and Nvidia (and of course, efficiency matters much more in laptops than in desktops).

I mean, even in the enthusiast market, not just among “normies,” Nvidia’s mindshare is kind of insane. As Tom of Moore’s Law is Dead has recounted, people in real life ask him about the GeForce RTX 4000 series (which Nvidia hasn’t even publicly acknowledged), yet when Tom asked one of them whether they had any interest in the Radeon RX 7000 series, the person didn’t even know what he was referring to.

And for those who haven’t been following the rumor mill, there’s a very real chance that the Radeon RX 7000 series is going to do basically a clean sweep against Nvidia, since AMD will be using chiplets for Navi 31 and 32 while Nvidia will still be monolithic.

…and if Nvidia doesn’t figure out chiplets by the time the Radeon RX 8000 series launches, I think we can safely say they’ll be in big trouble, since, much like server CPUs, GPUs scale easily with “moar cores!”, and we know how much of an advantage that is giving AMD in the server market vs. Intel.

DISCLAIMER: it’s possible that AMD wanting to solve their GPU mindshare issue isn’t actually obvious, and I simply can’t remember what my stance was before Tom of Moore’s Law is Dead put out a video stating that solving the GPU mindshare issue is one of AMD’s next big goals, beginning with the Radeon RX 7000 series. They’re currently laying the groundwork on the software side with the likes of the recently released FSR 2.0, improved H.264 hardware-encoding quality, and AMD Noise Suppression, as counters to the Nvidia software technologies people like to cite as advantages; the latter two matter particularly to the game-streaming crowd.

And for those who are wondering: CUDA and/or GPU-accelerated PhysX has been basically dead in the gaming world for a while now, so CUDA’s entrenchment isn’t really a concern in that market. Also, my personal theory is that AMD is looking to use on-chip accelerators in future CPUs to counteract the likes of CUDA in the long term (see also: their Xilinx acquisition), especially since Intel is also going that route.

BTW, for those interested in a bit more color on the topic, there was apparently an extended discussion of this very topic (I haven’t heard it yet but will queue it up) on a recent Broken Silicon podcast. Here are some excerpts highlighted by Tom from XMG (who has been refreshingly transparent in communicating with their community; linked because of the transcripts): https://www.reddit.com/r/XMG_gg/comments/wc0fhh/comment/iijebtu/

1 Like

I’ll just tack this on here since it’s related: I recently stumbled upon this extended discussion with Robert Hallock, long-time technical marketing guy at AMD (who, refreshingly, really knows his stuff), where he talked at length about Ryzen 6000 mobile: AMD tell KitGuru why they beat Intel's Hybrid Approach 💻 - YouTube

A few tidbits that caught my interest:

  • At about 13:00, a discussion of the partner/co-design process and timeline required for laptops (1+ year; this is similar to things Frank Azor has talked about in the past) and the market release cadence.
  • A mention of AMD’s new Platform Management Framework (PMF), which is AMD’s version of Intel’s DPTF. A bit more info here: AMD Developing "PMF" Linux Driver For Better Desktop/Laptop User Experience - Phoronix
  • 20:40 - interestingly, despite the assumption that LPDDR is faster due to its higher MT/s rates, Hallock stated that DDR5 (slotted SO-DIMMs) is better for certain use cases like gaming due to its lower latency (note: this would be for the CPU, I presume, not the iGPU, where the extra bandwidth matters more: AMD Radeon 680M iGPU: DDR5-4800 vs LPDDR5-6400 Gaming Performance Comparison - YouTube). See the quick bandwidth comparison after this list.
  • 93% of the market buys a notebook that is 18mm or less (ultrathin/thin and light)
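On the DDR5-4800 vs. LPDDR5-6400 point above, the raw bandwidth gap is easy to quantify (a quick sketch assuming the usual 128-bit total memory bus on these laptops; the latency advantage Hallock credits for the gaming wins doesn’t show up in this math):

```python
# Peak bandwidth of the two memory configs discussed above (128-bit total bus).

def peak_gbps(mt_per_s: int, bus_bits: int = 128) -> float:
    """Theoretical peak bandwidth in GB/s for a given transfer rate and bus width."""
    return mt_per_s * (bus_bits // 8) / 1000

ddr5 = peak_gbps(4800)    # 76.8 GB/s
lpddr5 = peak_gbps(6400)  # 102.4 GB/s
print(f"DDR5-4800:   {ddr5:.1f} GB/s")
print(f"LPDDR5-6400: {lpddr5:.1f} GB/s ({lpddr5 / ddr5 - 1:.0%} more)")
# The iGPU benefits from the extra ~33% bandwidth; CPU-bound games benefit
# more from slotted DDR5's lower latency, per Hallock's comment above.
```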
2 Likes

Dear all,
thanks for contributing to such a lively discussion here.
In the meantime I have compared notes amongst my friends and interviewed them, plus I compared against my old equipment and my expectations.
Results:

  1. An external GPU connected via whatever interface is definitely not(!) an option for a mobile device.
  2. Any dedicated (non-external) GPU outmatches CPUs with internal/‘embedded’ GPUs such as Intel Iris Xe, especially for rendering, video-processing, and similar tasks.
  3. I don’t care (for now) about the CPU manufacturer. Intel or AMD, I don’t care, but I need the option to order with an additional dedicated GPU.
    That’s it.
    So, as I’m old and wise ( :slight_smile: ) I will sit back and wait for this. If it’s not available within the next 2 or 3 years I will buy a non-Framework product (even if this will ‘hurt’ me), but, in my view, sustainability doesn’t and shouldn’t mean being happy with whatever you can get at the moment; it means that currently(!) available products can be replaced with a better approach.
    Good Night, and Good Luck
    Gerhard

Dear all,
I bow my head. Now they’ve done it. Even if, for the moment, the GPU expansion card is only planned for the 16-inch model, all my requirements are met.
My next notebook, for sure, will be a Framework DIY model.
And… if the GPU expansion card ends up working with the 13-inch model, I will sing hallelujah.
Now I will start saving money to buy a Framework laptop by the end of 2023.
I’m not an early adopter anymore, so I will wait for the feedback…
regards
Gerhard from Germany

3 Likes