Laptop APUs/iGPUs vs Desktop APUs/iGPUs

Hello! Just a random tech question, but I was wondering why laptop iGPUs and APUs seem to be faster than desktop APUs/iGPUs? Especially for Intel with its Iris series. For instance, why is there no Iris Xe iGPU for a desktop processor? Is it because the mobile chip size is larger? Sorry if the question has an obvious answer!

As far as I know, the GPU part of Intel’s chips takes up quite a bit of die space, so it’s relatively difficult/expensive to produce. In desktops, most systems that will be used for gaming or other graphics-intensive work will be fitted with a separate graphics card anyways (chip shortages notwithstanding), so there’s not much point in Intel putting the big boy graphics in their desktop CPUs.

They could, but the incentive is too small. I guess it’s just a market segment they’re not interested (enough) in. Especially with how rough the manufacturing situation at Intel has supposedly been the last few years, and with the huge demand for chips, it’s not worth it.

AMD does bring high-performance APUs to desktop, but those are really just laptop CPUs packaged for a desktop socket. Nothing wrong with that, but you can see that it’s also not a super big priority for AMD, since the desktop APUs typically launch a lot later than the laptop version of the same chip.


I’d say @21jaaj has pretty much got it in one.

Despite what gamers might think, the vast, vast majority of laptops are not bought for gaming. They need good-enough graphics for casual gaming and the odd bit of photo/video editing, but realistically not dedicated GPUs, which just ramp up cost and physical size.

The majority of desktops nowadays are bought for a purpose - usually a need for high performance, which a laptop can’t really provide unless it’s basically desk-bound anyway. The only other reason for desktops is offices that haven’t yet realised that hybrid working is a thing…and nobody needs much of a GPU for basic office work! Most general-purpose computer users use laptops, because why be constrained to a single location?

That means desktop users are more likely to want dedicated graphics, so why bother putting really good graphics in the CPU when most of the time it won’t get any use? It’s just going to be a waste of money, wafers, and development time.