Direct connection of Display output and dGPU (MUX inclusion)

As everyone here knows, the Framework 16 has an iGPU,
and an optional dGPU in the expansion bay.
Will the FW16 dGPU be routed to the display via the iGPU, like most laptops,
or will it have a multiplexer switch (MUX) to connect the dGPU directly to the display output? Excluding a MUX, as most laptops do, would leave the dGPU mostly useless.
Note: Dynamic Display Switching is a new technology that performs the function of the traditional MUX but hot-swapped, i.e. with no need to restart. This would be better.

As you all know, the laptop allows connecting to external displays using the ports and expansion cards. This is assuming that a multiplexer switch is present.
Will there be a multiplexer for selecting the external display output separately from the internal display, or will all of them be dependent on a single multiplexer?
Using iGPU for internal and dGPU for external monitors would actually be useful.

Here are some diagrams of how they work.

Here’s why the multiplexer is important:


I agree with the need for a MUX switch. I am also hoping that there will be at least one output port directly connected to the dGPU and that dGPU passthrough will be possible with the planned FW16.

Unfortunately, these decisions were probably taken months ago, and if it has not been planned for, then it is probably already too late for the current hardware.


The expansion bay connector has displayport back channels (explicitly meant for the internal display), so there very likely is a mux. Would be nice to get an official confirmation on that though.


I think “mostly useless” is a bit of a stretch lol, but I imagine there would be dedicated direct DP on the rear of the expansion module which is just as good.


Yes, but the switching would still require a multiplexer switch (MUX),
now between the two eDP connections, instead of the two GPUs.
This could allow us to switch the MUX without restarting the laptop, like Dynamic Display Switching (or could be the same thing, but idk).

Then the multiplexer will mostly be present for switching between the eDP connections rather than the GPUs, unlike traditional multiplexer switches, where the GPUs themselves are switched.

Well yeah, it switches the eDP connection between GPUs; I don’t really see what you mean there.

Since neither modern Intel nor AMD CPUs have DP-ins anymore (Intel had this in the 45W 11th Gen CPUs), there is no way to route a dGPU’s DP outputs back into the CPU, where the native TB/USB4 controllers live.
So any native USB4 port can only come from the iGPU. External TB controllers could be done; Dell does this with the XPS 17 to make the external outputs muxable. But these external controllers (2 of them) will increase power consumption and reduce USB4-PCIe performance.

The expansion bay also only has a single DP backchannel as per the published documentation, so I would be pretty sure that none of the expansion card USB-C ports will have any access to dGPU outputs, even if some of them were non-USB4.
But there is barely any use case for switching those ports anyway. Not if you can just expose both iGPU display outputs (via expansion card ports) and dGPU display outputs (via dedicated ports on the back, directly from the GPU board; the design seems to allow for that).

That is an exaggeration. A dGPU is mostly about processing power. Having to use the iGPU for display output only does a few things:

  • Limits you to the physical output capabilities of the iGPU. → No longer a problem, since Intel learned Adaptive Sync, HBR3 and DSC with Xe. And 13th gen even supports DP at UHBR10 and 20 speeds, while Nvidia for example is still stuck at HBR3 speeds. Only HDMI FRL support is missing from Intel’s iGPUs. But that won’t work fully with USB-C expansion cards anyway and is not used for integrated displays. Also, AMD’s iGPUs can supposedly do that.
  • Adds latency of the PCIe transfer of the frames over to the iGPU. → Yes, it costs performance. But less and less with modern PCIe bandwidths. If your dGPU is significantly faster than the iGPU it will still be worth the upgrade. Bottlenecks due to PCIe bandwidth limits should be pretty much over with PCIe Gen 5 x8 connections to the dGPU.
  • Complicates games, as the GPU doing VSync / Adaptive Sync or whatever is not the one being used for rendering, and more driver work happens at this point. → Most stuff works. Bugs mostly need to be fixed in software. The more devices use hybrid graphics, the less of a problem it will be. No developer should expect to only see and support a single GPU in the system anymore.
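
On Linux you can check how display connectors are physically wired to each GPU by reading the kernel’s DRM sysfs tree: each `/sys/class/drm/cardN-<connector>` entry belongs to GPU `cardN`. A minimal Python sketch (standard DRM naming is assumed; nothing Framework-specific is known here):

```python
from pathlib import Path

def connectors_by_card(drm_root="/sys/class/drm"):
    """Group DRM connector entries (e.g. card0-eDP-1) by their GPU (card0)."""
    mapping = {}
    for entry in sorted(Path(drm_root).glob("card*-*")):
        card, _, connector = entry.name.partition("-")
        mapping.setdefault(card, []).append(connector)
    return mapping

# On a hybrid-graphics laptop this would typically show the internal eDP
# panel on one card and any dGPU-wired outputs on the other, e.g.
# {'card0': ['DP-1', 'eDP-1'], 'card1': ['DP-2']}
```

Which card is the iGPU and which is the dGPU can then be checked via the `device/vendor` file under each `cardN` entry.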

What else would there be to switch than the display output connections if you share the integrated display or output ports between both GPUs? Yes, the really old switchable graphics setups physically cut power from the dGPU when switched over, with all the problems that caused. But this was still done via software from within Windows. With the right driver, both GPUs were accessible simultaneously, just as with modern hybrid graphics.
But with modern power-saving techniques this basically does not matter anymore. If the dGPU is not driving any displays and is designed to be energy efficient, it can just sleep deeply enough that it basically no longer consumes any power. It does not need to be “unplugged”. The iGPU does not really consume anything if it is not doing calculations or outputting to a display, so there is no reason to ever physically turn it off, except to work around very broken programs that get confused by its mere existence and lack proper controls.


This means that G-Sync/FreeSync and other dGPU-based tech will be unusable with external monitors when using expansion cards.

This could solve the problem, and will likely be better, as it is direct, but for external monitors only.


Traditionally, the GPUs were switched, with a single eDP connection. This required a PC restart.
Switching the eDP instead allows changing the output without a restart.

Here is some info


AMD iGPUs support “FreeSync” just as the dGPUs do. Also, FreeSync over DP was always just marketing for Adaptive Sync, which Intel supports from 11th gen on, as I mentioned.
Modern G-Sync (with the G-Sync Ultimate modules) also switched over to using Adaptive Sync. So my 38" Alienware AW3821 works perfectly fine with Adaptive Sync on my 12th gen iGPU. It just seems Intel limits the minimum refresh rate to 20 Hz instead of going down to the monitor’s actual minimum of 1 Hz. Everything else works fine. Anything “G-Sync Compatible” over DP was also always pure marketing for Adaptive Sync.

Nah. It was always a MUX switching between the 2 outputs. The question is just how you control it: is there Windows software to switch it at runtime, or only a BIOS switch with no option to control it from the OS? Whether or not this BIOS switch also disables the other GPU is down to the BIOS and completely optional. Nowadays it would be bad style to disable the iGPU just because no monitors are connected to it; it could still be valuable for decoding and encoding video, just like in desktop PCs. If the dGPU is not bad at dynamic power saving, there is no reason to deactivate it from the BIOS (if all outputs are already swapped over), because the OS can just not use it. But this might have been a more important reason in the past: if the dGPU permanently uses up 5W just for existing, there is value in deactivating its entire PCIe slot / power.

Advanced Optimus just integrates the decision of when to switch into the drivers, to make switching at runtime a seamless experience instead of being handled like unplugging the monitor from one GPU and plugging it back in at another, which would take a few seconds. But it only covers the integrated display.
We already established that Framework is designing for the possibility of a MUX for the integrated display, so let’s see what level of support they manage to build in for that. My point was just that you hardly notice the difference. And the reason for the existence of Advanced Optimus was more that Intel before 11th gen supported neither the full speed of DP nor Adaptive Sync, so you had to use the dGPU for that. You no longer do.

The biggest reasons to want dedicated outputs from the dGPU nowadays (with a powerful iGPU) are HDMI VRR and using more than the 4 monitors each GPU supports. Or having alternatives if one GPU is buggy with specific monitors (my U3223QE for example had broken HDR on Nvidia GPUs, but not on Intel iGPUs). MST-based docking is also something that Nvidia does not officially support and guarantee, whereas Intel does.


Having a MUX would be nice, but I think I’d actually agree with Ray mostly for the 16.

The main benefit would mostly be some input latency/frame-timing improvement when you’re using the internal display. Otherwise, maybe some battery savings if you disable the dGPU entirely.
I’ve yet to see a review of a laptop with Advanced Optimus/Dynamic Display Switching that didn’t note it was also extremely buggy. One day they might fully nail that down, but in the meantime I’d rather have a BIOS toggle for the user to decide, or none at all.

A MUX is absolutely helpful if you want to use OBS without having an external monitor; otherwise you have to tweak a bajillion settings to get a screen recording.


Here you get the other problem: compatibility,
i.e. with Linux, and even more with old software. Both struggle to use the dGPU properly through an iGPU without problems.

Whatever the case, the exclusion of a multiplexer is a dealbreaker for many.
And even if misconceptions exist about this, they will be enough to drive customers away.

Wait there isn’t?

I wonder what the DP lines from the expansion bay are connected to, then.

Not yet properly known. Just describing a possible scenario.

Could be for the multiplexer, or a separate unannounced DP/HDMI or similar port on the expansion module itself.

Can I get some details?

The pin-out explicitly says “For the internal display”

There may additionally be outputs on the module, but there is a 4-lane eDP link “for the internal display” in the connector.


So, a multiplexer is most likely present.

Well, in Windows you have to tweak a lot of settings in OBS, in the graphics settings, and in the Windows settings, because Windows wants to use the integrated graphics as much as possible and OBS doesn’t capture when running on the integrated graphics. It may be simpler in Linux.
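
On Linux, one common workaround is PRIME render offload: launch OBS with the environment variables that ask the driver to render on the dGPU (`DRI_PRIME=1` for Mesa/AMD, or the `__NV_PRIME_RENDER_OFFLOAD`/`__GLX_VENDOR_LIBRARY_NAME` pair for Nvidia). A small sketch (the helper function is made up for illustration; the environment variables themselves are the drivers’ documented ones):

```python
import os
import subprocess

def dgpu_env(base_env, vendor="amd"):
    """Return a copy of base_env with the render-offload variables that
    ask the driver to render on the discrete GPU (Mesa or Nvidia)."""
    env = dict(base_env)
    if vendor == "nvidia":
        env["__NV_PRIME_RENDER_OFFLOAD"] = "1"
        env["__GLX_VENDOR_LIBRARY_NAME"] = "nvidia"
    else:
        # Mesa (AMD and Intel) PRIME offload
        env["DRI_PRIME"] = "1"
    return env

# Hypothetical usage: launch OBS so it renders on the dGPU.
# subprocess.run(["obs"], env=dgpu_env(os.environ))
```

This only changes which GPU OBS renders on; whether capturing a game running on the other GPU works still depends on the compositor and the capture method.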