Thanks for the information, @Gmanny! I feel like an idiot for saying that you can’t run displays off multiple GPUs inside a computer simultaneously when I literally worked at Microsoft for 15 years, and my dev box was running 3 displays connected to 3 different PCIe GPUs, each with a single DVI-I connector. If I’m not mistaken, they were Matrox Mystique cards, because they gave the smoothest desktop rendering experience.
I’ll just chalk that up to having a senior moment since I’ll be turning 45 this month!
I do remember, back in the day, dragging windows between the screens; they would not transition smoothly when you had two in landscape and one in portrait in the center, making an “H” configuration, which is what I ran for most of my career as a developer. Only a single GPU in the box will display text modes when booting as the primary, but once you’re in Windows (or another operating system with proper driver support), the OS can bring up the other cards and send data to them, so of course they can output to their screens.
I think you have some good assumptions about how the Framework is compositing the two GPUs together. If you open Task Manager in Windows, you can even add the “GPU Engine” column under Processes and see which GPU each application is utilizing. The Framework will happily display all of them on the main display as if it’s just one big happy GPU. So I think you’re probably right: when things are in windowed mode, whichever GPU is primary renders the desktop, and the contents of any window displaying “hardware accelerated” graphics from the dGPU are just streamed into the primary’s frame buffer as if it were a monitor, and muxed together by Windows itself. You can even drag a window between screens connected to different GPUs, and its contents will continue to be rendered by the GPU it started on, which supports this theory.
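Side note for anyone who wants to watch those same numbers without keeping Task Manager open: the “GPU Engine” column is fed by Windows performance counters, so you can dump them from PowerShell. A rough sketch (Windows 10+; exact instance names vary by driver, and the LUID in the name tells you which physical adapter is doing the work):

```powershell
# Sketch: per-process, per-engine GPU utilization from the same counters
# Task Manager's "GPU Engine" column reads. Instance names look like
# "pid_1234_luid_0x..._engtype_3D" -- the LUID identifies iGPU vs dGPU.
Get-Counter '\GPU Engine(*)\Utilization Percentage' |
    Select-Object -ExpandProperty CounterSamples |
    Where-Object { $_.CookedValue -gt 0 } |
    Sort-Object CookedValue -Descending |
    Format-Table InstanceName, @{ n = 'Util%'; e = { [math]::Round($_.CookedValue, 1) } }
```

Handy for confirming which adapter a windowed app is actually rendering on while you drag it between screens.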
But, as you said, when an application goes full screen, the GPU doing the rendering can be connected directly to the display to maximize performance, so it no longer has to go through the iGPU at all. This also explains why the dGPU USB-C port on the back of the Framework 16 can’t display anything in VR that is running on the dGPU, only things that run on the iGPU. I bet the reason has to do with how VR uses two displays showing two different images at the same time, instead of a single display like a monitor. A single display works with the muxing; VR doesn’t, so the headset has to be physically connected to the GPU that is rendering the full-screen graphics.
I’d love to see a Framework engineer jump in here and correct us so we could see how close we are to understanding how the MUX chip works, and why it only seems to matter for VR which USB-C port you plug into on the Framework, while standard displays have no problem rendering from both GPUs regardless of which port they’re plugged into. I should try plugging my VR headset into my main PC and forcing a VR app onto the iGPU to see if it has the same problem. Because if it does, that would confirm this isn’t a Framework muxing-specific issue at all and it’s just Windows being Windows.
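For that experiment, I shouldn’t even need a third-party tool: since Windows 10 1803, the per-app GPU preference from Settings > System > Display > Graphics settings is stored in the registry, so you can pin an app to a specific GPU from PowerShell. A sketch, with a hypothetical .exe path standing in for whatever the VR runtime actually launches:

```powershell
# Sketch: pin an app to the power-saving GPU (usually the iGPU) via the
# per-app preference Windows stores under HKCU. Same effect as the
# Graphics settings page. 'GpuPreference=1;' = power saving (iGPU),
# 'GpuPreference=2;' = high performance (dGPU).
$key = 'HKCU:\Software\Microsoft\DirectX\UserGpuPreferences'
New-Item -Path $key -Force | Out-Null
# NOTE: hypothetical path -- point this at the real VR app executable.
Set-ItemProperty -Path $key -Name 'C:\Games\SomeVRApp\VRApp.exe' -Value 'GpuPreference=1;'
```

The app has to be restarted for the preference to take effect, and some runtimes override it, so take this as a starting point rather than a guarantee.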
Thanks,
Jerry (aka. Barnacules)