I will say this. Mario knows what he’s talking about. He is an expert on these topics.
I wanted to make sure everyone here is aware that Mario is speaking from a place of deep experience. Healthy conversation is welcome as long as all parties keep it friendly, but I want to be crystal clear on this.
I had no doubt that Mario was correct, well, except for this part:
I’m not an expert on how Linux handles graphics cards, but I also think that adding support for automatic MUX switching to Linux would be complicated, so we probably won’t see it anytime soon. That’s why I asked about manual MUX switching in the BIOS. In my opinion that’s the only way for Linux users to benefit from having a MUX. From my point of view, it’s a pity to have a MUX chip and all the DisplayPort pins in the expansion bay and not be able to use them.
Wouldn’t connecting a monitor to the USB-C port on the dGPU also prevent the dGPU from going into the lowest power state?
This whole thing kinda bums me out; I didn’t even know that Linux won’t use the MUX. I was pretty stoked since this was going to be my first machine with one.
The Framework Laptop 16 is an unusual case. Linux can use the MUX because the MUX itself is OS-independent, but it cannot use NVIDIA Advanced Optimus or AMD SmartAccess Graphics because those are available only for Windows. So the problem is not the MUX itself but how you control the MUX.
When laptops with switchable graphics cards started appearing, having a MUX was the only way to use the dGPU on Linux, because the MUX controls which graphics card is connected to the monitor, and the MUX toggle was usually in the BIOS, making it OS-independent. In contrast, MUXless “switching”, aka hybrid mode, works by the dGPU not being directly connected to the monitor and instead sending the rendered frames to it through the iGPU, which requires OS-level support. So for a long time it didn’t work properly on Linux. I had one MUXless laptop, and GPU switching on Linux worked so terribly at the time that I’ve only bought iGPU-only laptops since. But from what I’ve heard, it works quite well these days.
As Mario mentioned, sending data to the monitor through the iGPU adds latency, and I think it also prevents Nvidia G-Sync and AMD FreeSync from working. So a couple of years ago Nvidia introduced NVIDIA Advanced Optimus, and AMD recently responded with SmartAccess Graphics. The idea is that instead of sending data to the monitor through the iGPU, the GPU driver itself switches the MUX. The problem is that this currently only works on Windows, and for some reason unknown to me, Framework has decided to rely solely on this solution and not offer an alternative option to control the MUX in the BIOS.
So having a MUX is an advantage for a Linux laptop, unless it’s the same situation as with the Framework Laptop 16.
So unless I switch the MUX to the dGPU in the BIOS, which the 16 may not have an option for, the display signal will be going through the iGPU, thus defeating the point of having a MUX? Which to my mind means the dGPU would still be worthwhile but the MUX would be doing nothing, or am I still off?
You’re right, it’ll basically work like a MUXless laptop. You can tell an application to use the dGPU by setting DRI_PRIME=1, for example: DRI_PRIME=1 cheese. This works for most applications, although there are exceptions; Steam, for example, can’t be launched this way.
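In case it helps, here’s a minimal sketch of how that looks in practice (assuming glxinfo from mesa-utils is installed; the renderer strings will differ per system):

```
# Render an application on the dGPU; the output still reaches the display via the iGPU
DRI_PRIME=1 cheese

# Check which GPU actually does the rendering
glxinfo -B | grep "OpenGL renderer"              # default: should name the iGPU
DRI_PRIME=1 glxinfo -B | grep "OpenGL renderer"  # offloaded: should name the dGPU
```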
Luckily I have never needed GPU passthrough. Proton and Bottles have managed to run everything I’ve needed so far. I thought GPU passthrough didn’t need the MUX, but it’s mentioned in some guides. For example: [GUIDE] Optimus laptop dGPU passthrough · GitHub
I’m wondering, suppose someone creates another dGPU module based on Nvidia or Intel. How would the MUX switching be handled then? Would AMD software work with a GPU from another manufacturer?
Sorry to probably keep asking the same question as everyone else, but under Linux, will we be able to use the external DisplayPort-over-USB-C port to drive a monitor for games? Will it have the same latency as in Windows (I’m wondering because of your comment about the refresh rate and latency for dGPU vs iGPU)? I’m not familiar with what a MUX is, but if the path needs to go from the dGPU to the iGPU and then to the monitor, it makes sense that it would add latency.
Does the DRI_PRIME flag address this for specific apps (by telling the compositor to use the dGPU or something)?
Again, apologies for my ignorance. I’m just curious about being able to use that eDP for monitors and gaming in Linux.
EDIT: I’m sorry, I took ‘eDP’ to mean External Display Port (like the USB-C on the back of the dGPU). Apparently this is something else altogether?
Judging by the pin-out in the expansion bay, I suspect the GPU may actually be the one actuating the MUX, so it might be down to the Nvidia driver then, which is gonna be even more fun on Linux XD.
Then again, I suspect that if someone were to make Nvidia modules it would be unsanctioned by Nvidia, and they’d probably not bother hooking up the MUX in the first place.
The one on the back is only connected to the dGPU, no MUX or anything fancy there.
A MUX generally refers to something that switches a signal between multiple sources/targets, but in this case it refers to switching the eDP input of the internal display between the iGPU and the dGPU. It does save some PCIe bandwidth and a little bit of latency. Back when iGPUs sucked more, it also allowed for things like adaptive sync, but the iGPUs can do that too now.
DRI_PRIME just makes the application use the dGPU for rendering; it’s still displayed through the iGPU, so there is bandwidth used to transfer the rendered image to the iGPU, but the heavy lifting is done by the dGPU.
Using eDP for monitors is a whole other conversation, and I am not sure that is what you are actually asking. eDP is meant for internal displays only; you’ve got regular DP for monitors.
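As a rough illustration of that offload setup on X11 (provider names are just an example and will differ per system):

```
# Show the GPUs the display server knows about and their offload capabilities
xrandr --listproviders

# Run a single application on the dGPU while the iGPU keeps driving the display
DRI_PRIME=1 glxgears
```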
Maybe a silly question, but I have the FW16 w/ dGPU and it just works in a way I wouldn’t expect.
I have NOT added DRI_PRIME=1 to any game configs or anywhere else.
When I load up a game it’s automatically using the dGPU (verified via the dGPU’s used RAM jumping up [1] and the fact that the iGPU can’t push 1600p Ultra graphics @ 60fps in Outer Worlds).
Running the AppImage dGPU checker shows nothing is using the dGPU.
Obviously this isn’t an issue (and in fact it works perfectly), but it doesn’t line up with any of the docs I saw or with this thread, both of which imply that you need to pass DRI_PRIME=1 for it to offload to the dGPU. Am I missing something? (Also let me know if this isn’t the right spot to ask.)
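(For reference, this is roughly the kind of check I mean; it assumes the dGPU shows up as card1 and that the amdgpu sysfs attributes are present, which may differ on other setups:)

```
# Watch the dGPU VRAM usage while a game is running (card index may differ)
watch -n1 cat /sys/class/drm/card1/device/mem_info_vram_used

# See whether the dGPU is runtime-suspended or active
cat /sys/class/drm/card1/device/power/runtime_status
```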
Just to set expectations for everyone: even if we agree on an upstream-friendly design and roll out all the pieces, it may require some BIOS changes from Framework to work.
We (AMD) will share details with Framework for any BIOS changes if and when it gets to that point.
As a guess, perhaps the game is natively detecting the second GPU and using it directly?
I use the Nvidia CUDA platform for various AI-related workloads as well as gaming, but this level of active support from AMD towards Framework (and other business practices) has me questioning the value of that relationship.
If I’m not mistaken, Outer Worlds doesn’t have a native Linux version, so I think in this case DXVK or some other part of Proton automatically chooses the dGPU.
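If you want to see what a Vulkan app like DXVK gets to pick from, something along these lines should work (vulkaninfo comes from vulkan-tools; the DXVK_FILTER_DEVICE_NAME variable and the device name below are only an illustration, so check the DXVK documentation for your version):

```
# List the Vulkan devices an application (e.g. DXVK under Proton) can choose from
vulkaninfo --summary

# Optionally steer DXVK to a specific adapter by (partial) device name,
# e.g. as a Steam launch option; substitute your dGPU's name
DXVK_FILTER_DEVICE_NAME="7700S" %command%
```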
I’m wondering if what my 2011-era 17" HP laptop does with its discrete Radeon HD 5770 (not to be confused with the RX 5700) could be a valid option. It has a BIOS option that lets you toggle between the iGPU and dGPU simply based on whether the charger is connected, i.e. when on battery it uses the iGPU, and when the charger is connected it uses the dGPU.
(I’m a bit late to respond since, for whatever reason, Discourse, the forum software, has for the last few months insisted that my browser of choice, Pale Moon, is no longer supported and therefore doesn’t let me post, so I had to re-login using my “plan B” browser, LibreWolf.)
For a Vulkan application, DRI_PRIME=1 only moves the selected device (dGPU) up the list of devices which the application can choose from. This is different from OpenGL where DRI_PRIME in fact replaces the device shown to the application.
(see: Environment Variables — The Mesa 3D Graphics Library latest documentation)
Usually a Vulkan application (such as dxvk) screens the list of devices for certain capabilities and then chooses the one it perceives as most fitting. I don’t know what criteria dxvk uses for its choice, but I suppose it (correctly) considers the dGPU better for the intended purpose. A native Vulkan app might even expose the device list via its configuration options.
btw the link above also mentions this:
For Vulkan it’s possible to append !, in which case only the selected GPU will be exposed to the application (eg: DRI_PRIME=1!).
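So, as a quick sketch (vkcube is from vulkan-tools; any Vulkan application would do):

```
# Without "!": the dGPU is only sorted to the top; the app may still pick another device
DRI_PRIME=1 vkcube

# With "!": only the selected GPU is exposed to the Vulkan application
DRI_PRIME=1! vkcube
```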