Judging by the pin-out in the expansion bay, I suspect the GPU may actually be the one actuating the mux, so it might be down to the Nvidia driver then, which is going to be even more fun on Linux XD.
Then again, I suspect that if someone were to make Nvidia modules it would be unsanctioned by Nvidia, and they’d probably not bother hooking up the mux in the first place.
The one on the back is only connected to the dGPU; no mux or anything fancy there.
A mux generally refers to something switching a signal between multiple sources/targets, but in this case it refers to switching the eDP input of the internal display between the iGPU and the dGPU. It saves some PCIe bandwidth and a little bit of latency. Back when iGPUs were a lot worse it also enabled things like adaptive sync, but the iGPUs can do that too now.
DRI_PRIME just makes the application use the dGPU for rendering; it’s still displayed by the iGPU, so some bandwidth is spent transferring the rendered image over to the iGPU, but the heavy lifting is done by the dGPU.
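If you want to see that offload in action, here’s a rough sketch (assuming Mesa plus the `glxinfo` tool from mesa-utils/mesa-demos on a hybrid iGPU+dGPU machine) that asks which GPU ends up doing the OpenGL rendering with and without DRI_PRIME=1:

```python
#!/usr/bin/env python3
# Sketch: compare which GPU does the OpenGL rendering with and without DRI_PRIME.
# Assumes Mesa plus the `glxinfo` tool (mesa-utils / mesa-demos) on a hybrid setup.
import os
import subprocess

def renderer(dri_prime=None):
    env = os.environ.copy()
    if dri_prime is not None:
        env["DRI_PRIME"] = dri_prime
    out = subprocess.run(["glxinfo", "-B"], env=env,
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

print("default :", renderer())     # normally reports the iGPU
print("offload :", renderer("1"))  # normally reports the dGPU
```

The second line should report the dGPU’s renderer string while the desktop itself keeps being driven by the iGPU.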
Using eDP for monitors is a whole other conversation, and I’m not sure that’s what you’re actually asking; eDP is meant for internal displays only, and you’ve got regular DP for monitors.
Maybe a silly question, but I have the FW16 w/ dGPU and it just works in a way I wouldn’t expect.
I have NOT added DRI_PRIME=1 to any game configs, nor anywhere else
When I load up a game it’s automatically using the dGPU (verified via the dGPU’s used VRAM jumping up [1], much like the sysfs check sketched below, and the fact that the iGPU can’t push 1600p Ultra graphics @ 60fps for Outer Worlds)
Running the AppImage dGPU checker shows nothing is using the dGPU.
Obviously this isn’t an issue (and in fact works perfectly), but it doesn’t line up with any of the docs I saw or this thread, which both imply that you need to pass DRI_PRIME=1 for it to offload to the dGPU. Am I missing something? (Also let me know if this isn’t the right spot to ask.)
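For the VRAM check mentioned above, one way to watch it without any extra tools is the amdgpu counters in sysfs. This is only a sketch and assumes the amdgpu driver, which exposes mem_info_vram_used / mem_info_vram_total per card; the card index differs per machine, so it just scans all of them:

```python
#!/usr/bin/env python3
# Sketch: watch dGPU VRAM usage to see whether a game is actually offloading.
# Assumes the amdgpu driver, which exposes mem_info_vram_used / mem_info_vram_total
# in sysfs; the card index (card0, card1, ...) varies per machine, so scan them all.
import time
from pathlib import Path

def vram_usage():
    usage = {}
    for dev in sorted(Path("/sys/class/drm").glob("card[0-9]/device")):
        used = dev / "mem_info_vram_used"
        total = dev / "mem_info_vram_total"
        if used.exists() and total.exists():
            usage[dev.parent.name] = (int(used.read_text()), int(total.read_text()))
    return usage

while True:
    for card, (used, total) in vram_usage().items():
        print(f"{card}: {used / 2**20:.0f} MiB used of {total / 2**20:.0f} MiB")
    print("---")
    time.sleep(2)
```

If the game is offloading, the dGPU’s "used" figure should jump by a few GiB as soon as the game loads.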
Just to set expectations to everyone: even if we agree on an upstream friendly design and roll out all the pieces, it may require some BIOS changes from Framework to work.
We (AMD) will share details with Framework for any BIOS changes if and when it gets to that point.
As a guess, perhaps the game is natively detecting the second GPU and using it directly?
I use the Nvidia CUDA platform for various AI-related workloads as well as gaming, but this level of active support from AMD towards Framework (and other business practices) has me questioning the value of that relationship.
If I’m not mistaken, Outer Worlds doesn’t have a native Linux version, so I think in this case DXVK or some other part of Proton automatically chooses the dGPU.
I’m wondering if what my 2011-era 17" HP laptop does with its discrete Radeon HD 5770 (not to be confused with the RX 5700) could be a valid option: it has a BIOS option that toggles between the iGPU and the dGPU based simply on whether the charger is connected, i.e. on battery it uses the iGPU and with the charger plugged in it uses the dGPU.
(I’m a bit late to respond since, for whatever reason, Discourse, the forum software, as of the last few months now insists that my browser of choice, Pale Moon, is no longer supported and therefore doesn’t let me post, so I had to re-login using my “plan B” browser, LibreWolf)
For a Vulkan application, DRI_PRIME=1 only moves the selected device (the dGPU) up the list of devices the application can choose from. This is different from OpenGL, where DRI_PRIME in fact replaces the device shown to the application.
(see: Environment Variables — The Mesa 3D Graphics Library latest documentation)
Usually a Vulkan application (such as dxvk) screens the list of devices for certain capabilities and then chooses the one it perceives as most fitting. I don’t know what criteria dxvk uses for its choice, but I suppose it (correctly) considers the dGPU better for the intended purpose. A native Vulkan app might even expose the device list via its configuration options.
btw the link above also mentions this:
For Vulkan it’s possible to append !, in which case only the selected GPU will be exposed to the application (eg: DRI_PRIME=1!).
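If you want to see both behaviours (reordering vs. filtering) for yourself, a rough sketch like this works, assuming `vulkaninfo` from vulkan-tools (new enough to have --summary) and Mesa’s device-select layer; the device names and order will obviously differ per machine:

```python
#!/usr/bin/env python3
# Sketch: show how DRI_PRIME reorders (1) or filters (1!) the Vulkan device list.
# Assumes `vulkaninfo --summary` (vulkan-tools) and Mesa's device-select layer.
import os
import subprocess

def vulkan_devices(dri_prime=None):
    env = os.environ.copy()
    if dri_prime is not None:
        env["DRI_PRIME"] = dri_prime
    out = subprocess.run(["vulkaninfo", "--summary"], env=env,
                         capture_output=True, text=True).stdout
    # the summary prints one "deviceName = ..." line per physical device
    return [line.split("=", 1)[1].strip()
            for line in out.splitlines() if "deviceName" in line]

for value in (None, "1", "1!"):
    print("DRI_PRIME =", value if value is not None else "(unset)")
    for name in vulkan_devices(value):
        print("  ", name)
```

With DRI_PRIME=1 you should see the same devices with the dGPU listed first; with DRI_PRIME=1! only the dGPU should remain in the list.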
If the stuff I mentioned above happens, it will be the compositor’s role to toggle the mux switch. Software would communicate intent via a Wayland protocol. So to accomplish a function-key-based toggle, you would need an application that listens for that keycode and then communicates with the compositor through that Wayland protocol to request the change.
Even though, at least on my Thinkpad T420 (I don’t have access to a Framework at this time), the Fn-key based hotkey function to increase or decrease screen brightness works even in the BIOS boot menu of all things?
Yes; the reason for all of this complexity is that there is a handoff sequence that you need to do between GPUs. If you don’t do the handoff sequence properly you’re going to end up with a very confused software stack, inconsistent brightness, and possibly phantom displays.
This is my guess as well. Some games, like Marvel’s Guardians of the Galaxy, do this. The game has its own setup tool that usually selects the dGPU out of the gate.