[RESPONDED] MUX switch toggle in BIOS?

Luckily I’ve never needed GPU passthrough. Proton and Bottles have managed to run everything I’ve needed so far. I thought GPU passthrough didn’t need the MUX, but it’s mentioned in some guides. For example: [GUIDE] Optimus laptop dGPU passthrough · GitHub

I’m wondering, suppose someone creates another dGPU module based on Nvidia or Intel. How would the MUX switching be handled then? Would AMD software work with a GPU from another manufacturer?

Sorry to probably keep asking the same question as everyone else, but under Linux, will we be able to use the external DisplayPort over USB-C to drive a monitor for games? Will it have the same latency as in Windows? (I’m wondering because of your comment about refresh rate and latency for the dGPU vs the iGPU.) I’m not familiar with what a MUX is, but if the path has to go from the dGPU to the iGPU and then to the monitor, it makes sense that it would add latency.

Does the DRI_PRIME flag address this for specific apps (by telling the compositor to use the dGPU or something)?

Again, apologies for my ignorance. I’m just curious about being able to use that eDP for monitors and gaming in Linux.

EDIT: I’m sorry, I took ‘eDP’ to mean External DisplayPort (like the USB-C on the back of the dGPU). Apparently this is something else altogether?

Judging by the pin-out of the expansion bay, I suspect the GPU may actually be the one actuating the MUX, so it might come down to the Nvidia driver, which is going to be even more fun on Linux XD.

Then again, I suspect that if someone were to make Nvidia modules it would be unsanctioned by Nvidia, and they’d probably not bother hooking up the MUX in the first place.

The port on the back is only connected to the dGPU; no MUX or anything fancy there.

A MUX generally refers to something that switches a signal between multiple sources/targets, but in this case it refers to switching the eDP input of the internal display between the iGPU and the dGPU. It does save some PCIe bandwidth and a little bit of latency. Back when iGPUs were worse it also allowed for things like adaptive sync, but iGPUs can do that too now.

DRI_PRIME just makes the application use the dGPU for rendering; the result is still displayed by the iGPU, so some bandwidth is used to transfer the rendered image to the iGPU, but the heavy lifting is done by the dGPU.
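If you want to see what that looks like in practice, something like this usually works; it assumes Mesa with glxinfo installed and the dGPU enumerated as device 1, both of which can vary per system:

```sh
# Default: rendered on whatever GPU the compositor uses (usually the iGPU)
glxinfo | grep "OpenGL renderer"

# Render offload: the dGPU renders, the iGPU still displays the result
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

# For a Steam game the same variable goes into the launch options:
#   DRI_PRIME=1 %command%
```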

Using eDP for monitors is a whole other conversation, and I’m not sure that is what you’re actually asking. eDP is meant for internal displays only; you’ve got regular DP for monitors.

Maybe a silly question, but I have the FW16 w/ dGPU and it just works in a way I wouldn’t expect.

  1. I have NOT added DRI_PRIME=1 to any game configs, nor anywhere else
  2. When I load up a game it’s automatically using the dGPU (verified via the dGPU used RAM jumping up [1] and the fact that the iGPU can’t push 1600p Ultra graphics @ 60fps for Outer Worlds)
  3. Running the AppImage dGPU checker shows nothing is using the dGPU.

Obviously this isn’t an issue (and in fact works perfectly), but it doesn’t line up with any of the docs I saw or this thread, which both imply that you need to pass DRI_PRIME=1 for it to offload to the dGPU. Am I missing something? (Also let me know if this isn’t the right spot to ask.)

[1] (screenshot of dGPU memory usage while the game is running)
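For anyone who wants to reproduce that check from a terminal, one rough way on an amdgpu system (the card index is an assumption and may differ on your machine):

```sh
# VRAM in use on the dGPU, refreshed every second
# (amdgpu exposes this in sysfs; card1 is a guess, enumeration order varies)
watch -n1 cat /sys/class/drm/card1/device/mem_info_vram_used
```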


FYI, we’re adding this topic into the Display Next Hackfest agenda for discussion:

Topics & Talks · melissawen/2024linuxdisplayhackfest Wiki (github.com)

Just to set expectations for everyone: even if we agree on an upstream-friendly design and roll out all the pieces, it may require some BIOS changes from Framework to work.
We (AMD) will share details with Framework about any BIOS changes if and when it gets to that point.

As a guess, perhaps the game is natively detecting the second GPU and using it directly?


I use the Nvidia CUDA platform for various AI-related workloads as well as gaming, but this level of active support from AMD towards Framework (and other business practices) has me questioning the value of that relationship.


That’s great news.

If I’m not mistaken, Outer Worlds doesn’t have a native Linux version, so I think in this case DXVK or some other part of Proton automatically chooses the dGPU.
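If you want to check (or override) which Vulkan device DXVK actually picked, something along these lines should work as Steam launch options; it assumes a reasonably recent DXVK, and the device name is just an example:

```sh
# Show the selected GPU and driver on an in-game overlay
DXVK_HUD=devinfo %command%

# Force a device by (partial) Vulkan device name if auto-selection misbehaves
DXVK_FILTER_DEVICE_NAME="7700S" %command%
```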

I’m wondering if what my 2011-era 17" HP laptop does with its discrete Radeon HD 5770 (not to be confused with the RX 5700) could be a valid option: it has a BIOS setting that toggles between the iGPU and the dGPU based simply on whether the charger is connected, i.e. on battery it uses the iGPU and when the charger is connected it uses the dGPU.

(I’m a bit late to respond because, for whatever reason, Discourse, the forum software, has for the last few months insisted that my browser of choice, Pale Moon, is no longer supported and therefore doesn’t let me post, so I had to re-login using my “plan B” browser, LibreWolf.)

For a Vulkan application, DRI_PRIME=1 only moves the selected device (dGPU) up the list of devices which the application can choose from. This is different from OpenGL where DRI_PRIME in fact replaces the device shown to the application.
(see: Environment Variables — The Mesa 3D Graphics Library latest documentation)

Usually a Vulkan application (such as DXVK) screens the list of devices for certain capabilities and then chooses the one it perceives as most fitting. I don’t know what criteria DXVK uses for its choice, but I suppose it (correctly) considers the dGPU better suited for the intended purpose. A native Vulkan app might even expose the device list via its configuration options.

btw, the link above also mentions this:

For Vulkan it’s possible to append !, in which case only the selected GPU will be exposed to the application (eg: DRI_PRIME=1!).
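As a quick sketch of that reordering in practice (assuming vulkan-tools is installed and the dGPU is Mesa device 1; both are assumptions):

```sh
# List Vulkan devices in the order applications will see them
vulkaninfo --summary | grep deviceName

# Move the dGPU to the front of the list (apps can still pick another device)
DRI_PRIME=1 vulkaninfo --summary | grep deviceName

# Expose only the dGPU to the application (quoted so the shell ignores the !)
DRI_PRIME='1!' vkcube
```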


If the stuff I mentioned above happens, it will be the compositor’s role to toggle the MUX switch. Software would communicate intent via a Wayland protocol. So to accomplish a function-key-based toggle, you would need an application that listens for that keycode and then communicates with the compositor through that Wayland protocol to request such a change.


Even though, at least on my ThinkPad T420 (I don’t have access to a Framework at this time), the Fn-key hotkeys to increase or decrease screen brightness work even in the BIOS boot menu, of all things?

Yes; the reason for all of this complexity is that there is a handoff sequence that you need to do between GPUs. If you don’t do the handoff sequence properly you’re going to end up with a very confused software stack, inconsistent brightness, and possibly phantom displays.


This is my guess as well. Some games, like Marvel’s Guardians of the Galaxy, do this. The game has its own setup tool that usually selects the dGPU out of the gate.

Ahh, this all makes sense. For what it’s worth, the few games I’ve played so far have all automatically gone to the dGPU, which is just awesome :rocket:

I think I just misread

to ensure you are using the discreete GPU and not the integated GPU for your game.

to mean “required for dGPU support” instead of “helps when it’s not automatic”. But that’s on me because “ensure” is a clear word.

Thanks for all the help!

(yes I put a PR up to fix the typos in the quote haha)


Ack and thank you! Merged. :slight_smile:

So, having a dGPU on Linux is kind of pointless for now?

I don’t see how you arrived at that conclusion. Can you elaborate?


NVM, I misunderstood :sob:. Still kind of a pain, though, that SmartAccess Graphics doesn’t work on Linux.

Nope, I use it all the time. Gaming especially.
