GPU expansion bay: physical ON/OFF switch?

Hello there!
I’m wondering whether the GPU can be physically disabled so that it behaves like the default Expansion Bay Shell, without needing to swap the two modules. The idea would be to “activate/power on” the graphics card only when needed (hot-plug, or while powered off, it doesn’t matter) without:

  1. using extra juice when we don’t need it, relying on the iGPU instead.
  2. unnecessary swapping that could wear out the bay connector (which is rated for ~50 mating cycles).

As you can tell, it would behave just like the switch recently added to some expansion cards.

If such a feature is not present:

  • do you think such a modification would be a feasible mod for the community?
  • could such an improvement be included in future graphics bays?

The GPU is connected via PCIe. While PCIe is theoretically hot-pluggable, to my knowledge very few devices actually support it, because it requires significant work in the drivers/software to ensure a graceful shutdown/boot-up of the device. Very few consumers need to hot-plug their GPUs anyway; maybe it’s bigger in the server space.
That being said, I think modern OSes and drivers are pretty good at turning the dGPU off when it’s not needed, or you can tell them what to do in most cases.
I’m not an electrical engineer or anything, but very theoretically one could try to interrupt the power lines in the expansion bay connector with a switch and still try their luck at hot-plugging the GPU. It’s all so small, though, that I doubt the “community” can realistically achieve that; maybe one or two individuals with the special equipment.


It’s possible for the BIOS to handle this by forcing iGPU-only or dGPU-only modes, but we don’t know yet if they’ve done this. Additionally, there are tricks on Linux that let you turn off/remove the dGPU, but they’re definitely not as convenient as a physical switch. Ubuntu/Canonical-based distributions have the advantage of the nvidia-prime-applet, which enables easy toggling of the GPU… but I run Arch.
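One such trick, as a rough sketch: you can detach the dGPU from the PCI bus via sysfs. The `0000:01:00.0` address below is a placeholder (check `lspci` on your own machine), and whether the card actually powers all the way down afterwards depends on the platform’s runtime power management, so treat this as illustrative only:

```shell
# Find the dGPU's PCI address first (the address below is only an example):
lspci | grep -iE 'vga|3d'

# Detach the dGPU from the PCI bus; the kernel unbinds its driver first:
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/remove

# Later, rescan the bus to bring the device back:
echo 1 | sudo tee /sys/bus/pci/rescan
```

On Nvidia setups you generally want the proprietary driver already switched away (e.g. with your distro’s PRIME tooling) before removing the device, or it may simply rebind on rescan.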

This hassle is also why I didn’t order the GPU with my FW 16; I also don’t need it, because I have a desktop.


If you make sure to update the AMD drivers, you can go to the AMD Radeon control panel and enable switchable graphics. This lets you choose which apps use the iGPU and which use the dGPU.

You can also enable options such as power-saving or high-performance dGPU modes.

This way you can use the iGPU until, let’s say, you fire up a game. I hope this helps. And here is the official link.


Thank you all for your feedback.
Well, I’m aware of the possible software controls, but they don’t seem reliable (yet?), and we don’t know whether Framework’s BIOS will have such a feature. Even if the dGPU can be activated on demand, it’s still like devices on standby on a power strip: it keeps consuming energy while doing nothing, and my wish is to always maximise battery life when I don’t need that extra power.
If anyone from Framework could drop a few words about it, it would be really appreciated.

From what I know of the AMD control center, it can fully idle the memory clocks, so the dGPU contributes measurably nothing to the power usage of the system.

The power consumed by the idle dGPU might cost mere minutes of battery life. That seems worth the tradeoff to me. And if you are concerned about it, you can always swap in the empty bay.

In my experience with Nvidia Optimus, the difference between an “idle” GPU and a disabled GPU is about 6 hours of battery life.

I’ve spent the last 3 weeks learning how to optimize and personalize my current laptop (Razer Blade 15 w/ Intel+Nvidia) with this same goal in mind. On Windows, the dGPU would never properly idle, so my battery life was around 1.5-2 hours. After switching to Arch Linux and taking control of everything, my battery life has gone up to 6-9.5 hrs depending on what I’m doing. So regardless of what BIOS features Framework implements, it is possible if you’re willing to ditch Windows.
It’s supposed to be easier and “just work” on Windows, but there always seems to be some random background task keeping the GPU on, so in practice Optimus doesn’t work.

I would assume it’s a similar experience for AMD GPU’s, but I obviously can’t confirm this.

That’s the main problem with such technology: it’s not reliable, and you either have to invest massive time in fine-tuning or try X different distros to find the one where it’s supposed to work ‘out of the box’.
I also had an old Optimus laptop and never got great battery life. It was always really hot despite not doing fancy work or gaming…
That’s why a physical switch is really relevant to me!
You power off the laptop, flip the GPU switch off, and voilà! Excellent battery life on any OS. Call it a good day!

As far as I know there’s no physical switch present, but look at the pinout of the Expansion Bay connector, specifically the Power Interface connector, pins 62 and 63:

  • Pin 62, GPIO0_EC (OD CMOS, ALW power domain, 3.3 V): connect to EC GPIO; PU for CTF to power down the dGPU; I/O pin for 2nd battery.
  • Pin 63, GPIO1_EC (OD CMOS, ALW power domain, 3.3 V): connect to EC GPIO on mainboard; controls 2nd battery discharge and DP_HPD from the dGPU PD controller.

There does seem to be an option to power down the dGPU through pin 62.

Pin 63

I don’t think pin 63 is actually applicable to this question. If I understand correctly, it is used to control charging of the mainboard from either a 2nd battery (in the expansion bay) or an external power supply connected through the expansion bay.
I included it since it also mentions the dGPU.

I assume/expect pin 62 is connected in the GPU expansion bay, but I don’t know exactly how it is implemented. Given that it connects to the internal Embedded Controller (EC), whose firmware was already open-sourced for the FW13, I would assume this can be controlled by software, and thus by the user.

There are several assumptions in this post, but given everything Framework thought of in the FW16 with regard to modularity, it would surprise me if they did not implement this functionality (or won’t in the future).


Did things like enabling Advanced Optimus to dynamically switch between integrated and discrete graphics not allow the discrete GPU to properly idle?


Correct. The issue I’ve had with Optimus (on Windows) is that if some background process decides it wants the dGPU, there’s nothing you can do about it unless you want to sink tons of time into troubleshooting. Nvidia does have a system-tray icon that shows which apps are running on the dGPU, but in my experience it doesn’t capture everything, and the dGPU stays on anyway. I’ve since read that the Microsoft Phone app is one of these, and people recommend manually forcing it to the iGPU… but I don’t use that app and never wanted it to begin with. At one point I had Optimus running well, behaving as expected, and then one day it just stopped behaving: the dGPU turned on and my battery life went from 6 hrs to 2 hrs.

Personally, I got sick of all the Microsoft shenanigans, decided to gut it out of my PC entirely, and switched to Arch. Now I know every task running on my device, and the battery lasts up to 50% longer (~9 hrs) than it did with the “behaved” Windows.


Hi RandomRanger,

Was there anything you did specifically to gain those extra hours? Was it just switching to Linux, or did you deep-dive into battery settings using TLP and the like?

Yeah, here’s what I’ve done:

  • Most importantly (but likely not applicable for you): disabling my dGPU. After trying many solutions, I settled on EnvyControl.
  • Set up auto-cpufreq so that it automatically switches to the powersave governor and disables turbo boost.
  • Capped my CPU frequency below the base clock (either 1 or 1.5 GHz). This isn’t for everyone, but I like it.
  • Use powertop to optimize various parameters for battery.
  • Run a low screen brightness.
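For concreteness, the commands behind those steps look roughly like this. Package names, availability, and exact flags vary by distro (these reflect my Arch setup), so treat it as a sketch rather than a recipe:

```shell
# Route everything to the iGPU with EnvyControl (Nvidia-specific; needs a reboot):
sudo envycontrol -s integrated

# Install auto-cpufreq's daemon so it manages the governor and turbo automatically:
sudo auto-cpufreq --install

# Cap the CPU frequency (cpupower comes from the linux-tools package):
sudo cpupower frequency-set --max 1.5GHz

# Apply powertop's suggested power tunables in one shot:
sudo powertop --auto-tune

# Drop screen brightness (brightnessctl is one of several options):
brightnessctl set 30%
```

Note that `powertop --auto-tune` and the frequency cap don’t persist across reboots on their own; you’d wrap them in a systemd unit if you want them permanent.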

Other things I tried:

  • I compared a couple of desktop environments to terminal-only and found that desktop rendering does not draw a measurable amount of additional power.
  • I experimented with airplane mode and found that idle Wi-Fi did not measurably affect my power draw.
  • I experimented with powering off CPU cores; this actually appeared to cause a very slight increase in power draw.

I was able to reduce my power usage from 15-25 W on Windows to 6.8-12 W on Arch Linux, which is where my power savings come from. I have a power meter on my desktop that constantly shows my current draw, so I can very easily spot when I’m drawing more power than I want. E.g. I was writing my book yesterday, a task that only needs ~7 W, but because I had switched from a local editor to Google Docs it was drawing 11 W, and that annoyed me.

Through experimentation I’ve learned that the power breakdown of my Razer Blade 15 is:

  • CPU: 3 W + x depending on load
  • dGPU: 0 W, because I gutted that sucker. It would draw 15-35 W if it were on, though.
  • Display: 3 W + x depending on brightness (0% brightness draws 3 W)
  • All other components combined: ~1 W under my usual load
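Summing that breakdown gives a back-of-the-envelope battery-life estimate, and shows why an idle-but-powered dGPU costs hours rather than minutes. A quick sketch, assuming an 80 Wh pack (an assumption on my part, not a measured spec):

```shell
#!/bin/sh
# Battery life (h) ~= capacity (Wh) / average draw (W).
# 80 Wh is an assumed capacity; the component figures are from the list above.
awk 'BEGIN {
    base      = 3 + 3 + 1   # W: CPU floor + display + everything else
    dgpu_idle = base + 15   # W: add the dGPU at the low end of its range
    printf "dGPU off: %.1f h\n", 80 / base
    printf "dGPU on:  %.1f h\n", 80 / dgpu_idle
}'
```

That works out to roughly 11.4 hours with the dGPU off versus about 3.6 hours with it merely powered, broadly in line with the multi-hour difference mentioned earlier in the thread.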