Hey there,
I’m thinking about buying a Framework 16, but I’m unsure whether it will be possible on Linux to switch rendering from the internal GPU to the external one without a reboot.
Also, will I need to power off the system when I want to swap between the dGPU and the empty shell? (E.g. I was gaming at home, but now want to take the laptop to university, where I don’t need the dGPU.)
When the dGPU is plugged in but not used, how will this affect battery life?
This is not an intended use case: the internal connector of the expansion bay is only rated for 50 swaps, and you would likely hit that limit very quickly if you swapped daily.
As for the power drain of the dGPU, that depends on the implementation. It can only be answered once the first units are reviewed and tested.
Edit by Framework team:
There continues to be a lot of incorrect information echoing around the Expansion Bay connector.
We’re developing our own semi-custom connector with the supplier specifically to make it better for end-user handling. The datasheet that people are referencing, which states 50 cycles, is for an off-the-shelf connector that the Framework Laptop 16 doesn’t ship with.
It is rated for 50; it could do more (and probably will), but officially it’s rated for 50.
It is not officially rated for any number of cycles. That figure comes from the manufacturer’s public datasheet for the off-the-shelf part, but the part in the FW16 is custom.
Fair enough. Still, the point stands that it isn’t intended for constant swapping, not that that would make much sense in the first place.
I have two notebooks running Linux, each with both an iGPU and a dGPU (one Intel/AMD, one AMD/AMD). The dGPU will power off if:
- no app is using it
- no monitor is connected to its direct ports
- it is not on AC power (likely down to PCIe power management features; you can check this from userspace, as shown below)
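A quick sketch of that check (the PCI address below is just an example; find your dGPU’s address with lspci first):

```
# Find the dGPU's PCI address (most hybrid systems show two VGA/3D entries)
lspci | grep -Ei 'vga|3d'

# Check the runtime power state (0000:03:00.0 is a hypothetical address)
cat /sys/bus/pci/devices/0000:03:00.0/power/runtime_status
# prints "suspended" when the dGPU is powered down, "active" otherwise
```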
To switch between the two, all I have to do is set an environment variable called DRI_PRIME to 0 (use the iGPU, the default if unset) or 1 (use the dGPU).
You set this on a per-application basis, e.g. for your games.
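For anyone who hasn’t used it, a minimal sketch (the game binary name is a placeholder):

```
# Run a single application on the dGPU; everything else stays on the iGPU
DRI_PRIME=1 ./my-game

# For Steam titles, the same thing goes in the per-game launch options:
# DRI_PRIME=1 %command%
```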
So most things use the iGPU, and only the few apps where I want the extra performance (basically games or 3D-heavy apps) use the dGPU.
It’s basically a non-issue for me and works great.
That is for the “prime” use case, i.e. the display is connected to the iGPU and PRIME selectively offloads rendering to the dGPU. On the 16 the dGPU can at the very least be connected to the internal display (not sure if that is mandatory when the expansion bay is installed, or whether it can be controlled somehow), so it would not work quite the same.
On my current notebook (the AMD/AMD one) my external monitor is connected to the dGPU, and if I render something on the internal GPU (DRI_PRIME=0) it still displays correctly on the external monitor.
As I said, the moment that the dGPU is not needed and power management allows it to power down, it does.
I can say that I needed to add some funky xrandr config to get it to work on X11, but under Wayland it “just works”. (Wayland also has lower latency and more consistent frame delivery.)
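For anyone hitting the same thing, the X11 workaround was likely along these lines (provider numbers vary per system, so check the list first):

```
# List render/output providers to see which index is the iGPU and the dGPU
xrandr --listproviders

# Reverse PRIME: let the iGPU render for outputs wired to the dGPU
# (here provider 1 is assumed to be the dGPU and 0 the iGPU)
xrandr --setprovideroutputsource 1 0
```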
That case is very likely doing reverse PRIME (rendered on the iGPU, displayed on the dGPU); at least Wayland does it like that by default, not quite sure how X handles it.
You can check with “glxinfo | grep renderer” which GPU is doing the actual rendering. With DRI_PRIME=0 it’s probably the iGPU, even on the external display attached to the dGPU. With DRI_PRIME=1 it’ll be the dGPU, but then you’re basically doing PRIME twice, which is a bit suboptimal performance-wise (basically two trips over the interface) but definitely makes things smoother when unplugging/disabling the dGPU.
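Concretely (the renderer strings below are made-up examples; yours will name your actual GPUs):

```
# Default / DRI_PRIME=0: the iGPU renders, even for the dGPU's display
DRI_PRIME=0 glxinfo | grep "OpenGL renderer"
# -> OpenGL renderer string: AMD Radeon Graphics

# DRI_PRIME=1: the dGPU renders
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
# -> OpenGL renderer string: AMD Radeon RX 7600M XT
```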
But for the case of the 16 it might be more complicated because of the mux; I don’t think they have gone into much detail on how that will work.
Good point, didn’t think about that.
As I understand it, the mux in the FW16 is only for driving the internal display from the dGPU?
In any case, both “prime” and “reverse prime” work at the same time on my current system.
But it has no mux at all.
I’m not sure how much value a mux really provides, as it adds complexity, and from the dGPU’s PoV the cost of the iGPU reading the framebuffer from the dGPU’s memory is much the same as the dGPU reading its own framebuffer to send to the display; it’s just the slower transfer over PCIe that matters. And transferring a 4K image at 60 fps over a PCIe 3 x8 link on my current notebook still leaves the PCIe interface mostly idle.
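For a rough sanity check: a 32-bit 4K frame is 3840 × 2160 × 4 bytes ≈ 33 MB, so 60 fps needs about 2 GB/s, while a PCIe 3.0 x8 link offers roughly 7.9 GB/s; the framebuffer copies use only around a quarter of the available bandwidth.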
I suppose it makes a larger difference at higher refresh rates? And affects latency?
I still think it’s a marginal benefit, and I won’t bother chasing it if it’s not there, as long as things “just work”.
A direct connection is definitely better in a lot of ways but prime has gotten pretty close and does add a few quite convenient quality of life features.
With all the confusion and correction about the number of swaps of the expansion bay, I did want to clarify that it is still a PCIe connection, and the laptop does need to be shut down when you remove the GPU module.
Huh, I wonder actually: theoretically PCIe supports hot-plugging, normally in datacenters, but also notably with Thunderbolt connections!
It might be possible to make this support hot-plugging, maybe with a software eject button.
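The kernel-side plumbing for that already exists; a hypothetical “software eject” could sit on top of the sysfs PCI interface (the device address is an example, and whether the FW16 firmware/EC would allow it is a separate question):

```
# Detach the device from the kernel before physically removing it
# (0000:03:00.0 is a hypothetical address; requires root)
echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove

# After re-inserting a module, rescan the bus to rediscover devices
echo 1 > /sys/bus/pci/rescan
```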
This will be a wait-and-see situation, as supporting an eGPU with a system that has detachable graphics just isn’t a testing focus at the moment. That said, I have used my eGPU with the 13, and I can say it’s best to at least log out and log back in to get to a working state.