dGPU Cooling Without The Interposer

Hello all. I’m curious if the cooling will still work on the dGPU module with the interposer removed. Is there a separate ribbon cable for cooling?

I would want to do this so I could get the longest battery life possible when away from outlets.

If I’m not mistaken, the dGPU still uses some power even when powered down through hybrid graphics.

Of course I know it hasn’t been released yet, but since there is a repo containing the high-level specs, maybe someone familiar with it will be able to answer.

Interposer pins 65 through 70 are used for fan power, control and feedback.


Thanks for the reply. I guess that dashes my hopes.

I ordered mine without a dGPU because even though sometimes I may want it, it’s not worth the power draw while “off” because I’m a big fan of even bigger battery life. I don’t fully understand how you arrived at your question and am not familiar with the role of the interposer, but I find it interesting. Would you mind elaborating a little bit?

I only ordered the dGPU; I’m never far from power. And I want the GPU for gaming, simulations, and machine learning or AI. I’m considering running Linux on the internal GPU and, when gaming, running Windows in a VM with the dGPU dedicated to that VM. On the other hand, I believe it’s possible in Linux to specify which programs run on the dGPU.

I have never done this yet, I just read it was possible. And I’m not sure it works well.
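It does work on typical hybrid-graphics Linux setups: Mesa’s PRIME render offload lets you pick the discrete GPU per program via the `DRI_PRIME` environment variable. A minimal sketch of the idea (the helper names here are mine, not from any official tool):

```python
import os
import subprocess

def prime_env(base=None):
    """Return a copy of the environment with Mesa's PRIME render-offload
    variable set, so rendering goes to the non-default (discrete) GPU."""
    env = dict(os.environ if base is None else base)
    env["DRI_PRIME"] = "1"
    return env

def run_on_dgpu(cmd):
    """Launch a program with the dGPU selected for rendering."""
    return subprocess.run(cmd, env=prime_env())
```

For example, `run_on_dgpu(["glxinfo", "-B"])` should report the discrete GPU as the renderer, while running `glxinfo -B` normally reports the integrated one.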

I read that even if you command the GPU to power off, it will still use some power. That got me thinking that the dGPU module with its power disconnected would effectively be the empty shell, and get me the power savings.
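That residual draw comes down to PCIe runtime power management: the kernel can runtime-suspend the dGPU when idle, but the device still sips some power in that state. On Linux you can at least check what state the kernel thinks each card is in by reading sysfs. A rough sketch (the sysfs paths are standard; the helper itself is just an illustration):

```python
from pathlib import Path

def gpu_runtime_states(drm_root="/sys/class/drm"):
    """Map each DRM card (card0, card1, ...) to its runtime PM status
    ('active', 'suspended', ...). Returns an empty dict if nothing is
    readable, e.g. on a non-Linux system."""
    states = {}
    for card in sorted(Path(drm_root).glob("card[0-9]")):
        status = card / "device" / "power" / "runtime_status"
        if status.is_file():
            states[card.name] = status.read_text().strip()
    return states
```

On a hybrid-graphics machine with the dGPU idle, you would hope to see it reported as `suspended` while the integrated GPU stays `active`.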

The interposer is the connector between the motherboard and the GPU. It carries power and data in the form of 8 PCI Express lanes. I was hoping the cooling would go through a separate connection, so I wouldn’t need the empty shell.

Even if the fans could still be powered, I imagine it would be kind of a drag to remove the interposer so often, since it isn’t designed to be swapped frequently. Framework have said they improved on the original standard so it has a longer lifespan, but the pins probably still don’t take kindly to being stressed weekly or possibly daily.


Also I’d worry about the longevity of the mechanism that secures the input modules and stuff. Definitely not something to change that often.

Can you even run much AI stuff on an AMD card? I’ve only ever run models on Nvidia via CUDA, and as far as I’m aware it was a requirement.

The GPU module fans also cool the CPU heatfins. If you only ordered the GPU module, removing it will leave you with no cooling fans at all.

Place a post-it note under the right-side interposer.
The left side carries the power and fan-control pins; the right side carries PCIe and other high-speed data.

It is true that almost all the AI stuff is done through CUDA, but I’m more interested in DIY learning of AI. I want to learn more by creating my own framework using ROCm.

RadeonOpenCompute/ROCm: AMD ROCm™ Platform - GitHub Home
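As a starting point, note that PyTorch’s ROCm builds expose AMD GPUs through the same `torch.cuda` API, so a lot of CUDA-centric code runs unchanged. A hedged availability check (assuming PyTorch may or may not be installed; `torch.version.hip` is set only on ROCm builds):

```python
def rocm_available():
    """Best-effort check for a usable ROCm GPU via PyTorch.
    Returns False if PyTorch is missing, is a CUDA/CPU build,
    or no AMD GPU is visible."""
    try:
        import torch
    except ImportError:
        return False
    # ROCm builds of PyTorch report a HIP version and reuse torch.cuda.
    return bool(getattr(torch.version, "hip", None)) and torch.cuda.is_available()
```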


I think there has been a misunderstanding. I don’t want to turn off the cooling; I want to remove power to the graphics card while keeping the cooling going. So in effect I’d have the empty shell that Framework sells.

I also read that. I believe that it’s rated for hundreds of uses, but should last longer than that.

I wouldn’t do this often, and the interposer is rated for hundreds of uses (and should last longer than that). It also can’t be that expensive; one of the benefits of Framework is that I would have been able to buy a spare.

But it doesn’t matter in the end, I wouldn’t be up to modifying the dGPU module to get this done.

I don’t think I would be willing to go that far in the pursuit of battery life. :smile: