Hello, I am looking to replace my gaming PC with a laptop + eGPU, and of course quickly discovered that OCuLink is the way to go for this.
I was going to buy a normal consumer-grade professional/business laptop and then drill a hole in the bottom for access to an internally installed OCuLink adapter. But having discovered this, it seems like a cleaner option.
Plus being able to upgrade components down the line seems good.
But my question is this: for this 3D printed mod, is there a particular Framework 16 build/specification that is needed or ideal? How many I/O ports does this end up “taking up” from the build overall? And which side of the case does this expansion slot go on?
Appreciate any help on this; it helps me build some context for moving forward.
When it comes to the Framework 16, you can go with whatever suits your needs. But I recommend you get the MINISFORUM DEG1 eGPU dock, since in the thread there was discussion of other docks not being compatible with the adapter I designed the 3D print for.
If I understood the question correctly: the Dual M.2 Expansion Bay module occupies the dGPU slot on the rear. It does leave one M.2 slot open for expansion, and the six I/O ports on the sides remain untouched.
Also, be mindful when ordering from the Amazon link in my post, as it has been reported to ship whatever adapters they have from some shifty storage. Your mileage may vary.
Ah, I gotcha: your 3D printed expansion module replaces the “expansion shell” along the back of the unit, the place where you have the option to add a GPU module from Framework. Makes sense.
So really I just need to make sure my Framework 16 is built with the “blank shell” on the back, use only one of the M.2 slots in the expansion bay for NVMe storage, and keep the other one free for this mod. And since all the I/O ports are untouched, as you mention, I can do whatever I want there.
Very clear, I appreciate it!
This is a very tempting option compared to drilling a hole in the bottom of a consumer-grade laptop, for sure!
Just note, since you wish to replace a “gaming PC”, that Framework does not have gaming in its DNA as a company. There are some sunk costs in going down the Framework 16 route: upgrades seem to take quite some time to arrive (now even longer with the tariff situation), so you may be “stuck” with a mid-range, slightly outdated AMD CPU for quite a while. One indication of this is that Framework advertised mid-range components as “overkill” for a long time on their website…
Well, I bought an M.2 NVMe to OCuLink cable and a Minisforum DEG1, but I can't get my system to register my GPU whatsoever.
The GPU spins its fans, but it doesn't show up in Device Manager. The M.2 NVMe drive in the other slot is recognized.
I will remove it for testing in the next step, as it's only a PCIe 3.0 drive.
edit: The M.2 adapter wasn't fully seated; now it's working, but with the PCIe 3.0 drive removed, as space is pretty limited… And yes, the M.2 adapter isn't screwed in, since the Framework standoff doesn't fit it; it's held in with double-sided tape.
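(Side note for anyone hitting the same “fans spin but no GPU in Device Manager” symptom: before tearing things apart, it can help to check whether the card enumerates on the PCIe bus at all. A minimal sketch below, assuming you can boot a Linux live stick with pciutils installed; on Windows, enabling “Show hidden devices” in Device Manager is the rough equivalent.)

```python
# Sketch: list GPU-class PCIe devices to see if the eGPU enumerated at all.
# Assumes Linux with pciutils (lspci) available.
import subprocess

out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "VGA compatible controller" in line or "3D controller" in line:
        print(line)  # the eGPU should show up here once the link trains
```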
I finally received the 3D prints and got to setting everything up. The 3D printing lab had to order a new base plate for their printer because I asked for a very smooth finish, which took some time ;). Installation was a bit more painful than I expected and it only just barely fit, but it works flawlessly and looks clean, I think. To be honest, a dream come true: I finally have a notebook with a clean OCuLink port.
So today I tried to get two of the ribbon cable adapters in (not for 8-lane GPU usage, which is not possible with the dual SSD adapter, but for a secondary 4-lane use case I wanted to try).
Turns out, it does not work, and I encourage you not to try, as I nearly destroyed one of my adapters in my second attempt.
Both attempts had the M.2 adapter PCB cut at the second slot and screwed in (not fully) with a normal screw to increase the vertical space available for the bends. I can recommend doing that too, though screwing the M.2 adapter in fully seems to put too much force on the M.2 port, so I didn't tighten it all the way.
First attempt:
- Upper slot routed normally to the right port, as in other images in this thread.
- Lower slot routed below and then above the vertical bend of the first cable.
- Problem: the upper-left bend of the second cable interfered with the metal outcrop, pressing flat against it and pushing the left port out of the case.

Second attempt:
- Brought the upper-left bend lower so it wouldn't interfere, by reducing the overall cable length.
- Achieved this by introducing wiggles in the bottom flat area, where there is still a lot of vertical space to work with. I also kept the upper adapter's cable above it this time, which was a mistake (even with tape holding it down), as it led to the upper adapter interfering with the casing and being slightly damaged while pulling it out (no way to prevent that).
The number of lanes is rarely the problem anyway if the PCIe generations of CPU and GPU match.
It only becomes a problem when you have a low lane count on the CPU side combined with a lower PCIe generation supported on the (modern) GPU side. E.g. you get 4x PCIe 4.0 on the CPU side, but the GPU is 16x PCIe 3.0 → the link trains to 4x PCIe 3.0, which may be limiting (but is probably fine).
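A minimal sketch of that negotiation rule, if it helps: the link trains to the lowest lane count and lowest generation supported by both ends (per-lane figures are the usual post-encoding approximations; the function name is just for illustration):

```python
# Approximate per-lane PCIe throughput in GB/s after encoding overhead.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def effective_bandwidth(cpu_lanes: int, cpu_gen: int,
                        gpu_lanes: int, gpu_gen: int) -> float:
    """The link trains to the lowest lane count and generation of both ends."""
    lanes = min(cpu_lanes, gpu_lanes)
    gen = min(cpu_gen, gpu_gen)
    return lanes * PER_LANE_GBPS[gen]

# The example above: 4x Gen4 host vs. a 16x Gen3 GPU -> trains as 4x Gen3
print(effective_bandwidth(4, 4, 16, 3))  # ~3.9 GB/s
# Matched generations: 4x Gen4 on both sides -> twice the bandwidth
print(effective_bandwidth(4, 4, 16, 4))  # ~7.9 GB/s
```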
That's helpful, thanks. I got curious about the potentially bigger hit from the blog below.
I tried the above setup with OCuLink and a 5090; Flight Simulator 2024 struggles quite a bit. Ironically not because of the CPU (50-60% utilization) but because of dGPU bandwidth, the 4 lanes over PCIe 4.0.
The Ryzen 7x40 series only exposes 20 PCIe 4.0 lanes: 8 lanes for the two internal NVMe slots, 8 lanes for dGFX, 2 lanes for WiFi/LAN, and another 2 for other PCIe/SATA extensions.
No! You have 6 pins per lane (TX, RX, 2x GND, 2x signal integrity), which makes 48 pins. The rest is for all the power lanes, USB, fans, etc. You would need 122 pins to have a full 16 PCIe lanes plus the power/rest.
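Making that pin arithmetic explicit (a sketch using only the numbers quoted in this thread; the 26-pin remainder for power/USB/fans is derived from those numbers, not an official figure):

```python
# Pin budget for the expansion bay connector, from the numbers above.
PINS_PER_LANE = 6                        # TX, RX, 2x GND, 2x signal integrity
OTHER_PINS = 122 - 16 * PINS_PER_LANE    # power/USB/fans etc. -> 26 (derived)

pins_8_lanes = 8 * PINS_PER_LANE + OTHER_PINS    # 48 + 26 = 74
pins_16_lanes = 16 * PINS_PER_LANE + OTHER_PINS  # 96 + 26 = 122
print(pins_8_lanes, pins_16_lanes)

# Lane budget on the CPU side (Ryzen 7x40: 20 usable PCIe 4.0 lanes)
lane_budget = {"2x internal NVMe": 8, "dGFX": 8,
               "WiFi/LAN": 2, "other PCIe/SATA": 2}
assert sum(lane_budget.values()) == 20
```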
I see, it makes sense that some games are bandwidth-heavy on a top-of-the-line GPU. Hope someone comes up with a solution to utilize all 8 lanes so this gets resolved!
I just saw you posted the exact same picture just before me, haha.
I use my RX 9070 XT with my OCuLink setup and I barely lose 5-15% overall performance depending on the usage scenario.
TechPowerUp has a great article on the RTX 5090 and PCIe generation scaling. Average FPS at 4K is 147 FPS for PCIe x16 5.0 and 138 FPS for x4 4.0, so it's negligible. BUT if you don't use an external screen on the eGPU (driving your internal screen, or a VR headset via WiFi), you dramatically increase the bandwidth bottleneck.
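Putting a number on “negligible” (just the arithmetic on the TechPowerUp averages quoted above):

```python
# Relative 4K performance loss going from x16 Gen5 to x4 Gen4 on an RTX 5090.
fps_x16_gen5 = 147
fps_x4_gen4 = 138
loss = (fps_x16_gen5 - fps_x4_gen4) / fps_x16_gen5
print(f"{loss:.1%}")  # ~6.1% average loss at 4K with an external screen
```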
I tested a lot with my RX 6800 XT USB4 setup before, and my GPU was just shy of 50% utilized.
I haven't tested VR since I upgraded to the 9070 XT and OCuLink.
It's a shame laptops still use PCIe. If they used InfiniBand instead, there would only be 4 pins per lane, as the clock is embedded in the signal, and it would also be very easy to send it down fibre-optic cables, since InfiniBand is compatible with QSFP transceivers.