Framework Laptop 16 with eGPU (Thunderbolt 4 & OCuLink)

I really like the idea of how Framework lets you customize what you need. I'd prefer to get a laptop so I can focus on work outside, and potentially connect it to an eGPU and have some fun at home.

Regarding compatibility: since only the Laptop 16 comes with 2x M.2 slots, it would be my only option for using an OCuLink adapter and storage at the same time. I'm uncertain, though, whether AMD works well with Thunderbolt 4 / OCuLink. Has anyone tried it?

1 Like

eGPU-wise, AMD has better USB4 than Intel (at least until TB5 eGPUs show up). You can get close to the full bandwidth for PCIe tunneling.

From what other people have reported, OCuLink works just fine on the 16; no hotplug, though, of course.

4 Likes

I personally haven't used an eGPU with my FW16, but I know there are plenty of people who have used eGPUs with success. If you're more interested in OCuLink and want extra storage, you can add two more M.2 drives to the FW16.

2 Likes

I've used both a USB4 eGPU and OCuLink with my Framework 16.

I bought the dGPU model and tried fully using it with an ADT-Link UT3G, which (besides the UT4G) is the only USB4 adapter that uses the full potential of the 40 Gbit/s USB4 connection; any Thunderbolt eGPU only reaches up to ~22 Gbit/s due to protocol constraints. It can still be used as a Thunderbolt eGPU adapter on Intel computers, as that's its fallback mode.

I used my RX 6800 XT on my Framework and am now on the RX 9070 XT.
As bandwidth became a bigger constraint with the RX 9070 XT, I migrated to a DIY M.2 SSD expansion bay with an OCuLink port, and the jump in smoothness is noticeable.

Please don't buy Thunderbolt eGPU enclosures if you want to pair them with an AMD USB4 device; the performance hit and the overpriced cases are not worth it.

The only downside of the UT3G and UT4G is that they don't have Power Delivery and can't charge the device, so you need the charger plugged into the laptop. But you also need a charger when using OCuLink.

OCuLink eGPU pros: native PCIe connection / full 64 Gbit/s bandwidth / best current performance
OCuLink eGPU cons: no hotplugging / DIY solution inside the Framework

USB4 eGPU pros: hotplugging / 40 Gbit/s bandwidth
USB4 eGPU cons: only 40 Gbit/s and a bit more CPU usage due to protocol conversion (more stutter)

Thunderbolt eGPU pros: Power Delivery to the device / USB peripherals/extensions on the eGPU
Thunderbolt eGPU cons: worst bandwidth / most stutter
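To put the three options side by side, here's a small Python sketch using the nominal link rates quoted in this thread (64 Gbit/s OCuLink, 40 Gbit/s USB4, ~22 Gbit/s usable on a TB3 enclosure); these are rough figures, and real-world throughput depends on the host, cable, and GPU:

```python
# Rough effective-bandwidth comparison for the three eGPU attachment options
# discussed above. Link rates (Gbit/s) are the figures quoted in this thread.
LINK_RATES_GBPS = {
    "OCuLink (PCIe 4.0 x4)": 64.0,  # native PCIe, no tunneling overhead
    "USB4 40G (ASM2464)": 40.0,     # PCIe tunneled over USB4
    "Thunderbolt 3 eGPU": 22.0,     # typical usable PCIe rate in TB3 boxes
}

def gbytes_per_sec(gbits: float) -> float:
    """Convert Gbit/s to GB/s (8 bits per byte)."""
    return gbits / 8.0

for name, rate in LINK_RATES_GBPS.items():
    print(f"{name:>24}: {rate:5.1f} Gbit/s = {gbytes_per_sec(rate):.2f} GB/s")
```

In GB/s terms that's roughly 8.0 vs. 5.0 vs. 2.75, which matches the "smoothness jump" described above when moving from a Thunderbolt enclosure to OCuLink.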

For my current setup, I swap the dGPU in when I need the GPU on the go; otherwise I keep the expansion bay.

Here's a picture of my SSD OCuLink expansion bay:


Here's a picture of my RX 9070 XT as a USB4 eGPU:

Here's a picture of my RX 9070 XT as an OCuLink eGPU with the Minisforum DEG1 adapter:

Here are my builds with benchmarks on eGPU.io for the RX 6800 XT and the RX 9070 XT.
The OCuLink build isn't published there yet.

2 Likes

Aw man, they made a successor to the UT3G and didn't add Power Delivery.

1 Like

This is incredibly helpful—thank you so much!

Apologies for my limited understanding, but just to clarify: OCuLink isn't hot-swappable, right? When you mention "no hotplugging," does that mean I'll need to reboot my PC each time I connect or disconnect an OCuLink device? I'm totally fine with that if it means getting a more native and stable connection.

Also, may I ask which model of OCuLink adapter you're using and where you purchased it? That would really help me get started.

And for the USB4 option—just to confirm, is this the correct adapter to go with? :backhand_index_pointing_right: Framework | USB-C Expansion Card

Thanks again for taking the time to share your insights. I truly appreciate it!

Yes, for OCuLink you need to reboot your PC every time you want to plug or unplug it. OCuLink is native PCIe forged into a plug.

The OCuLink adapter I use in my dual SSD carrier is a generic AliExpress M.2-to-OCuLink extension with wires instead of a flat cable, which should have better signal integrity. The PCIe x16-to-OCuLink adapter I use is the Minisforum DEG1 from Amazon.

And yes, the expansion card is the standard Framework USB-C card, in the respective ports of the FW16: the left and right ones nearest to the vent.

1 Like

In that case, OCuLink is the way to go. Well, mostly the more-native bit; stability-wise both are fine, except that Thunderbolt handles unexpected unplugs and the like more gracefully, but ideally you'll avoid those anyway.

Check out this thread: OcuLink eGPU works with the Dual M.2 expansion bay module

A lot of folks in there with various statements on it. I personally run an OCuLink 4i from the dual M.2 adapter and haven't had any issues with my current configuration.

Framework 16 (7840HS, 32GB DDR4)
Minisforum DEG1 (Radeon RX 6800XT, Silverstone SFX PSU)
Fedora 42 KDE

I had been using an Aoostar AG02 instead of the DEG1 but I kept having minor issues. The DEG1 has been more stable, and came with a longer cable (cable compatibility has also been a pain).



4 Likes

I'm currently working on a PCB that will replace the M.2 adapter and have two 4x OCuLink ports with configurable bifurcation (two 4x links, or one 8x link over the two ports). This should improve performance even more :3

3 Likes

No. No protocol limitation. Not at all.
This is strictly about the chipsets used.

Original Alpine Ridge TB3 controllers had that internal bandwidth limitation. They were just slow when it came to PCIe.
Since Titan Ridge TB3, this problem is gone. Titan Ridge can reach the full bandwidth of its x4 Gen 3 PCIe connection (i.e. it's held back by the connection to the GPU). Same for the Goshen Ridge and Maple Ridge TB4 chipsets (which are just USB4).

The only reason the UT3G and UT4G are faster is because they use the ASM2464 controller, which has a PCIe x4 Gen 4 port, so it can exceed the former 32 Gbit/s limit with any host that can also exceed it (which is anything newer than Intel 11th gen & Maple Ridge).

Intel has since also started selling new controllers (Barlow Ridge) that likewise have x4 Gen 4 ports and also come in an 80G variant. Barlow Ridge is also USB4v2, which lets it leave behind a protocol-efficiency limitation of TB3 & USB4v1 that has made PCIe through them less efficient than native connections (this is what causes you to see only ~25 Gbit/s of usable bandwidth out of an actual 32 Gbit/s PCIe connection).
This limitation still exists in all current CPU-integrated AMD & Intel USB4 controllers and the ASM2464, as they are all USB4v1 (they just do the same with a ~38 Gbit/s PCIe connection, leaving only ~30.5 Gbit/s of usable bandwidth). So far you can only benefit from this efficiency increase with a TB5 / USB4 80G host, which outside of Apple only exists as external controllers, and those are only used in giant, power-hungry notebooks, not in anything portable.
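The two usable-bandwidth figures quoted above imply roughly the same tunneling efficiency in both cases; a tiny Python sketch of that arithmetic (the ~0.8 factor is inferred from the numbers in this post, not a spec value):

```python
# Usable PCIe bandwidth through a TB3 / USB4v1 tunnel, using the figures
# quoted above: ~25 of 32 Gbit/s (x4 Gen 3 era) and ~30.5 of 38 Gbit/s
# (current AMD/Intel USB4v1 hosts). Both imply roughly 80% efficiency.
cases = [
    ("TB3 / Titan Ridge (x4 Gen 3)", 32.0, 25.0),
    ("USB4v1 host (~38G PCIe)", 38.0, 30.5),
]

for name, raw_gbps, usable_gbps in cases:
    efficiency = usable_gbps / raw_gbps
    overhead_pct = (1.0 - efficiency) * 100.0
    print(f"{name}: {usable_gbps} / {raw_gbps} Gbit/s "
          f"-> {efficiency:.0%} efficient (~{overhead_pct:.0f}% overhead)")

# USB4v2 / TB5 drops this tunneling inefficiency, so usable bandwidth
# gets much closer to the raw PCIe rate (per the post above).
```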

That the ASM2464-based devices don't support supplying power and have no other USB hub functionality or a downstream port to chain another hub is a limitation of this specific controller. It was designed for USB4 NVMe enclosures, which are mostly bus-powered, so the power negotiation is integrated into the chip and there are no extra ports for other functionality.
There is supposedly a variant of that chip that can be used with external power negotiation, which could then supply power from an enclosure again, but so far nobody has implemented it (so who knows if it's actually possible as advertised). With the Intel controllers, power negotiation is done by a separate chip, as it is in all our Frameworks, so the manufacturer has easy control over this part independent of what the TB/USB4 controller does. And all Intel controllers with PCIe ports also have USB3, DP, and TB/USB4 downstream ports.