Possibility of an external GPU through the PCIe x8 expansion slot

@Be_Far the cable is not the culprit of the latency, at least not the main cause. If it were, OCuLink would show the same results. AFAIK it happens because of the bottleneck that Thunderbolt's reduced bandwidth creates at certain moments.

For now we can use OCuLink, but Thunderbolt 5 (and USB4 v2?) is rumored to arrive sometime in 2024, with 80 Gbps bidirectional and a new mode where one lane can switch direction, giving up to 120 Gbps in that direction. That should be way more than enough bandwidth for an eGPU (current OCuLink 2.0 reaches about 63 Gbps).
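As a back-of-envelope check on those numbers (assuming that "OCuLink 2.0" in eGPU docks means a PCIe 4.0 x4 link, which is where the ~63 Gbps figure comes from), here's a tiny sketch of the comparison:

```python
# Rough back-of-envelope bandwidth comparison (assumption: "OCuLink 2.0" here
# means a PCIe 4.0 x4 link, which is what typical eGPU OCuLink docks expose).
PCIE4_PER_LANE_GTPS = 16          # PCIe 4.0 raw signalling rate per lane (GT/s)
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding overhead

oculink_x4_gbps = 4 * PCIE4_PER_LANE_GTPS * ENCODING_EFFICIENCY
tb4_gbps = 40                     # Thunderbolt 4 total link rate
tb5_gbps = 80                     # rumored Thunderbolt 5 symmetric rate

print(f"OCuLink (PCIe 4.0 x4): ~{oculink_x4_gbps:.0f} Gbps")  # ~63 Gbps
print(f"Thunderbolt 4:          {tb4_gbps} Gbps")
print(f"Thunderbolt 5:          {tb5_gbps} Gbps (up to 120 in the asymmetric mode)")
```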

That will make Thunderbolt (and USB?) really good options for eGPUs. Until then, people should be aware of the limitations.

What is the cause, then? I mentioned bottlenecks, and the videos don't really give enough hard data on what could cause the degraded performance. To be clear, latency in this context means the time from the CPU issuing a draw instruction to the pixels showing up on screen; that narrows things down to what Thunderbolt can actually affect (your total hardware latency is a sum that includes this number, but the link to your GPU can't affect, say, your mouse's connection to the CPU). The factors involved are:

* the RAM (fetching the instruction)
* pipelining (how fast the instruction gets from RAM to the CPU)
* the CPU (how fast the instruction is executed, how fast work is dispatched to the GPU, how fast the CPU can encode the Thunderbolt protocol)
* pipelining again (CPU to Thunderbolt port)
* the Thunderbolt cable (port to controller)
* the Thunderbolt controller (how fast it can decode)
* the graphics card (how fast it can execute a decoded instruction)

A rough sketch of that chain as a sum of per-stage delays is below.
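To make the "sum of stages" idea concrete, here's a minimal sketch of that chain as a list of per-stage delays. Every number in it is a made-up placeholder rather than a measurement, and the stage names simply mirror the list above:

```python
# Minimal sketch (placeholder numbers, not measurements): draw-call-to-pixel
# latency modelled as a sum of per-stage delays. Only the encode, cable and
# controller stages are where the Thunderbolt vs. OCuLink choice can differ.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    delay_us: float  # hypothetical per-stage delay in microseconds

pipeline = [
    Stage("RAM fetch",                  1.0),
    Stage("RAM -> CPU pipelining",      0.5),
    Stage("CPU execute + TB encode",    5.0),
    Stage("CPU -> TB port",             0.5),
    Stage("TB cable (port -> ctrl)",    0.2),
    Stage("TB controller decode",       3.0),
    Stage("GPU execute + scanout",   8000.0),
]

total_us = sum(s.delay_us for s in pipeline)
for s in pipeline:
    print(f"{s.name:28s} {s.delay_us:8.1f} us")
print(f"{'total':28s} {total_us:8.1f} us")
```

The point is only that the Thunderbolt-specific stages (encode, cable, controller decode) are a few entries in a longer sum: the link can hurt, but it isn't the whole story.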

Not necessarily. OCuLink uses optical cables (the O in OCuLink), which are an order of magnitude faster than traditional Thunderbolt cables. Optical Thunderbolt exists, but it's mostly for when cable latency plus signal degradation becomes completely unusable beyond roughly 3 m of cable length. The digital-to-optical transcoding would be a bottleneck at shorter distances because the protocol isn't designed for it (whereas OCuLink is, so the transcoding doesn't become a bottleneck, which lowers cable latency and makes optical comparatively better at shorter distances).

OCuLink was initially developed with the idea of using fiber, but in practice it rarely does. Most, if not all, of the OCuLink connections you see are electrical (copper), not optical.

So yes, if the problem were mainly the cable, OCuLink would suffer from it just the same.

If you read my comment, I already said:

Another example: in this video, after the 8-minute mark, they show footage of a game stuttering because of Thunderbolt 4 / USB4.

I’ve been pondering this quite a bit, and I recently came up with a new personal use case where I do need reasonably powerful NVIDIA hardware on the laptop - something like a 4050 with 6 GB of VRAM would be OK. (The use case is that I got myself a 3D scanner, and it uses CUDA…)

So, one thing I’d like to push for is having OCuLink 2 in all upcoming Framework GPU cards; that would be a great improvement to this eGPU concept. Of course TB5 would be nice as well, but that sounds more like a next-generation CPU board thing to me.

I think this is a great idea, and I would pay for this in a heartbeat. I hear many mixed reviews of eGPUs, but Framework has the opportunity to make something different here.