It is not very well known, but an OCuLink connection is basically PCIe over a cable, similar to USB, with which it initially competed. USB seems to be the future, but until Thunderbolt 5 (USB5?) arrives (with rumored 80/120 Gb/s speeds), the maximum 40 Gb/s of Thunderbolt 4/USB4 is not enough to take full advantage of a modern GPU used externally.
Here is where OCuLink comes in, as the current version 2 can drive 63 Gb/s in one direction (four PCIe lanes) and fully take advantage of an external GPU.
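The numbers work out roughly like this (a back-of-the-envelope sketch, assuming OCuLink-2 carries PCIe 4.0 x4 with 128b/130b line encoding; the constants are my assumptions, not measured figures):

```python
# Rough bandwidth comparison: OCuLink-2 (assumed PCIe 4.0 x4) vs Thunderbolt 4.
PCIE4_RATE_GT = 16.0   # GT/s per lane for PCIe 4.0
LANES = 4              # OCuLink-2 x4 link
ENCODING = 128 / 130   # 128b/130b encoding efficiency

# Usable one-directional bandwidth in Gb/s
oculink_gbps = PCIE4_RATE_GT * LANES * ENCODING
tb4_gbps = 40.0        # Thunderbolt 4 nominal total

print(f"OCuLink-2 (PCIe 4.0 x4): {oculink_gbps:.1f} Gb/s")   # ~63.0 Gb/s
print(f"Thunderbolt 4:           {tb4_gbps:.1f} Gb/s")
print(f"Uplift: {oculink_gbps / tb4_gbps - 1:.0%}")           # ~58%
```

So the 63 Gb/s figure is consistent with a PCIe 4.0 x4 link, and the uplift over Thunderbolt 4's nominal 40 Gb/s is just under 60% (in practice Thunderbolt's usable PCIe tunnel bandwidth is lower than the nominal 40 Gb/s, so the real-world gap is larger).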
So I was hoping someone more knowledgeable would share how hard it would be to add OCuLink to the Framework 13.
As far as I can see, one issue is connecting the four PCIe lanes, and the other is where to place the connector physically.
Would it be possible to dedicate one of the expansion card slots, disconnect the four lanes that USB4 uses, and connect them to a fixed OCuLink expansion card?
Not possible. These are Thunderbolt lanes, and as far as I can see they come from the CPU that way. You can of course get PCIe out of Thunderbolt, but you'll still be limited by the Thunderbolt maximum speed.
Are you really sure that going from 40 Gb/s to 63 Gb/s will fix your problem? Even if it were somehow possible to mod the Framework to access the PCIe lanes directly, that would be a lot of risk and a lot of work for a potentially minor speed gain.
Yes, I am sure. You can check several tests done by YouTubers on how the speed limit of Thunderbolt/USB4 limits the performance of modern GPUs, and how the roughly 60% speed increase of OCuLink allows full speed, with barely any difference from the performance of native PCIe.
In fact, one of the main reasons people are looking toward Thunderbolt 5 is that it will allow full eGPU use. There are numerous posts about it in eGPU forums.
Edit: and not only is OCuLink faster, but several YouTubers have found that compatibility is much better, since it exposes the PCIe lanes directly without passing through Thunderbolt/USB.
I thought all the M.2 ports were already in use, by the SSD and by the Wi-Fi card. Is there a third M.2 port? If so, that would make things a lot easier, since an M.2 port can be converted directly to OCuLink. Then the only issue would be where to physically expose the OCuLink connector. It could be done by occupying one of the USB expansion card slots, but ideally someone would come up with a more clever solution.
If there is no free M.2 port available, then four PCIe lanes are needed. Or wait a year or two for Thunderbolt 5 (USB 5?).
I can definitely believe that Thunderbolt limits performance a little, but I don't really see a problem huge enough to make such drastic (and infeasible) modding necessary. The reasonable options are to either live with the lower performance or buy a faster laptop or desktop PC.
Yes, though you could just leave out the SSD and/or Wi-Fi card and use those slots for your GPU. You could connect the SSD and Wi-Fi (if you don't use Ethernet) over USB and/or Thunderbolt instead.
But this will obviously not fit into the laptop, so you’d create some sort of Frankenstein desktop PC. It would be much simpler and cheaper to just use a desktop mainboard with proper x16 PCIe slots in the first place.
If you require a mobile laptop with that speed, wait until USB5 or get a proper gaming laptop.
Actually, removing the Wi-Fi card to connect the OCuLink there and exposing the OCuLink connector on that side of the chassis could be viable. Then you would need some kind of Wi-Fi expansion card. Maybe combine the two, i.e. expose the external OCuLink connector in the expansion card slot next to the M.2 Wi-Fi slot, together with a Wi-Fi/Bluetooth module connected over USB. You could even reuse the internal antenna, since its cable runs right next to it. Not the easiest project, but it seems viable.
About 60% more, but that is what modern GPUs need.
It would be cool if they broke out something like an x8 link onto a very compact internal header, but you have to keep in mind that the board area you'd use for that is in the primest of prime real-estate areas of the board, so I can understand why they are not doing it, even if it would be cool af.
@Adrian_Joachim thanks for the answers. Is there any link where I can learn more about how all of this works, and why it is impossible to somehow get the PCIe lanes from the chipset once the board is done?
@Adrian_Joachim I see what you mean. It is a shame Framework left those pins unconnected. They should think about finding a way to leave them exposed; it would increase the resale value of their boards, and people would be inclined to try more projects with them.
I guess this will all be fixed in a year or two when Thunderbolt/USB 5 is the standard in new computers and there is no need for Oculink.
A shame, yes, but also pretty understandable: laptop boards are very cramped, the CPU fan-out on a laptop board is extremely cramped, and convincing the board designer to break out a couple dozen very high-speed connections just because may not be that easy XD. It might actually require more PCB layers to make it work, which would significantly raise the cost of the board (though I'd honestly pay a couple hundred more for a board that has at least 8 lanes broken out to somewhere, it'd be a hard sell for less insane consumers, and Framework seems to prefer fewer SKUs to more).
I am pretty sure PCIe speeds and requirements will stay ahead of fancy interconnects like TB/USB4-whatever for the foreseeable future. While using OCuLink for this is relatively new, breaking out raw PCIe isn't, and I would love it if someone made an official external PCIe connector that's just hot-pluggable PCIe, nothing else. Kind of like ExpressCard back in the day.
I understand there has to be a balance between making something tinkerer-friendly and the economics of it. I am sure there are good economic reasons for this; it is just a shame.
Yes, of course, but I am mostly interested in eGPUs at the moment, so my mind went there. With the 120 Gb/s that Thunderbolt 5 will be able to do in one direction, it should be able to handle the bandwidth GPUs need for many years.