Seat of the pants, agreed.
Buuuut I will risk it as long as I use PayPal or my credit card, so that I have some kind of security. NEVER use a debit card for any purchase you're unsure about, but we both know this.
Wow, finally found this group, only now… Turns out I am learning a lot in these chats; for example, I only recently learned about OCuLink in another group.
My question is just to summarise: where to go from here?
So, wait until the FW16 comes out with an adapter, then buy the One Dock 1.5 in August, is that it?
Or is Thunderbolt 5 too close around the corner? Does anyone have an opinion? (I guess we will see.)
Is OCuLink actually the best cable (excuse my ignorance of the possibilities)? Or what would the “holy grail of hot-plug PCIe” look like?
Basically I am picking people’s brains at this point. Thanks for any opinions. It appears I have the same questions as Tony_White.
An OCuLink connection is far more compatible and stable than USB4/Thunderbolt 4, and faster too.
All you need is a spare M.2 NVMe slot (not M.2 SATA) to connect the M.2-to-OCuLink adapter, and an OCuLink cable running from that adapter to your eGPU PCB kit. That, and a PSU to power it.
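If you're on Linux and want a quick sanity check once it's all cabled up, a rough Python sketch like this (my own quick hack, nothing official from One Dock; it just reads sysfs) will show whether the GPU actually enumerated, and at what link speed and width:

```python
# List display-class PCI devices and their negotiated PCIe link.
# Rough sketch, Linux-only: reads the standard sysfs attributes.
from pathlib import Path

def read(p: Path) -> str:
    try:
        return p.read_text().strip()
    except OSError:
        return "n/a"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    # 0x03xxxx = display controllers (VGA, 3D, ...)
    if not read(dev / "class").startswith("0x03"):
        continue
    print(dev.name,
          "vendor", read(dev / "vendor"),
          "link", read(dev / "current_link_speed"),
          "x" + read(dev / "current_link_width"),
          "(max", read(dev / "max_link_speed"),
          "x" + read(dev / "max_link_width") + ")")
```

If the eGPU reports 16 GT/s x4 under load, you're getting the full OCuLink link (at idle the reported speed can drop due to power saving, so check while the card is busy).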
One Dock has stated that their current version does support full desktop ATX PSUs, to provide maximum power for the higher-end GPUs. I am trying to get confirmation from their Discord on it now.
Handtalker is the man to chat with…
Furthermore, OCuLink (PCIe 4.0 x4) is roughly 63 Gbit/s vs USB4/Thunderbolt 4’s 40 Gbit/s. I’m not sure what TB5 will offer, but you had better have a system that supports it. TB is, after all, a proprietary Intel/Apple thing, whereas OCuLink is an open standard.
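For anyone wondering where that ~63 number comes from: it's just PCIe 4.0 x4 after the 128b/130b line encoding is taken into account. Quick back-of-envelope in Python:

```python
# Back-of-envelope: usable bandwidth of a PCIe 4.0 x4 OCuLink link vs USB4/TB4.
lanes = 4
gt_per_s = 16.0          # PCIe 4.0 signalling rate per lane
encoding = 128 / 130     # 128b/130b line encoding overhead
usable = lanes * gt_per_s * encoding
print(f"PCIe 4.0 x4 (OCuLink): ~{usable:.1f} Gbit/s usable")  # ~63.0
print("USB4 / Thunderbolt 4:   40 Gbit/s total link rate, shared with USB and DisplayPort tunnels")
```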
Also, 8-lane OCuLink connectors exist now, which could take advantage of all the lanes available in the expansion bay. Amphenol makes three different variants.
One Dock is using the M2 SFF-8611, which I believe is 8-lane, and they have released the PCB files for the M.2-to-OCuLink adapter. So you can have your own adapter board made by your local PCB manufacturer if you wish. The openness of One Dock, along with that of the Framework laptops, is sweet double-barrel goodness right there.
PCIe bifurcation would be useful: 2× x4, and maybe even 8× x1. I’m not sure how much of this requires hardware, i.e. switches or retimers; I hope most of the effort would be on the UEFI side of things.
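Just to put rough numbers on those splits (back-of-envelope only, assuming PCIe 4.0 rates; whether the CPU will even allow them is the open question below):

```python
# Per-slice usable bandwidth for a few hypothetical ways of splitting an x8 port.
GEN4_LANE = 16.0 * 128 / 130   # ~15.75 Gbit/s usable per PCIe 4.0 lane

splits = {"1 x x8": [8], "2 x x4": [4, 4], "8 x x1": [1] * 8}
for label, widths in splits.items():
    print(label, "->", ", ".join(f"{w * GEN4_LANE:.0f} Gbit/s" for w in widths))
```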
PCIe bifurcation is a feature of whatever provides those PCIe lanes, which in this case would be the CPU. And CPUs have historically offered only limited bifurcation, because it requires keeping multiple PCIe root port controllers in reserve just in case the user wants to bifurcate.
For Intel, all of this is publicly documented. In the past, the CPU PCIe ports were not bifurcatable to fewer than 4 lanes. With 11th gen, Intel switched to bifurcating only the x16 port, and only down to x8; the x4 port is not bifurcatable at all. And on the 13th gen H and P CPUs the x8 port is also not bifurcatable, same as their two x4 ports.
For AMD, I do not believe they publish detailed enough specs for me to read this from official sources. But since the mobile CPUs have dropped a lot of functionality to get down to a single-die solution, I would not expect the big port to be bifurcatable below x4, if at all.
Anything else would require a separate PCIe bridge to do the splitting. If somebody wants to build a crazy expansion they can just bring their own PCIe bridge for whatever they like.
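And if you do end up with a bridge in the path, it shows up in the topology. Rough Linux sketch (just walks sysfs, nothing vendor-specific): each extra hop in the chain is a bridge or switch port sitting between the CPU root port and the device.

```python
# Print the chain of PCI addresses each device sits behind (root port -> ... -> endpoint).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    # The resolved sysfs path spells out the topology,
    # e.g. .../pci0000:00/0000:00:01.0/0000:01:00.0
    chain = [part for part in dev.resolve().parts if part.count(":") == 2]
    print(" -> ".join(chain))
```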