Yeah, let's hope Framework somehow offers an OcuLink 8i in the future.
Sure, a community solution would also be nice, but as long as you can't actually buy it somewhere (it doesn't matter whether it comes from Framework or the community), it won't be usable for most people out there who can't build it themselves or who would have to "hope" for someone who has 3 or 4 units "left".
So in the end, someone "has" to do it if it's supposed to be available to the majority of users.
Continuing in this thread with the development of a simple OcuLink adapter card (it might be dual OcuLink 4i rather than 8i, since I can't find the pinout for the 8i).
There's plenty of space left on the board for both a battery (though not too big) and a charge controller on the left; routing would be pretty easy. If any of you have experience with that, feel free to add it. Even if someone didn't need it and left it unpopulated, it's a no-brainer to have IMO; it doesn't add cost to PCB manufacturing after all.
Speaking of cost, if you want to order yourself, JLCPCB charges 34€ for a budget thin version and 49€ for the full-size card, plus shipping (~10€ for me), for 5 PCBs, and it scales really nicely (e.g. 63€ for 20 full-size cards). Connector costs are a bit higher at 10-12€ plus shipping if you buy them individually, and the charge controller comes on top of that.
I expect some people will build and resell the boards, so don't worry if you can't reflow solder; there's also the 8i board by Josh, which you might be able to buy.
Also don't forget to account for the cost of a dual 4i to 8i cable (~55€) and an 8i to x16 PCIe slot board (~55€), which is pretty expensive. If you go for an equivalent 4i setup, that'd be much cheaper at ~40€ with the combos made for the GPD Win series. Win for us!
Can't look at this thread without smiling a bit; looking forward to the OcuLink port! So, I've been looking at OcuLink eGPU options and saw this comment on egpu.io:
I'm looking for an eGPU that provides the full speed of OcuLink and can support high-end desktop cards. Any advice on which one to get, and which would be most compatible with what's being built in this thread?
For now that leaves me with the plan to buy OSMETA and hopefully an OcuLink adapter for the FW16 in late Q4, while hoping someone in this community will maybe design a dock as well by around then.
Josh, who also did the 8i design, said a while back that he planned to make an 8i dock. I believe he'll be able to sell a bundle for a decently good price. The only downside is that it's not open source (yet?), so no extras are possible unless he adds them. For the dock that's fine of course; I doubt I'd want anything beyond a standard 8i dock implementation that takes a standard power supply input to provide the 75 W of PCIe slot power.
For the people who want the battery on the module: think again. It was popular back in the day because it was the only way, but given the current state of affairs, if you just want battery expansion, you're better off getting PD power banks that can charge not only your laptop but everything else, and they're easy to substitute. A custom battery attached to your module, on the other hand, can be very challenging to deal with in the future when it eventually ages (as all current batteries do) and it's time to replace it; you'd end up with a battery that can only serve your laptop and is much harder to find replacements for.
Turns out the info in this post is outdated; please read the more up-to-date correction in this reply instead.
I saw people discussing Thunderbolt vs. OcuLink here. As a former Thunderbolt eGPU enthusiast, I recommend researching the "real" measured host-to-device bandwidth on Thunderbolt eGPUs. (CUDA-Z has that benchmark built in if you want to check your setup.) The marketing may tell you it's "up to 40 Gbit/s", or that it's "4 PCIe 3.0 lanes", but when you measure the real bandwidth, it somehow comes out between 2.1 and 2.5 GiB/s, which is much lower than what you would expect. This is due to a multitude of factors, from how the Thunderbolt controller is attached to the CPU to the fact that Thunderbolt apparently has pretty bad encoding overhead, with the marketed bandwidth being quoted before all of that.
Kinda stops being fun when the real bandwidth you get on Thunderbolt 3 is equivalent to a single PCIe 4.0 lane.
Since OcuLink is a more or less passive transmission of the PCIe protocol, it promises greater bandwidth than that. Even the asymmetrical “up to 120 Gbit/s” of the next generation of Thunderbolt doesn’t really look appealing when you apply the same potential halving of the real bandwidth.
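If it helps to see the gap in one place, here's a tiny back-of-the-envelope conversion (just unit math in Python; the measured 2.1-2.5 GiB/s range is the one quoted above, and the single PCIe 4.0 lane figure ignores line coding and protocol overhead):

```python
# Quick sanity check of the marketed-vs-measured gap described above.
# Unit conversions only; the measured range is the one quoted in this post.

def gbps_to_gib_s(gbps: float) -> float:
    """Convert a decimal Gbit/s link rate to binary GiB/s."""
    return gbps * 1e9 / 8 / 2**30

print(f"Marketed TB3 link:     {gbps_to_gib_s(40):.2f} GiB/s")  # ~4.66 GiB/s
print("Measured TB3 eGPU:     2.1 - 2.5 GiB/s")                 # ~18-21.5 Gbit/s
print(f"PCIe 4.0 x1 (16 GT/s): {gbps_to_gib_s(16):.2f} GiB/s")  # ~1.86 GiB/s before overhead
```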
I don't want to hijack the thread or start a flame war, but I think it's an important piece of information about Thunderbolt tech: the overhead is quite massive when you're talking about PCIe tunneling. If you have ideas on where this would be better placed, I'd appreciate it.
With older controllers that is the case; newer controllers, however, can achieve higher speeds.
Thunderbolt doesn't carry PCIe lanes, it carries PCIe data. When something says "4 PCIe 3.0 lanes", it means that's how the controller is handling it, but higher capabilities can be achieved.
The ASMedia ASM2464PD (a USB4v1 40 Gbps controller) can achieve 3.6 GiB/s (~31 Gbps) real world.
The Intel JHL9440 (a USB4v2 40 Gbps controller) can take advantage of lower overhead. AFAIK no real-world test results have been shared for it yet, but I expect it to come in around ~4 GiB/s (~34 Gbps) real world.
USB4v2 80 Gbps (aka Thunderbolt 5) should allow around double that at ~8 GiB/s (~69 Gbps) after overhead, although Intel's initial controllers will probably be limited to ~6 GiB/s (~52 Gbps) because they limit it to four PCIe 4.0 lanes.
Part of that is due to USB4v1 only allowing 128 byte payloads in PCIe tunneling. Each payload has a 22, 26, or 30 byte header. That means anywhere from 15-19% of the bandwidth is used just for headers.
Whereas with 256 byte payloads (which are supported by USB4v2 and Thunderbolt 5) only 8-10% of the bandwidth is used for headers.
That helps a bit.
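If it helps, here's a quick sketch of that header arithmetic (my reading of it, assuming each header is carried in the same tunnel bandwidth as its 128/256-byte payload; the exact framing in the USB4 spec may differ slightly):

```python
# Sketch of the PCIe-tunneling header overhead described above.
# Assumption: each header counts against the same tunnel bandwidth as its payload.

def header_overhead(payload_bytes: int, header_bytes: int) -> float:
    """Fraction of tunnel bandwidth spent on the tunneling header."""
    return header_bytes / (payload_bytes + header_bytes)

for payload in (128, 256):        # USB4v1 max payload vs USB4v2 / Thunderbolt 5 max
    for header in (22, 26, 30):   # header sizes mentioned above
        pct = header_overhead(payload, header) * 100
        print(f"{payload:3d} B payload, {header} B header -> {pct:4.1f}% spent on headers")
# 128 B payloads come out around 15-19% overhead, 256 B around 8-10%,
# roughly matching the figures above.
```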
A single PCIe 4.0 lane has a theoretical speed of 1.86 GiB/s before any overhead. In real world tests it comes in around 1.4 GiB/s (12 Gbps) after overhead.
Oh wow, thank you for the nice clarification! Yeah, my data is definitely 2-3 years old at this point; I got a desktop more or less as soon as I learned about all that. I'll educate myself further and look into new enclosure options, thank you very much!
Yep, data from 2 years ago would've led to that conclusion. That was right at a time when controller chip innovation was focused on docking stations, and there hadn't been a new controller suitable for eGPUs in several years.
A little over a year ago the ASMedia ASM2464PD hit the market and pulled ahead of the previous best eGPU controller (Intel JHL7440) by 1 GiB/s (3.6 GiB/s vs 2.6 GiB/s).
There are also several USB4v2 controllers right around the corner.
Intel has announced the JHL9440 and JHL9480 controllers. The JHL9440 is still a 40 Gbps controller but it features support for the new overhead optimizations from USB4v2 meaning it should be ~10% faster at ~4 GiB/s. The JHL9480 is an 80 Gbps controller, but is limited to PCIe 4.0 x4 meaning around ~6 GiB/s max.
Future USB4v2 80 Gbps controllers without that PCIe 4.0 x4 limitation should be capable of ~8 GiB/s. ASMedia's ASM2892 might reach that, although they haven't announced much about it.
Of course the ASM2464PD reflects the fastest that current laptops can keep up with (the controllers in the laptop limit the capabilities as well). The other controllers will need newer laptops to handle their performance.
Currently the market is in a non-optimal spot: only one eGPU enclosure (the ADT-Link UT3G) uses the ASM2464PD controller I mentioned, and it's a pretty basic enclosure. I think other brands of eGPU enclosures are waiting for the 80 Gbps controllers that are just around the corner.
Yup, that's also what I found. I'll get by on OcuLink, and once the good stuff arrives, migrate to the 80 Gbit/s stuff.
I've read that 80 Gbit/s USB4v2 has an optional mode where one 40 Gbit/s channel is repurposed from TX to RX (or vice versa), so you can get an asymmetric 120 Gbit/s host-to-device and 40 Gbit/s device-to-host. This seems super beneficial for the eGPU scenario. Of course, the host controller would have to be wired appropriately with 8 lanes of PCIe to the USB controller (if the laptop's Ryzen 7040 even uses a separate controller instead of implementing USB4 on the SoC), so an upgraded motherboard would also be required… Still, I'm getting really excited again, thank you!
Because one of the downsides of OcuLink that we won't get away from for some time is the absence of hot-plug support.