I don’t know about the OcuLink stuff, but it would be lovely to see a battery extension which also had some extra I/O on the back.
Indeed, these threads in this community are the reason why I am waiting; I would prefer to buy from someone here.
For the people who want the battery on the module, think again. It was popular back in the day because it was the only way, but given the current state of affairs, if you just want battery expansion, you’re better off getting PD power banks that can charge not only your laptop but everything else too, and they’re easy to substitute. On the other hand, a custom battery attached to your module can be very challenging to deal with in the future: when it eventually ages, as all current batteries do, and it’s time to replace it, you’d have a battery that can only serve your laptop and is much harder to find replacements for.
Turns out the info in this post is outdated; please read the more up-to-date correction in this reply instead.
I saw people here discussing Thunderbolt vs. OcuLink. As a former Thunderbolt eGPU enthusiast, I recommend researching the “real” measured host-to-device bandwidth on Thunderbolt eGPUs. (CUDA-Z has that benchmark built in if you want to check your setup.) The marketing may tell you it’s “up to 40 Gbit/s”, or that it’s “4 PCIe 3.0 lanes”, but when you measure the real bandwidth, it somehow lands between 2.1 and 2.5 GiB/s, which is much lower than what you would expect. This is due to a multitude of factors, from how the Thunderbolt controller is attached to the CPU to the fact that Thunderbolt apparently has pretty bad encoding overhead, with the marketed bandwidth being quoted before all of that.
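If you want to reproduce that measurement without CUDA-Z, here’s a minimal sketch of the same host-to-device copy benchmark. It uses PyTorch (my choice, not something required by CUDA-Z) purely as a convenient way to drive CUDA from Python; the buffer size and repetition count are arbitrary:

```python
import torch

# Pinned host buffer and a matching device buffer (256 MiB each).
size_bytes = 256 * 2**20
host = torch.empty(size_bytes, dtype=torch.uint8, pin_memory=True)
dev = torch.empty(size_bytes, dtype=torch.uint8, device="cuda")

# Warm-up copy, then time repeated host-to-device copies with CUDA events.
dev.copy_(host, non_blocking=True)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
reps = 20
start.record()
for _ in range(reps):
    dev.copy_(host, non_blocking=True)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000  # elapsed_time() is in milliseconds
print(f"host-to-device: {reps * size_bytes / seconds / 2**30:.2f} GiB/s")
```

On a Thunderbolt 3 eGPU this is where you’d see the 2.1–2.5 GiB/s figure instead of the roughly 4.7 GiB/s that “40 Gbit/s” would naively suggest.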
Kinda stops being fun when the real bandwidth you get on Thunderbolt 3 is equivalent to a single PCIe 4.0 lane.
Since OcuLink is a more or less passive transmission of the PCIe protocol, it promises greater bandwidth than that. Even the asymmetrical “up to 120 Gbit/s” of the next generation of Thunderbolt doesn’t really look appealing when you apply the same potential halving of the real bandwidth.
I don’t want to hijack the thread or start a flamewar, but I think it’s an important piece of information on Thunderbolt tech: the overhead is quite massive when you’re talking about PCIe tunneling. If you have ideas where to better place this, I’d appreciate it.
With older controllers that is the case; however, newer controllers can achieve higher speeds.
Thunderbolt doesn’t carry PCIe lanes; it carries PCIe data. When something says “4 PCIe 3.0 lanes”, it means that is how the controller treats it, but higher throughput can be achieved.
The ASMedia ASM2464PD (a USB4v1 40 Gbps controller) can achieve 3.6 GiB/s (~31 Gbps) real world.
The Intel JHL9440 (a USB4v2 40 Gbps controller) can take advantage of lower overhead. Afaik no real-world test results have been shared for it yet, but I expect it to be around ~4 GiB/s (~34 Gbps) real world.
USB4v2 80 Gbps (aka Thunderbolt 5) should allow around double that at ~8 GiB/s (~69 Gbps) after overhead, although Intel’s initial controllers will probably be limited to ~6 GiB/s (~52 Gbps) due to limiting it to four PCIe 4.0 lanes.
Part of that is due to USB4v1 only allowing 128 byte payloads in PCIe tunneling. Each payload has a 22, 26, or 30 byte header. That means anywhere from 15-19% of the bandwidth is used just for headers.
Whereas with 256 byte payloads (which are supported by USB4v2 and Thunderbolt 5) only 8-10% of the bandwidth is used for headers.
That helps a bit.
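To make the header arithmetic above concrete, here’s a quick back-of-the-envelope calculation using only the payload and header sizes already quoted:

```python
# Share of tunneled bandwidth consumed by the 22/26/30-byte headers,
# for USB4v1 (128-byte payloads) vs USB4v2 / Thunderbolt 5 (256-byte).
for payload in (128, 256):
    for header in (22, 26, 30):
        print(f"{payload}B payload + {header}B header: "
              f"{header / (payload + header):.1%} header overhead")
```

That prints roughly 15–19% for 128-byte payloads and 8–10% for 256-byte ones, matching the ranges above.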
A single PCIe 4.0 lane has a theoretical speed of 1.86 GiB/s before any overhead. In real world tests it comes in around 1.4 GiB/s (12 Gbps) after overhead.
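For reference, that 1.86 GiB/s figure falls straight out of the raw signaling rate (16 GT/s per lane, one bit per transfer), before 128b/130b encoding and any protocol overhead:

```python
# PCIe 4.0: 16 GT/s per lane -> divide by 8 bits/byte for the raw byte
# rate, then convert decimal bytes to GiB. Encoding and protocol
# overhead are deliberately ignored here ("before any overhead").
print(f"{16e9 / 8 / 2**30:.2f} GiB/s per lane")  # ~1.86
```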
Oh wow, thank you for the nice clarification! Yeah, my data is definitely 2–3 years old at this point; I got a desktop more or less as soon as I learned about all that. I will educate myself further and look into new enclosure options, thank you very much!
Yep, data from 2 years ago would’ve led to that conclusion. That was right at a time when controller chip innovation was focused on docking stations, and there hadn’t been a new controller suitable for eGPUs in several years.
A little over a year ago the ASMedia ASM2464PD hit the market and pulled ahead of the previous best eGPU controller (Intel JHL7440) by 1 GiB/s (3.6 GiB/s vs 2.6 GiB/s).
There are also several USB4v2 controllers right around the corner.
Intel has announced the JHL9440 and JHL9480 controllers. The JHL9440 is still a 40 Gbps controller but it features support for the new overhead optimizations from USB4v2 meaning it should be ~10% faster at ~4 GiB/s. The JHL9480 is an 80 Gbps controller, but is limited to PCIe 4.0 x4 meaning around ~6 GiB/s max.
Future USB4v2 80 Gbps controllers without that PCIe 4.0 x4 limitation should be capable of ~8 GiB/s. ASMedia’s ASM2892 might reach that, although they haven’t announced much about it yet.
Of course, the ASM2464PD reflects the fastest that current laptops can keep up with (the controller on the laptop side limits the capabilities as well). The other controllers will need newer laptops to make use of their performance.
Currently the market is in a non-optimal spot where only one eGPU enclosure (ADT-Link UT3G) uses the ASM2464PD controller I mentioned and it is a pretty basic enclosure. I think other brands of eGPU enclosures are waiting for the 80 Gbps controllers that are just around the corner.
Yup, that’s also what I found. I’ll get by on OcuLink, and once the good stuff arrives, migrate to the 80 Gbit/s stuff.
I’ve read that 80 Gbit/s USB4v2 has an optional mode where one 40 Gbit/s channel is repurposed from TX to RX or vice versa, so that you can get an asymmetric 120 Gbit/s host-to-device and 40 Gbit/s device-to-host. This seems super beneficial for the eGPU scenario. Of course, the host controller would have to be wired appropriately, with 8 lanes of PCIe to the USB controller (if a laptop Ryzen 7040 even uses a separate controller instead of implementing USB4 on the SoC), so an upgraded motherboard would also be required… Still, I’m getting really excited again, thank you!
’Cause one of the downsides of OcuLink that we won’t get away from for some time is the absence of hot-plug support.