Idk about that; the new AMD and even the 13th-gen Intel chips have quite a lot of horsepower these days.
The Intels all have full TB4 ports (USB4, except all the optional stuff is mandatory, plus a cert and marketing name from Intel), though each side shares a 40Gbps uplink to the CPU. On the AMD side, the chipset supports two full-featured USB4 ports with a 40Gbps uplink each, which manufacturers could use (or not, if they want to do some artificial segmentation like Lenovo right now). According to the spec sheet, on the AMD Frameworks the top port on each side is going to be USB4; there's no mention of bandwidth, but they'd have to deliberately mess with the ports to not get 40Gbps. Getting a TB cert for an AMD chipset, however, may be less likely.
I am interested in USB4 as well. Hopefully it has the full 40Gbps. It would let me connect all my stuff to my eGPU easily and power the laptop with just one cable.
The bay has PCIe Gen4 x8, so that's half the throughput of a desktop motherboard, but roughly four times the usable bandwidth a 40Gbps USB4 eGPU gets. For use cases where bus throughput isn't the bottleneck, that should be able to reach desktop performance levels, as long as the CPU can keep up. Load times could be up to doubled, though.
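A quick back-of-the-envelope check of those ratios (a minimal Python sketch; the figures are nominal spec values, and the USB4 line assumes the PCIe tunnel is fed by a PCIe 3.0 x4 uplink, as on typical TB3/USB4 hosts):

```python
# Nominal link throughput, ignoring protocol overhead beyond line encoding.
# All figures are spec values; real-world numbers will be lower.

def pcie_gbps(gts_per_lane: float, lanes: int, encoding: float) -> float:
    """Usable Gbps: transfer rate per lane x lane count x encoding efficiency."""
    return gts_per_lane * lanes * encoding

ENC = 128 / 130  # PCIe 3.0 and 4.0 both use 128b/130b line encoding

desktop_x16 = pcie_gbps(16, 16, ENC)  # PCIe 4.0 x16: ~252 Gbps (~31.5 GB/s)
bay_x8      = pcie_gbps(16, 8,  ENC)  # PCIe 4.0 x8:  ~126 Gbps (~15.8 GB/s)
usb4_tunnel = pcie_gbps(8,  4,  ENC)  # assumed PCIe 3.0 x4 uplink: ~31.5 Gbps (~3.9 GB/s)

print(f"bay vs desktop x16: {bay_x8 / desktop_x16:.1f}x")  # 0.5x -> half the throughput
print(f"bay vs USB4 eGPU:   {bay_x8 / usb4_tunnel:.1f}x")  # ~4.0x
```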
The bay also has a USB 2 pinout, so you can hook up all your input devices like it's a dock or something.
This could allow for building a high-ish-end gaming rig that's also nice and portable for non-gaming uses. Even with a reboot cycle, which is what, 6 seconds nowadays, that's an impressive range of use.
Y'know, on reflection, my attitude's shifted. Any kind of handoff board that just exposes a full-width slot, even if it only has 8 lanes at PCIe Gen4, would bring something novel to the table. It could be used as an ersatz eGPU module (requiring a separate power supply), but since it's really just PCIe Gen4 x8, the user could employ any PCIe card they happen to want, quite easily.
So yeah, that could be a thing. Maybe not a great thing, though; I'd expect an eGPU case over a 40Gbps USB4 or TB4 connection to give pretty comparable results.
Tell you what though, if I ever stumbled across some guy with a 16" laptop that has a desktop GPU sticking up from a rear-mounted daughterboard with an ATX power supply humming on the table beside it… I’d probably want to shake his hand and take a photo, lol
It sounds like you want OCuLink instead of Thunderbolt. GPD is making the G1 eGPU with that, and there's a community member who made the "One Dock". OCuLink is effectively PCIe over an SAS connector; people are wiring it up with adapter cards in M.2 slots, and it could be a port on a mostly blank / battery / etc. module in the GPU slot.
We just discussed the same idea for the 13". It seems difficult/impossible to do there, but with the 16" it seems trivial, as you have the PCIe connection.
In fact, for an expansion bay module with nothing in it, or with a battery, it would be really stupid not to include an OCuLink connection, since the data lines are unused and breaking PCIe out to OCuLink is very simple and cheap.
It seems like the Expansion Bay system supports reverse power delivery to the mainboard and CPU (an old battery could go in as a built-in power supply), so maybe we could make a module that supports a PCIe connection and reverse power at the same time, just like TB4 but with much more bandwidth.
Yes, this is what I've been pondering around here as well. I'd like to have a proper docking station, maybe using OCuLink, that could support a full-fledged GPU. It should have the full lanes available in any case, and connectors that can take a beating.
My use case is that 90% of the time I work at a "desktop", but occasionally I have to go on the road. I use external displays, keyboards, and the like all the time, so having the GPU processing in the docking station has real value.
I'm seriously considering taking on a project to create such a docking station; I have the engineering skills and manufacturing partners. What I'd need is some indication that it's a worthwhile project with lots of potential units sold. So feel free to like this post!
Why not a full-size PCIe x16 slot with 8 lanes wired, along the back edge? You would need an external power supply to power a desktop GPU, and you'd need to remove the I/O bracket. It should work; you may need a short PCIe extension cable if the heights of the GPU and socket don't match.
Is it worth it to get an external GPU to hook up to the laptop? I'm looking for something I can hook up and game in 4K on when home, but also just carry around for my normal work.
Possible, yes; "worth it" is a personal choice. You can check the prices of alternative GPUs, then factor in the performance hit over Thunderbolt 3's 40Gbps and the added cost of whatever GPU dock you choose.
There's edgy OCuLink stuff around you can play with already, and it's quite cheap (because OCuLink is a pretty simple standard compared to Thunderbolt). If it doesn't work out, you'd find yourself buying a TB3 GPU dock and consigning the OCuLink equipment to your box of "technical curiosities", always fun stuff to discuss with geeky visitors.
It’s just a pile of cash is all. But you totally can, and despite inefficiencies it shouldn’t end up a total loss. Worst case? You sell it and recoup some expenses.
Edit:
I thought about doing that, but I'm taking the 7600S module instead, because pull-out GPUs are just awesome and I'm only gaming very occasionally anyway. Being able to easily use it on the go is a plus. If it had been me 6 years ago, with more free time, I'd be doing the external system, or even building a full desktop as well, but I couldn't afford it in those days lol.
I'd very much like to have a "classic" docking station in the first place, like many (business) laptops used to have. I have two of these for my current laptop at two different locations, both with an assortment of accessories attached (3D mouse, 2D mouse, external monitor, Ethernet, power). Just drop the laptop on there, no fiddling around with USB connectors. If it could house a graphics card externally, that would be an additional bonus.
That would be true, but eGPU docks often include a few USB ports and possibly Ethernet so they can be marketed as docks. To… mixed results.
I think I found an obscure one with dual cables, specifically to give the GPU maximum bandwidth. It's just not quite a mainstream market, so it attracts limited investment, IMO.
It is a product that suits the Framework particularly well though, so it keeps coming up here. Really cool seeing some of what’s going on, too.
For the record, I don't own a Framework, but I use a 7940HS at 90W (realistically 75W: the stock cooler won't sustain 90W for more than 45-60 seconds, and I lack the time to modify the stock heatsink, make my own, or put an AIO on the vapor-chamber side opposite the heatsink fin stack) with a 6900 XT paired over OCuLink 4i 4.0 (PCIe x4), driving a 4K 120Hz TV, and more often than not my 6900 XT bottlenecks my 7940HS in the odd mix of old and new games I play regularly.
It's basically a 7700 non-X equivalent, with a cache difference if I recall correctly. I'm also currently using poorly timed 5600MHz RAM.
I see no reason why someone can't design a breakout to OCuLink from the two GPU connectors; the pinout is available. There are already x8 OCuLink GPU docks out there (I own one), though high-quality cables are lacking.
It's up to the userbase's determination.
Well, I am down for the FW16 for all those reasons. The only competition I wanted to wait for was the XPS 16, and yes, it is extremely beautiful and much smaller with the same screen size, but damn, I am not paying 2000+ euros in 2024 to be stuck with an RTX 4070 with 8GB of VRAM and a 60W power limit. Sorry, that train has left.
So I hope to use OCuLink for true desktop performance with the FW16 ASAP ;).
I think an OCuLink x8 bay to replace the discrete GPU bay would be ideal for docking and getting the most out of any GPU you could attach. PCIe 4.0 at x8 (~16 GB/s) blows away USB4/TB3's 40 Gbps (5 GB/s), which isn't actually 40 Gbps for PCIe traffic because of overhead and encapsulation (more like 32 Gbps). Remember Gbps != GB/s, they're very different: 8 bits = 1 byte.
The real issues are firmware and maximum OCuLink cable length, to ensure PCIe 4.0 is usable and doesn't fall back to PCIe 3.0 (still fast at 8 GB/s for x8, but somewhat a waste). Even PCIe riser cables were having issues with PCIe 4.0, so you should expect OCuLink x8 to struggle to hold PCIe 4.0 speeds without proper shielding etc. But at PCIe 4.0 x8 you should get over 90% of the performance of any external GPU you can connect. Rough numbers are sketched below.
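To make the unit conversions concrete, here's a minimal sketch (the ~32 Gbps tunnel figure is the rough post-overhead value mentioned above, not a measurement):

```python
# Gbps (gigabits per second) vs GB/s (gigabytes per second): divide by 8.
links_gbps = {
    "USB4/TB3 nominal":       40,   # the marketing number
    "USB4/TB3 PCIe tunnel":   32,   # rough usable rate after overhead/encapsulation
    "PCIe 3.0 x8 (fallback)": 64,   # what you'd drop to if the link falls back to Gen3
    "PCIe 4.0 x8 (target)":  128,   # the full expansion bay link
}

for name, gbps in links_gbps.items():
    print(f"{name:24s} {gbps:4d} Gbps = {gbps / 8:5.1f} GB/s")
```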