Hello, I just saw the LTT video and I'm a little ashamed I hadn't heard of this company sooner. After watching it I had so many questions and ideas floating around my head that I need to get at least some of them off my chest.
First off, in the video Linus said the laptop only had USB 3.2, which for me was a drawback, but when I checked the product page it lists USB4 on the Type-C ports! This has me a little confused: why isn't the storage module Thunderbolt/USB4 compatible? Is it because there's no space for a TB controller inside the module? If so, would it be possible to put that controller inside the laptop instead, with an extra input/pin on the module to signal to the laptop when to use it? Also, how do the PCIe lanes get divided between the ports, since you could have four USB4 ports plugged in or only one? Are they always split so each port gets PCIe x2 (or even x1) all the time, or is there a controller negotiating between the laptop and the modules?
I like the I/O modularity, but I feel it could be more expandable if the modules sat flush against each other instead of having a divider between them. That way, modules that need more space could fit and even use both ports at the same time (there's a common USB hub design for Macs that uses both USB ports on the side). It would also give extra room for larger controllers that can actually drive multiple ports. It could even allow a medium module, like a DVI port, alongside a tiny module with USB-C or similar, so both ports are still being utilised. That would let developers work with four different sizes (or even more) for whatever needs they could think of.
A bonus of having the ports right next to each other is that larger SSDs could potentially fit too. A standard NVMe drive might need more room than the two module bays offer at their current size, so making them a little wider (or adding a third module slot, or a tucked-away recess in the chassis specifically for the added length) could allow standard SSD compatibility.
Lastly, being able to place ports in numerous different locations around the chassis would be great for customisability. Obviously not all of them could work at the same time, but if the user is told this and a controller can disable the slots holding blank modules, you could, for example, have all four modules on the left or even the back, so you can use all four ports without the cables getting in the way of your external mouse.
Thanks for coming to my TED talk, and sorry for the long post; looking forward to the discussion!
P.S. It’s 6am and I haven’t slept so I apologise if I’m not making any sense…
So I can't speak to some of the things you mentioned, but while Thunderbolt isn't officially supported, it kinda works anyway. My understanding is they are simply waiting for official certification, but in the meantime you can essentially use all four as Thunderbolt ports (I think there is some sort of controller for each side, but don't quote me on that).
USB4 is the open-platform version of Thunderbolt 3, which was developed by Intel. They don't need to have it 'certified' as long as it's USB4 compliant.
That's honestly amazing. Every other laptop has had to decide how many Thunderbolt/USB4 ports to offer (more specifically, how many PCIe lanes and how much bandwidth each port gets), but Framework has supposedly solved that.
@Robbie_S Nice… USB4 is all I can find listed on the product pages… has anyone seen what speed these ports are actually running at? My naive understanding is USB4 can be 20 Gbps, but if it's 40 Gbps then TB4 compliance is likely.
This thread looked into it, and it looks like the laptop has 40 Gbps per side, which can be funnelled through one port at 40 Gbps or split across both ports in some mix. That seems roughly consistent with the sort of performance folks are getting out of eGPUs, but I wouldn't mind explicit confirmation from someone who has a laptop to run tests on.
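To make that sharing concrete, here's a trivial sketch of how a per-side budget could divide up. This is purely my assumption that the link is shared dynamically rather than hard-partitioned per port; the function name and numbers are illustrative, not anything Framework has published:

```python
# Toy model of one side's nominal 40 Gbps budget across its two ports.
# Assumption: bandwidth is shared dynamically, not hard-split 20/20.
SIDE_GBPS = 40

def per_port_gbps(busy_ports: int) -> float:
    """Rough per-port rate when `busy_ports` ports are saturating the link."""
    return SIDE_GBPS / max(busy_ports, 1)

print(per_port_gbps(1))  # 40.0 -> a lone eGPU can see the whole side
print(per_port_gbps(2))  # 20.0 -> two busy devices share it evenly
```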
Oh dang… long term, if the company succeeds, it's also completely feasible that a mainboard revision could increase speeds or change the topology for more gamer-centric needs. But even so, if one side can sport 40 Gbps while the other supports multiple peripherals… that doesn't sound like a big deal, since there's really not much need for two eGPUs at once IMO.
Well, now you've piqued my interest @ReUhssurance. I hadn't even considered the possibility (not that I'd have any actual use case for it). I need to try two eGPUs now. BRB
@Frosty I ran the tests on the laptop's main screen. I did have one of the GPUs connected to an external screen, but I didn't use it at all during the tests.
@FaultedBeing I've no experience with eGPUs personally, but everything I've read says that driving the mainboard screen performs worse than an external display. Is that not always the case?
Given otherwise identical setups it is always the case, and my layman's understanding is there are two primary reasons. Real compsci people, please correct away; I'd love to learn more details.
First, with an eGPU the cable has to carry all the data between the CPU and the GPU. When the output goes to the laptop's main screen, that cable has to handle all the data from the CPU to the GPU to render each frame, and then the rendered frames have to travel back over the same cable to the mainboard to be displayed. Effectively, each frame's data travels CPU-to-GPU for rendering and then GPU-back-to-CPU for display, roughly halving the usable bandwidth. Half the bandwidth means fewer frames.
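A quick back-of-envelope version of that return-path cost. All the figures here are illustrative assumptions (real usable PCIe throughput over TB3/USB4 is well below the nominal link rate), so treat this as a sketch of the reasoning, not a measurement:

```python
# Back-of-envelope: what the return path alone costs over a
# Thunderbolt-class link. All figures are illustrative assumptions.
LINK_GBPS = 40                  # nominal link rate; real usable PCIe
                                # throughput over TB3/USB4 is lower
FRAME_BYTES = 1920 * 1080 * 4   # one uncompressed 1080p RGBA frame

frame_gbit = FRAME_BYTES * 8 / 1e9
fps_cap_return_path = LINK_GBPS / frame_gbit

print(f"one 1080p frame ~ {frame_gbit:.3f} Gbit")
print(f"return path alone caps near {fps_cap_return_path:.0f} fps")
# ...and that return traffic shares the link with the CPU -> GPU draw
# data, which is where the rough "halving" above comes from.
```

Even in this optimistic model the return traffic eats a meaningful slice of the link, and in practice the usable rate is lower and the CPU-to-GPU traffic is far from negligible.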
Second, there's a significant delay in communicating data over the cable, partly from the cable itself and partly from Thunderbolt protocol overhead. Effectively, your GPU receives data at a higher latency than a direct PCIe connection would give it. If you output from the eGPU straight to your monitors, that latency only applies once. If you send the frames back from the eGPU to the laptop, the round trip applies a second time, further compounding the timing issue (and with real-time rendering, timing problems mean missed or dropped frames).
My suspicion is that with both eGPUs feeding the mainboard display, this second factor explains the even further degraded performance. I have zero clue what path the data takes when two eGPUs feed one laptop display, but the CPU is likely killing itself trying to reconcile data coming from everywhere, with the delays stacking on top… I'm not even sure the pipeline is streamlined to any degree; for all I know, both eGPUs might be feeding identical information that the CPU then has to spend time reconciling.