One thing that would address quite a few people’s requests and doesn’t sound too unreasonable is to expose an internal PCIe connector (it doesn’t have to use the standard physical connector; it could use a mezzanine connector like the one on the Raspberry Pi Compute Module 4, with published schematics for an adapter). This would make the board more useful down the road, once a bigger chassis is made along with a GPU module that connects to the PCIe connector and fits in said chassis. It also helps in other cases, like running the motherboard as an SBC or NAS, where you could directly attach PCIe devices like a SAS controller or high-speed networking. It would also open up another small tinkerer community for people to mess around and push what they can do with it.
I know Thunderbolt 4 / USB4 uses a few PCIe lanes, plus the WiFi card and the M.2 SSD, so I’m not sure how many PCIe lanes are left over, but on a future platform this could be an option.
The CPU has 20 PCIe lanes. The WiFi card uses one, the M.2 uses four, and Thunderbolt needs a minimum of four lanes per port to handle full TB4, so two ports bring the total to 13 lanes, plus any integrated features, and the motherboard may only be able to handle 15 lanes.
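As a rough sanity check (assuming two full-bandwidth TB4 ports at four lanes each; the port count and exact split are my guesses), the budget works out like this:

```python
# Hypothetical PCIe lane budget based on the counts above.
cpu_lanes = 20               # total PCIe lanes claimed for the CPU
wifi = 1                     # WiFi card
m2_ssd = 4                   # M.2 NVMe slot
tb4_ports = 2                # assumption: two full-bandwidth TB4 ports
tb4_lanes = 4 * tb4_ports    # 4 lanes minimum per port for full TB4

used = wifi + m2_ssd + tb4_lanes
print(f"lanes used: {used}")              # 13
print(f"lanes left: {cpu_lanes - used}")  # 7, before any integrated features
```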
A single PCIe Gen4 x4 link exposed internally would still be plenty of bandwidth for an internally connected GPU daughterboard down the road. Maybe a 3050/3050 Ti, or a low-end Quadro, in a slightly altered chassis or a future 15/17" model that might use the same motherboard.
The chipset should be able to handle two TB4 ports and WiFi at full bandwidth. Even three TB4 ports would be possible with shared bandwidth over the chipset. Future chipsets might be connected via PCIe 4.0 lanes with double the bandwidth. Maybe in the future we will also have a TB4 controller that can use PCIe 4.0 instead of 3.0; then you could have two TB4 ports with only x4 PCIe 4.0 lanes.
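Back-of-the-envelope numbers (my own rough math using nominal link rates and 128b/130b encoding, not measured figures):

```python
# Rough PCIe vs. TB4 bandwidth comparison (nominal rates, 128b/130b encoding).
def pcie_gbps(gts_per_lane, lanes):
    """Approximate usable bandwidth in Gbps for a PCIe link."""
    return gts_per_lane * lanes * (128 / 130)

gen3_x4 = pcie_gbps(8, 4)    # PCIe 3.0: 8 GT/s per lane  -> ~31.5 Gbps
gen4_x4 = pcie_gbps(16, 4)   # PCIe 4.0: 16 GT/s per lane -> ~63.0 Gbps
tb4_port = 40                # Gbps total link rate per TB4 port

print(f"PCIe 3.0 x4: {gen3_x4:.1f} Gbps")
print(f"PCIe 4.0 x4: {gen4_x4:.1f} Gbps")
print(f"two TB4 ports: up to {2 * tb4_port} Gbps, so even x4 Gen4 is shared")
```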
Nvidia and AMD would never allow their GPUs to be connected with anything less than 8 PCIe lanes. Every laptop has to go through a complex certification process to be allowed to use their GPUs. This is to make sure that the GPU runs the way it is meant to and is not bottlenecked in any way by insufficient cooling, power stages, or bandwidth.
Sounds exactly like the already existing but barely used MXM (Mobile PCI Express Module) GPUs.
Nvidia and AMD have allowed Thunderbolt solutions that ‘work’ with 20 Gbps / 40 Gbps of bandwidth, depending on how many lanes the laptop maker exposed, including solutions with soldered GPUs, e.g. the Lenovo Thunderbolt Graphics Dock (G0A10170UL) with a GTX 1050, and the Sonnet Puck. PCIe Gen4 x4 internally is equivalent in bandwidth to PCIe Gen3 x8, which is plenty for any lower-end / midrange laptop GPU.
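The Gen4 x4 = Gen3 x8 equivalence is just the doubled per-lane rate; a quick check with the same nominal-rate assumptions as above:

```python
# PCIe 4.0 runs at 16 GT/s per lane vs. 8 GT/s for 3.0,
# so half the lanes carry the same raw bandwidth.
assert 16 * 4 == 8 * 8                      # Gen4 x4 == Gen3 x8 in GT/s
print(16 * 4 * 128 / 130, "Gbps usable")    # ~63 Gbps either way
```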
Agreed on MXM, except that unless you have a chonker designed for it, that’s a very thick solution that the entire chassis / cooling system would have to support on the card, and it’s more single-purpose.
A PCIe cable solution to a daughterboard would allow more flexibility in placement within the chassis, and allow for a wider variety of daughterboards, like maybe a proper bottom-mount docking station someday. But that’s just wishful thinking. I’d take any option they feel like offering.
Addendum: it would also be necessary to source low-end MXM cards, due to the thermal constraints of the laptop. I could maybe see a 35W version of the 3050 internally; a 3080 ain’t happening. For such a weird SKU, proprietary is probably the necessary route, with a custom daughterboard maybe being easier to implement than adhering to MXM.
eGPUs and internal GPUs follow different rules. Internal GPUs have strict rules on how you have to implement them; for Nvidia GPUs this is called the Green Light Program. Some low-end GPUs like an MX450 might be allowed to be connected with only x4 PCIe 4.0 lanes, but mid-range GPUs will most likely need 8 lanes.
Maybe a PCIe riser cable could be used with the MXM port.
Not sure if a 35W GPU would make any sense. The entire sub-50W space will most likely be replaced by iGPUs inside processors soon. Making a workstation/gaming laptop with GPUs up to a 3060 at 100-120W TDP would make the most sense price- and performance-wise. A 3060 at 120W TDP has more performance than a 3080 at 90W TDP.
Edit: With enough TDP and power headroom it would also be possible to upgrade the MXM module to a future GPU. Given the additional cost of an MXM variant (daughterboard, PCIe slot, space), it would probably make more sense to go with mid-range GPUs.
Another edit: Nvidia GPUs in laptops have to use a fixed reference design from Nvidia, to my knowledge. There are different reference designs for different TDP classes. If there isn’t a design with fewer than 8 PCIe lanes, then you won’t be able to use that GPU. With AMD it’s not very different, I think.
I think I made one mistake in what I said regarding the TB4 ports. I looked at a few more Tiger Lake datasheets, and from the looks of it you don’t need to convert any PCIe lanes to TB4: 28W Tiger Lake CPUs have a controller integrated inside the processor that can offer a total of 4 TB4 ports. The only thing you need is a retimer chip on the PCB (I thought the retimer chip on the Framework laptop that I saw in some blurry photos was actually a PCIe-to-TB controller).

The 28W Tiger Lake processors also only offer a total of 4 PCIe 4.0 lanes, which you can pretty much only use for an M.2 slot. The 45W Tiger Lake H processors, on the other hand, offer a total of 20 PCIe lanes. In other words, you can use all 20 PCIe 4.0 lanes and you still have 4 TB4 ports on top. That would make it possible to have a dGPU, 3 M.2 slots (PCIe 4.0), and 4 TB4 ports on one laptop. In fact, one of the Dell XPS laptops has a configuration similar to this.
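Spelling out that Tiger Lake H budget (the x8 dGPU link is my assumption for how the 20 lanes would be split):

```python
# Hypothetical lane allocation for a 45W Tiger Lake H design as described above.
total_cpu_lanes = 20     # PCIe 4.0 lanes from the H-series CPU
dgpu = 8                 # assumption: dGPU on a Gen4 x8 link
m2_slots = 3 * 4         # three M.2 slots at x4 each
tb4_ports = 4            # integrated TB4, consumes no CPU PCIe lanes

assert dgpu + m2_slots == total_cpu_lanes
print(f"dGPU x{dgpu} + 3 M.2 x4 = {dgpu + m2_slots} lanes, "
      f"plus {tb4_ports} TB4 ports on top")
```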
I just thought it’d be a neat concept for the next generation of boards, like Intel’s 12th gen, and more flexible than having an MXM connector on the motherboard. After reading about the number of PCIe lanes available, I don’t think it’s quite possible on the current 11th-gen Intel CPUs without some major changes to the board itself.
I had this thought too: I was thinking it would be fun to replace the default 3.5mm jack with something else. Alas, I don’t even have a USB 2.0 interface internally to work with!
Having just some basic USB pads on the mobo would be an easy interface for most projects that can conceivably fit within the spatial confines, and simpler than PCIe to develop hardware for as well.
28W Tiger Lake CPUs should offer 2x USB 2.0 and 2x USB 3.1 Type-A ports from the PCH, as long as they are not used for something else on the mainboard. Offering a few USB pads might be possible in future revisions.