Why USB-C instead of PCIe?

I mean, wouldn’t it be simpler to have just PCIe on the motherboard and the expansion cards, so the expansion uses a more standard connector (PCIe) with no in-between translations?
Just make some sort of PCIe x1 or x2 with a power connector on the side, and make it an expansion standard, so if something needs more power than PCIe provides, it can take it from the power connector.

Also, doesn’t the current configuration create some overhead in some cases? I don’t know, but I imagine the motherboard translating to USB and then to whatever expansion card you put in.

I was also thinking about the afterlife of the motherboard: if it had 4 more PCIe slots (instead of USB), it would be so useful. Want to make a crazy NAS? Just add a lot of SATA cards. Want more LAN? Throw it in. Need more USB for your desktop? No problem.

I made a post in the LTT forum a while ago; can I put the link here so I don’t have to duplicate it?

So you’d need a USB controller on the module if you wanted USB? Or hell, a freaking GPU if you want DisplayPort?

The whole point of USB-C is that it can by design carry all kinds of stuff: it can do power (PD), USB (obviously), DisplayPort (and from there HDMI via a passive converter), and all of that IO can come directly off the SoC.

What you are describing sounds more like a desktop mainboard.

Anyway, the cheaply accessible PCIe is in the NVMe slot and the Wi-Fi slot, which gives you 4+1 lanes. Getting the other 8 from the Thunderbolt ports is a lot more expensive and bulky.

So just slapping an HBA (gives you as many SATA ports as you want) into the M.2 slot and running a 2.5 Gbit USB network card is probably your cheapest option. Though just using a desktop or server mainboard is going to be a lot less painful, especially since you’ll need a PSU for the drives anyway.


I didn’t think about the display, but good point, that could be cheaper.

I am thinking about the penalty you get from connecting through USB compared to PCIe, and about all the devices that can connect natively to it.

But if the USB chip is on the motherboard, moving it to the expansion card is not unthinkable. Thinking about all the things that can be attached to PCIe: if I have a USB chip on the card, I don’t need a GPU to add a DisplayPort.

The expansion cards could also fit more things; the space is there. In the past laptops had more ports available; they were slowly removed, so now you connect a USB hub/dock for things that used to be built in.

And if the CPU has embedded USB/Thunderbolt, embed it in the laptop; I can’t use it as anything else anyway without adapting/translating it.

I am just curious; it’s not unthinkable, it’s not impossible. They have done a bigger brother of that for the GPU, and it has more expansion options because it’s PCIe. There would also be more use cases for the afterlife.

Talking about desktops and laptops, I feel that ATX desktop PCs are dying. Laptops and NUCs are getting so powerful now that it doesn’t make sense anymore to have a gigantic box eating so much energy. So if PCs are going small and sharing hardware, and Framework suggests that my laptop is gonna have a second life as my desktop or server, it would be nice to have more PCIe available.

The idea of having an expansion card that converted the USB port to PCIe was mentioned here before, and others brought up the fact that USB is rated for more plug/unplugs before wearing out. PCIe being an internal connector means that it isn’t designed for you to be swapping out the things attached to it all the time, so USB-C allows you to change expansion cards more often than PCIe.


I’m not sure there is more stuff you can connect natively via PCIe than USB (there is a lot of both), but most importantly, they are not the same stuff.

Pretty much everything is completely or mostly on the CPU. Plus, stuff like USB-PD is shared across all ports; that would get really complicated if it were handled by multiple different controllers.

Have you seen the development for the expansion cards, especially stuff like the USB-A + USB-C one? You may be massively overestimating how much space is in those things; fitting even a full-blown PCIe USB controller in one of those could be quite challenging, not to mention all the other drawbacks like power consumption, cost, and complexity.

If you look at semi-modern laptop schematics you’ll find the built-in USB ports are already quite utilized (webcam, touch, touchpad, fingerprint reader …). Manufacturers sometimes even go off-spec a little if they run out, like Lenovo using the 3.0 and 2.0 sections of a single port for different things on some models. If you do not use certain things, there are connectors on the board that carry USB you can use; Framework has published a pinout for most connectors on their GitHub. While I would prefer having the schematic, that is pretty good too. Thunderbolt is also mostly on the CPU.

It is not unthinkable, but while it would make a pretty neat SBC, it would make for a pretty terrible laptop. USB-C is pretty much made for this: it can hotplug (which PCIe only kinda can, and only with specific hardware and software support) and carry a wide variety of things, especially things you’d actually use on a laptop. Putting separate controllers on each port would definitely have its pros, but it would send cost and complexity through the roof. Then there is that tiny little issue of power consumption: for a NAS no one cares about a watt or 2 extra of idle power, but putting controllers on each port would absolutely murder battery life, even with the most gracefully sleeping PCIe USB controllers and whatnot.

The GPU module there is a bit of a special case, and you’ll notice the 16 inch still has USB for the regular stuff.

Mobile stuff is indeed getting more and more powerful, but desktop stuff is getting even more powerfuller XD. Whether something makes sense or not is a bit of an individual case; I can very much believe that it does not make sense for you.

Anyway, if you slap an HBA in the M.2 slot and get a solid USB NIC, the current board already makes a pretty sweet NAS, especially if you don’t want to go 10 gig.

And if you want a board with lots of little PCIe slots, you can get those mining boards with tons of PCIe 1x slots really cheap right now.

While I personally would like access to more PCIe on the board, the connectors being USB-C makes a lot more sense for a laptop.


Adrian_Joachim: You are right, the plug-and-play thing is not a thing with PCIe; that is a problem, and making it plug and play would be more engineering work. Looking from the laptop side, USB is the best option. What I mentioned goes more toward the second life or DIY, which is not the main point.

It’s not a thing by default; it is a thing, though it is a pain in both software and hardware. In the server space it’s more of a thing, though these days a server is more likely to be replaced than to have parts replaced while it’s running. Hell, there even was hot-plug RAM at some point, imagine that.

Second life as a mid-tier NAS is still very much on the table though. Putting an HBA on the NVMe slot would give you enough bandwidth (~4 GB/s for a 3.0 one or ~2 GB/s for a 2.0) to saturate a 2.5 or 5 Gbit USB network card (or hell, one of those Thunderbolt 10 Gbit ones), which is probably enough for most people:

  • $10–20 for an M.2 to PCIe adapter
  • $50–100 for an HBA with 8 ports (if you need more than 8 you can use SAS expanders)
  • $20 for a 2.5 Gbit USB-C network card
  • $? for a case
  • $? for an ATX power supply
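As a quick back-of-the-envelope sanity check (my own rough per-lane figures, not exact spec numbers), the M.2 link comfortably outruns the NIC:

```python
# Rough bandwidth check for the HBA-in-the-M.2-slot NAS idea.
# Approximate usable GB/s per PCIe lane, after encoding overhead.
PCIE_PER_LANE = {3.0: 0.985, 2.0: 0.5}

def link_gbps(gen, lanes):
    """Usable bandwidth of a PCIe link in gigabits per second."""
    return PCIE_PER_LANE[gen] * lanes * 8  # GB/s -> Gbit/s

m2_gen3_x4 = link_gbps(3.0, 4)   # ~31.5 Gbit/s uplink
nic = 2.5                        # 2.5GbE USB NIC

print(f"M.2 gen3 x4: {m2_gen3_x4:.1f} Gbit/s, NIC: {nic} Gbit/s")
print(f"Headroom: {m2_gen3_x4 / nic:.1f}x")
```

So even a gen2 x4 link (~16 Gbit/s) leaves plenty of headroom over a 2.5 or 5 Gbit NIC.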

Also, if you just need more PCIe slots but not necessarily the bandwidth, you can get some of those cheap 1x to multiple 1x PCIe splitters that were used for mining; the bandwidth sucks, but you can put a lot of devices on there.


Maybe Framework can explore adding two PCIe slots (one on each side), 1x or 2x, for deeper expansion. Design some PCIe 4x connector with power (for the 1 to 4 lane use cases, with a standard way to put them side by side, leaving it open for 8 to 16 lane use cases). It doesn’t matter if it is just your design; if it is open, someone else will adopt it too. (Right now there is a need for such a connector; people are using OCuLink, which is not 100% intended for that, and if a USB-C can carry dual 4x plus USB, power, and more, some compact pure PCIe+power connector is surely doable.) (I can almost hear Wendell.) That way it would at least be the expansion option; the space could be a bit bigger than the current one, or just let the expansion cards grow under the laptop, making it thicker.

That also makes me think there could be one of these PCIe connectors inside, not just to accommodate some PCIe device but also for choosing between expansion options, like adding some AI module or storage, etc., going one step further toward the modular laptop.
There is surely some free space between the board and the case that could be used to accommodate another board; I mean, the cooling system is thick enough to leave free space around it but above the board.
I think that for a team able to design a motherboard, a daughterboard connected with a PCIe link is not a big challenge. Imagine adding 1, 2, or more SSDs in a laptop, like the desktop PCIe bifurcation M.2 cards. If someone makes something like that, every device will benefit: handhelds, mini PCs, laptops. The age of compact devices will rise, and having full PCs smaller than a mini-ITX build will be normal, like ATX PCs were, or embedding a PC into some furniture. Piece of cake.

I was also asking myself: why are there no M.2 slots on the opposite side of the board, on the back? There is an unused area there, and it could be accessible just by taking a cover off.

omg, if only the world were not driven by stupidity and instead by the love of making things better

In addition to the reasons others already listed there’s also the issue of bandwidth.

When the original Framework Laptop 13 released the available CPUs at that power level had very limited PCIe capabilities.

The CPUs that Framework used initially have 63 Gbps of normal PCIe bandwidth (all of which is currently used for the SSD), 31.5 Gbps of PCIe bandwidth available through OPIO (some of which is used for Wi-Fi, fingerprint reader, and touchscreen support), and up to 155 Gbps of PCIe bandwidth available through the USB4/TB4 controllers (both controllers can operate two ports at full speed).

So over 62% of the possible bandwidth is through the USB-C controllers, making them the obvious choice to route the IO through.
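Plugging in the numbers from this post (a quick sketch using the figures quoted above, not datasheet values):

```python
# Bandwidth sources on the original Framework Laptop 13 CPUs (Gbps),
# as quoted in this post.
pcie_direct = 63.0    # normal PCIe lanes (all currently used by the SSD)
opio = 31.5           # PCIe via OPIO (Wi-Fi, fingerprint reader, touchscreen)
usb4_tb4 = 155.0      # through the USB4/TB4 controllers

total = pcie_direct + opio + usb4_tb4
share = usb4_tb4 / total
print(f"USB4/TB4 share of total IO bandwidth: {share:.1%}")  # ~62.1%
```

Which is where the "over 62%" figure comes from.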

That means that to use PCIe for the ports they would either have to take away bandwidth from the SSD (maybe move the SSD to OPIO?), share the very limited OPIO bandwidth among the IO, or use internal USB4/TB4 PCIe controllers to turn the USB connections into PCIe lanes.

That last option is the only one that doesn’t compromise on bandwidth, however that can also just be done inside the expansion card if a card needs PCIe.

The newer chips are a bit better with the newer Intel models having 126 Gbps of normal PCIe bandwidth (instead of 63 Gbps) but still the same 31.5 Gbps available through OPIO and 155 Gbps available through USB4/TB4.

On the AMD side of things it is allocated a bit differently with 315 Gbps of normal PCIe bandwidth and only 77.6 Gbps available through USB4/TB4. In that case PCIe based expansion could make sense. Then again, additional USB controllers could be added to convert some of the PCIe bandwidth into USB bandwidth. Ultimately I think the biggest reason is (as other people mentioned) that USB4 can carry USB, PCIe, DisplayPort, and power whereas PCIe can only carry PCIe making USB4 far more versatile.

On the backside picture of the mainboard in the marketplace I can see that the M.2 2230 slot actually is on the backside of the mainboard, accessible through an aperture, since introducing covers to the chassis would make it less durable and needlessly complicate its fabrication process.

The mainboard has a usable PCIe connector in the form of the interposer for the expansion bay.
PCIe not being hot-swappable led to that design, and imho it’s the best compromise Framework could have come up with. Imagine all the expansion cards not being hot-swappable anymore, having interposers instead, requiring a shutdown and screwdriver work to be swapped… wouldn’t that be annoying?

On a motherboard many things are behind the chipset, whose uplink is only x4, so it’s not a big deal I think, and daisy-chaining works OK, at least as I remember it from the Gamers Nexus AMD video.

I was not thinking of the hot-swap side, only of how much more efficient PCIe is than USB4 for latency and bandwidth: no translation needed, more direct communication between the hardware. I can’t remember all the things Wendell explained, but PCIe is way better than USB4 or TB, at least for high-bandwidth devices, and everything is intended to communicate through PCIe anyway, so why put something in between? Also, I had in mind more permanent upgrade choices, not something intended to be changed daily.

I get the point; I’m not gonna change the SSD every day, so it’s not really that important.
I know it has the PCIe for the GPU, but I mean more: if there were one more, you could choose to add more storage or some other device you need. For example, if you are a gamer, you will use the GPU and you will need storage; nowadays 2 TB is kind of small for a games drive, and I like to have the OS running on a different drive than the storage, so if anything happens you can disconnect the storage and reinstall with no risk (I guess in this case it would be nice to have the drive easily accessible). Or if you do video editing, you will use the GPU and storage. Or if you use Linux but sometimes have to deal with Windows, because of work or anti-cheat. For me 2 is a reasonable minimum, but when we are talking about having two OSes, you no longer have your independent storage drive.

If I remember well, Wendell mentioned that in the beginning PCIe was a server solution for linking servers or so, and it was intended to be hot-plug; that never happened, but it was the plan.

One can also argue there will be a bottleneck with x4, but I am not going to hit all drives at the same time, only one; even PCIe 3 x2 could be acceptable, though now with PCIe 4 and 5, x2 seems more up to date. If I could add 2 more NVMe drives to the FW16 under the GPU it would be nice, and there is enough space there for 4 between the fans. That would make the laptop thicker, but for people used to workstation laptops that’s nothing new, and it would still be smaller than a mini-ITX build.
An x4 expansion with some bifurcation chip, 4 NVMe drives, and a connector to daisy-chain further; and if that tech weren’t so expensive, it could also carry USB4, like the GPD G1, so it would be more compatible and long-lasting.
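To put rough numbers on the x4 bottleneck worry (assuming roughly 2 GB/s of usable bandwidth per PCIe 4.0 lane, my approximation):

```python
# Rough check: how bad is sharing an x4 uplink among 4 NVMe drives?
GEN4_PER_LANE = 1.97   # approx usable GB/s per PCIe 4.0 lane

uplink = GEN4_PER_LANE * 4          # x4 uplink, ~7.9 GB/s total
per_drive_parallel = uplink / 4     # if all 4 drives are hit at once
print(f"Uplink: {uplink:.1f} GB/s")
print(f"Worst case per drive (all 4 active): {per_drive_parallel:.1f} GB/s")
# Hitting one drive at a time, that drive gets the full ~7.9 GB/s uplink,
# so for one-at-a-time use the x4 link is rarely the bottleneck.
```

Even the worst case (~2 GB/s per drive) is still gen3-SSD territory.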
The Khadas Mind is also an interesting concept. I’m not sure if that is really useful, because it is only for that one device, and USB is more of a standard that can be used with many other devices (though a PCIe connector like OCuLink could also become a standard, since PCIe is not going to disappear), but it is an expansion, it is PCIe, and it is hot-plug, so it’s doable.