@Filip nice work on getting this going!
I don't know if you saw, but I pushed a generic PCIe config to the EEPROM generator repository last week. This should fix your PEX behavior as well as implement 8x1 PCIe. I am guessing you already got 8x1 working, judging by the comments!
Oh, thank you! I did create my own generic PCIe config by modifying the repo myself, but I might’ve configured the pins wrong, so I’ll definitely try yours out. But yes, I basically copied over the M.2 config, removed things like the power enable, and changed it to a generic PCIe accessory with 8x1; that’s how I managed to get x8.
I did actually manage to get the EEPROM flashing working myself through ectool, but I’ll definitely switch over to framework tool for that.
Oh my GOD! I have been waiting for this post for two years, literally, ever since I pre-ordered my Framework 16 and waited for it to arrive. And then, close to 1,000 posts and years later in this thread, having realised that Framework itself won’t make it, @Filip comes with the update. My goodness. This is it!
So, a couple of points:
First of all, I would like to buy one when it is ready, please.
Could you please post the link to the exact cable that you used? I am based in Europe, and the cable I found is this one, for more than 50 euros.
Needless to say, please keep us updated!
Damn… It finally happened. Absolutely amazing work.
For anyone wanting to buy these boards: I cannot make any promises yet, as I still want to do a final run of these boards and there’s still some testing left I’d like to do, like the different EEPROM data that was mentioned above. I would also need to source some parts that I would have to add myself. I’d most likely make just enough boards to recoup the funds I invested in prototyping.
But there will be Gerbers on my repository after I finalize the design, which can just be uploaded to JLC and ordered (although sourcing the OcuLink/nano-pitch connector is slightly annoying, as JLC does not stock them and there’s a few weeks’ wait for them to arrive at their warehouse. There’s also the issue of getting the screw nuts for the connector between the mainboard and the expansion board, and the mounting screw standoffs).
It is a 4-layer board. For this run of the board I used the JLC04121H-1080 stackup. For impedance matching I just used JLC’s impedance calculator and, as you can see, it worked pretty well. I did 4 mil spacing and ~4.3 mil trace widths for the differential pairs.
I’m also based in Europe. Amazon is a pretty bad deal for these things. I just went with this listing on AliExpress for less than half the price. The cable I received has Molex branding, although I do not know if it is genuine or not.
Woah, I did not expect this to be that huge of a hit lmao.
It would be excellent if there were a half-height, 8x variant of the SFF-TA-1032 standard, but unfortunately the companies actually paying for this development effort have little need for homebrewer-crafted laptop solutions. They’re doing it specifically to carry as much bandwidth as possible.
Of course, OcuLink was never meant to be used externally either. So folks endeavoring to bring PCIe 5 and higher to their external enclosures will probably want to look at MCIO. It is the OcuLink successor (SFF-TA-1016) and has a healthy parts supply due to its pervasive use in the server industry at this point. MCIO connectors can come in around 8 mm, so there’s still a good chance of packaging one of those up in this shell and using that solution. Chicken-and-egg scenario, though: there just aren’t a lot of eGPU docks out there that have introduced the MCIO connector, but I expect that to change as PCIe 5 GPUs become more common.
Good to know! I found this thread earlier today; it was linked from Reddit. I’m a software developer, so a lot of these hardware standards are hard to follow precisely lol.
But if I understand correctly, it means that as we move to PCIe 5 and beyond, even OcuLink will show more performance loss, with the OcuLink cable being the bottleneck?
OcuLink as a standard makes specific guarantees: PCIe 3.0 speeds, at up to 2 meters in length, when used in an internal, qualified system.
As you can see from this thread, an entire cottage industry, mostly supported by Chinese companies, went “nah” and started designing and selling eGPU docks that blatantly disregard the OcuLink standard. These have had a lot of success. Cutting the cable length down from 2 m to 1 m helps quite a bit as well. Users moved PCIe 4.0 signaling over the cable even though the cable isn’t qualified for it.
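To put rough numbers on what those link rates mean, here is a back-of-the-envelope sketch using only the spec line rates and 128b/130b encoding; real-world throughput is a few percent lower still due to packet overhead:

```python
# Back-of-the-envelope usable bandwidth, per direction, for a PCIe link.
# Only the 128b/130b line encoding is accounted for; TLP/DLLP framing
# overhead, which costs several more percent in practice, is ignored.

def link_gbytes_per_s(gt_per_s: float, lanes: int,
                      encoding: float = 128 / 130) -> float:
    """Usable GB/s: per-lane rate * encoding efficiency * lanes / 8 bits."""
    return gt_per_s * encoding * lanes / 8

print(f"PCIe 3.0 x8: {link_gbytes_per_s(8, 8):.1f} GB/s")   # ~7.9 GB/s
print(f"PCIe 4.0 x8: {link_gbytes_per_s(16, 8):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x4: {link_gbytes_per_s(16, 4):.1f} GB/s")  # ~7.9 GB/s
```

Notably, a Gen4 x4 link (what the 4i docks run when the out-of-spec signaling works) carries about as much as Gen3 x8, and Gen4 x8 doubles that.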
When it doesn’t work, what usually ends up happening is that the device down-revs to train a stable link. PCIe natively supports that behavior as part of the standard, and it’s likely what we’ll continue to see. Theoretically, a PCIe 6 device could be tossed on the link and, if behaving appropriately, under load the device may simply down-link to PCIe 4, 3, or lower until it finds a stable link rate. Whether or not that provides satisfactory performance is, of course, a whole separate issue. I made a long post earlier about the notion of compromises, which this is. Using a cabling standard designed for internal-only use and qualified only for PCIe 3 as an external cabling standard, at speeds beyond what it was qualified for, is most certainly a compromise. For all of these things, PCI-SIG has a set of qualified standards (including CopprLink) available for qualified designs to guarantee performance all the way through PCIe 7, but the folks in this thread are not the clients that pay the obscene amounts of money needed for those sorts of solutions, so we pick and choose and pray on our compromises.
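That down-rev behavior can be sketched as a toy loop. This is purely illustrative: the real LTSSM negotiation involves equalization, per-lane status, and retraining under error conditions, none of which is modeled here.

```python
# Toy illustration of PCIe link down-training: the link settles at the
# highest rate both the device and the physical channel can sustain.
# "cable_stable_max" is a crude stand-in for signal quality over the cable.

RATES_GTPS = [32.0, 16.0, 8.0, 5.0, 2.5]  # Gen5..Gen1 per-lane line rates

def train_link(device_max: float, cable_stable_max: float) -> float:
    """Return the line rate the link settles at, stepping down from the top."""
    for rate in RATES_GTPS:
        if rate <= device_max and rate <= cable_stable_max:
            return rate
    raise RuntimeError("link training failed")

# A Gen4 GPU on a cable only clean enough for Gen3 signaling:
print(train_link(16.0, 8.0))  # 8.0 (i.e. the link falls back to Gen3)
```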
No worries at all, at least from my side; the question was rather whether it could be bought at a later stage, as I fully understand there are still many steps to go through until it is finalised. However, I want to say clearly: please take my money. I need one. In any case, I first have to thoroughly test a 4i version from @Kyle_Tuck. So in the end the community has two great solutions: clean 4i plus storage, and clean 8i. Amazing.
Thanks - any benchmarks of real-world fps differences would be really cool in due course.
Just bought the Amazon cable - I want to make sure I have a “good” one.
It is. Several people have made valiant efforts at this and did not succeed or abandoned the attempt. One person (no names) even took money and did not finish. And now Framework staff come into the chat greeting and congratulating, and tech news articles are being written.
This is the 8x OcuLink cable I purchased in advance, hoping for a board like this to finally come out. I hope it does the job; it also had a massive drop in price (please note I’ve not tested it yet). They have some stock left for any others chasing one:
The calculator has a differential pair setting, so I just plugged in the 85 Ohms there and set a 4 mil trace spacing, which gave me a 4.29 mil trace width for the 1080 stackup with 4 layers and 1.2 mm thickness. That’s basically all I did, and that did the trick.
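For anyone curious what such a calculator does, the classic IPC-2141 closed-form approximations give a rough idea of the relationship between geometry and impedance. The stackup numbers below (dielectric height, εr, copper thickness) are illustrative guesses, not JLC's actual JLC04121H-1080 parameters; JLC's field-solver-backed calculator should be trusted over these coarse formulas.

```python
import math

# Coarse edge-coupled microstrip impedance estimate (IPC-2141-style
# approximations). Stackup values are assumptions for illustration only.

def z0_microstrip(h_mil: float, w_mil: float, t_mil: float, er: float) -> float:
    """Single-ended microstrip impedance (IPC-2141 approximation)."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

def zdiff_microstrip(z0: float, s_mil: float, h_mil: float) -> float:
    """Edge-coupled differential impedance derived from the single-ended value."""
    return 2.0 * z0 * (1.0 - 0.48 * math.exp(-0.96 * s_mil / h_mil))

h = 3.6   # outer dielectric height, mils (assumed)
t = 1.4   # copper thickness, mils (~1 oz, assumed)
er = 4.1  # dielectric constant (assumed)
w = 4.29  # trace width from the post
s = 4.0   # pair spacing from the post

z0 = z0_microstrip(h, w, t, er)
print(f"Z0 ~ {z0:.1f} ohm, Zdiff ~ {zdiff_microstrip(z0, s, h):.1f} ohm")
```

With these assumed numbers the estimate lands in the right neighborhood of the 85 ohm PCIe differential target, which is about all a closed-form approximation can promise.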
Yep, don’t dismiss the 4i version. I’m absolutely thrilled to finally see an 8i version be viable, and not necessarily requiring a redriver is the cherry on top. But for people who do appreciate the extra storage slot, remember that 4i is still very much viable.
I’m not sure I’ll upgrade, since my old PSU limits me much more than the 4i link does (I have to reduce power on my RX 6950 XT to make it stable in all games). But it is tempting.
Thanks @Filip for finishing what I couldn’t!
There might be more margin to use the 13 mm CDFP on the back of the laptop, where it connects to the PCIe x8, compared to the modules designed for the plugs on the side of the laptop. It is still a bit too big to comfortably fit in such a module, though; with that I agree. After all, the initial GPU module provided for the back of the laptop ended up lifting the laptop up a little as well, due to its extra thickness.
@Kieran_Levin I have finally gotten around to testing the generic PCIe configuration. Unfortunately, the behavior does not change: PEX_RST (SA5) still does not work, while EXT_SSD1_RST (PF3) does. There is no response from the GPU and no enumeration when PEX_RST is used; the GPU only comes up once EXT_SSD1_RST is connected as its PCIe reset signal.
I compiled the tool without changing anything, generated the binary using gpu_cfg_gen -bs FRAOCULINKTERRAILS, and flashed it using framework_tool --flash-gpu-descriptor-file. I additionally turned on the verbose flag in the generator tool and read the binary file back using the -i flag, which got me the following output:
Build: Dec 7 2025 22:42:57
Descriptor
Magic 32AC0000
Length: 48
Desc Version: 0.1
HW Version: 0008
HW Rev: 0
Serialnum: FRAOCULINKTERRAILS
Desc Length: 92
Desc CRC32: E4D03431
CRC32: 4B5563AF
---
Type: PCI-E
Lanes: 8X1
---
Type: Fan
ID: 0
Flags: 0
Min RPM: 1000
Min Temp: 0
Start RPM: 1000
Max RPM: 3700
Max Temp: 0
---
Type: Fan
ID: 1
Flags: 0
Min RPM: 1000
Min Temp: 0
Start RPM: 1000
Max RPM: 3700
Max Temp: 0
---
Type: Vendor
Value: PCI-E Accessory
---
Type: GPIO
GPIO 1
Name: GPU_1G1_GPIO0_EC
Function: High
Flags: (Output,Low,)
Power Domain:S3
GPIO 2
Name: GPU_1H1_GPIO1_EC
Function: High
Flags: (Output,Low,)
Power Domain:S3
GPIO 3
Name: GPU_2A2_GPIO2_EC
Function: Unused
Flags: (Input,)
Power Domain:G3
GPIO 4
Name: GPU_2L7_GPIO3_EC
Function: Unused
Flags: (Input,)
Power Domain:G3
GPIO 15
Name: GPU_PCIE_MUX_SEL
Function: High
Flags: (Output,Low,)
Power Domain:S3
GPIO 16
Name: GPU_VSYS_EN
Function: High
Flags: (Output,Low,)
Power Domain:S3
GPIO 18
Name: GPU_FAN_EN
Function: High
Flags: (Output,Low,)
Power Domain:S0
GPIO 19
Name: GPU_3V_5V_EN
Function: High
Flags: (Output,Low,)
Power Domain:S5
which seems to match the configuration in pcie.c, apart from the manually specified serial. framework_tool --expansion-bay also has no issues reading the configuration and displays no errors.
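As a side note, a quick sanity check on the generated binary before flashing could look like the sketch below. The offsets and the idea that the second CRC covers the descriptor body are assumptions inferred purely from the dump above ("Length: 48", "Desc Length: 92"); the authoritative layout lives in the EEPROM generator sources, so treat this as a template rather than the real format.

```python
import zlib

# Hypothetical sanity check: recompute a CRC32 over the descriptor body
# and compare it with what gpu_cfg_gen prints. HEADER_LEN and DESC_LEN
# are guesses based on the verbose dump, not the actual struct layout.

HEADER_LEN = 48  # "Length: 48" from the dump (assumed header size)
DESC_LEN = 92    # "Desc Length: 92" from the dump

def desc_crc32(blob: bytes) -> int:
    """CRC32 over the bytes assumed to follow the header."""
    return zlib.crc32(blob[HEADER_LEN:HEADER_LEN + DESC_LEN]) & 0xFFFFFFFF

# Synthetic demonstration, since no real binary is at hand:
fake = bytes(HEADER_LEN) + bytes(range(DESC_LEN))
print(f"Desc CRC32: {desc_crc32(fake):08X}")
```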
The only thing is that I am not using the latest EC firmware included with the v4.02 BIOS package, as I have some power limit modifications of my own (for full performance using a 98 W charger instead of 100 W, since my CalDigit TS4 dock exposes itself as a 4.9 A / 20 V charger), so I do not want to upgrade until the updated version is up on the EmbeddedController repository and I can apply my patch. That is why I am unsure whether there were changes between the current EC repository state and v4.02 that could affect this behavior.
Hi, sorry for being a bit unaware of the technical aspects you guys are working on. I’m trying to understand: how close are we to an OcuLink board for the FW16? Is it worth waiting for one to be released, or should I use the M.2 expansion board with an M.2-to-OcuLink adapter?
Also, what is the difference between 8i and 4i? I’m guessing 8i is faster?
I’ve connected an RTX 4090 to my 2nd-gen Framework 16 over USB4. It runs fine. Claims about bandwidth throttling are overblown. If an OcuLink board releases, great. If not, you’re still playing games at 4K high settings at 60 FPS.
I’m curious about this as well, except “overblown” and “fine” are meaningless to me. Some real-world benchmarks, now that an 8i solution exists, would be great.