@Filip nice work on getting this going!
I don't know if you saw, but I pushed a generic PCIe config to the EEPROM generator repository last week. This should fix your PEX behavior as well as implement 8x1 PCIe. I am guessing you already got 8x1 working, looking at the comments!
Oh, thank you! I did create my own generic PCIe config by modifying the repo myself, but I might've configured the pins wrong, so I'll definitely try yours out. But yes, I basically copied over the M.2 config, removed stuff like the power enable, and changed it to a generic PCIe accessory with 8x1; that's why I managed to get x8.
I did actually manage to get the EEPROM flashing working myself through ectool, but I’ll definitely switch over to framework tool for that.
Oh My GOD! I have been waiting for this post for two years, literally, ever since I pre-ordered my Framework 16 and waited for it to arrive. And then, close to 1,000 posts and years later in this thread, having realised that the company Framework itself won't make it, @Filip comes with the update. My goodness. This is it!
So, a couple of points:
first of all, I would very much like to buy one when it is ready, please.
could you please post the link to the exact cable that you used? I am based in Europe and the cable I found is this one for more than 50 Euros.
Needless to say, please keep us updated!
Damn… It finally happened. Absolutely amazing work.
For anyone wanting to buy these boards: I cannot make any promises yet, as I still want to do a final run of these boards and there's still some testing left I'd like to do, like the different EEPROM data that was mentioned above. I would also need to source some parts that I would have to add myself. I'd most likely make just enough to recoup the funds I invested into prototyping.
But there will be Gerbers on my repository after I finalize the design, which can just be uploaded to JLC and ordered (although sourcing the OcuLink/nano-pitch connector is slightly annoying, as JLC does not stock it and there's a few weeks' wait for it to arrive at their warehouse. There's also the issue of getting the screw nuts for the connector between the mainboard and the expansion board, and the mounting screw standoffs).
It is a 4-layer board; for this run I used the JLC04121H-1080 stackup. For impedance matching I just used JLC's impedance calculator, and as you can see it worked pretty well. I did 4 mil spacing and ~4.3 mil trace widths for the differential pairs.
I’m also based in Europe. Amazon is a pretty bad deal for these things. I just went with this listing on AliExpress for less than half the price. The cable I received has Molex branding, although I do not know if it is genuine or not.
Woah, I did not expect this to be that huge of a hit lmao.
It would be excellent if there were a half-height, 8x variant of the SFF-TA-1032 standard, but unfortunately the companies actually paying for this development effort have little need for homebrewer-crafted laptop solutions. They're doing it specifically to carry as much bandwidth as possible.
Of course, OcuLink was never meant to be used externally either. So folks endeavoring to bring PCIe 5 and higher to their external enclosures will probably want to look at MCIO. That is the OcuLink successor (SFF-TA-1016), and it has a healthy parts supply thanks to its pervasive use in the server industry at this point. MCIO connectors can hit around 8 mm, so there's still a good chance of packaging one of those up in this shell and using that solution. It's a chicken-and-egg scenario, though: there just aren't a lot of eGPU docks out there that have introduced the MCIO connector yet, but I expect that to change as PCIe 5 GPUs become more common.
Good to know! I found this thread earlier today; it was linked from Reddit. I'm a software developer, so a lot of these hardware standards are hard to follow too precisely lol.
But if I understand correctly, it means that as we move to newer PCIe generations, even OcuLink will show more performance loss, due to the OcuLink cable being a bottleneck?
OcuLink as a standard makes specific guarantees: PCIe 3.0 speeds, at up to 2 meters in length, when used in an internal, qualified system.
As you can see from this thread, an entire cottage industry, mostly supported by Chinese companies, went "nah" and started designing and selling eGPU docks that blatantly disregard the OcuLink standard. These have had a lot of success. Cutting the cable length down from 2 m to 1 m helps quite a bit as well. Users moved PCIe 4.0 signaling over the cable even though the cable isn't qualified for it.
When it doesn't work, what usually ends up happening is that the device down-revs its standard to train a stable link. PCIe natively supports that behavior as part of the standard, and it's likely what we'll continue to see. Theoretically a PCIe 6 device could be tossed onto the link and, if behaving appropriately, under load it may simply down-link to PCIe 4, 3, or lower until it finds a stable link rate. Whether or not that provides satisfactory performance is of course a whole separate issue. I made a long post earlier about the notion of compromises, which this is. Using a cabling standard designed for internal-only use and qualified only for PCIe 3 as an external cabling standard, at speeds beyond what it was qualified for, is most certainly a compromise. For all of these things, PCI-SIG has a set of qualified standards (including CopprLink) available for qualified designs that guarantee performance all the way through PCIe 7, but the folks in this thread are not the clients that pay the obscene amounts of money needed for those sorts of solutions, so we pick and choose and pray on our compromises.
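If you want to see whether your own link quietly down-revved, Linux exposes the negotiated rate in sysfs, so no special tooling is needed. A minimal sketch, assuming the standard current_link_speed/max_link_speed attributes are present (they are on any reasonably recent kernel):

```python
#!/usr/bin/env python3
# Compare each PCI device's negotiated link rate/width against its maximum.
# Sketch only: it just reads the standard sysfs attributes on Linux.
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return ""

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cur_speed = read_attr(dev, "current_link_speed")
    max_speed = read_attr(dev, "max_link_speed")
    cur_width = read_attr(dev, "current_link_width")
    max_width = read_attr(dev, "max_link_width")
    if not cur_speed:
        continue  # device does not expose PCIe link attributes
    note = ""
    if cur_speed != max_speed or cur_width != max_width:
        note = "  <-- running below its maximum"
    print(f"{dev.name}: {cur_speed} x{cur_width} (max {max_speed} x{max_width}){note}")
```

Keep in mind that a below-maximum reading is not automatically the cable's fault: GPUs routinely downshift the link when idle and only clock it back up under load, so check while a benchmark is running.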
No worries at all, at least from my side; the question was rather whether it could be bought at a later stage, as I fully understand there are still many steps to go through until it is finalised. However, I want to say clearly: please take my money. I need one. In any case, I first have to thoroughly test a 4i version from @Kyle_Tuck. So in the end the community has two great solutions: clean 4i plus storage, and clean 8i. Amazing.
Thanks - any benchmarks of real-world fps differences would be really cool in due course.
Just bought the Amazon cable - I want to make sure I have a “good” one.
It is. Several people have made valiant efforts at this and did not succeed or abandoned it. One person (no names) even took money and did not finish. And now Framework staff come into the chat greeting and congratulating, and tech news articles are being written.
This is the 8x OcuLink cable I purchased in advance, hoping a board for this would finally come out. I hope it does the job, and it has had a massive drop in price (please note I've not tested it yet). They have some stock left for anyone else chasing one:
My 4-lane OcuLink project ran into "too much self-inductance" before any of that, due to not having good clearance to keep the lines close together (I imagine).
I'm literally planning to run on high-loss FR4 to try to increase capacitance as much as possible, but does that lead to higher signal losses?
To be fair, I think the spec asks for "no more than -22.5 dB" (for PCIe 3.0 backward support), which is quite a lot, even if you assume 2 dB per inch of PCB. The Teflon OcuLink cable is very low-loss; including the connectors it's something like -6 dB. You can probably massively screw up the adapter PCB (especially in my case, where the trace is less than 1.5 inches) and still have room to spare with a passive (no-repeater) setup, which leaves something like 15 dB of margin.
If you run striplines (mid-layer) though, you will have massive capacitance and you will probably need low-loss PCB material, but apparently striplining PCIe is not recommended.
…
Apparently you want a 6.5 dB budget for PCIe cards. That's steep. 17 dB is also kinda steep, considering we are crossing 4 connectors. Though definitely doable.
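To make the arithmetic above concrete, here is a rough back-of-the-envelope tally using the figures quoted in this thread (~22.5 dB total budget, ~2 dB per inch of plain FR4, ~6 dB for the cable plus connectors). The dock-side trace length is a made-up placeholder, and the real spec splits the budget per channel segment, so treat this as orientation only:

```python
# Rough insertion-loss tally for the adapter + OcuLink cable path.
# All numbers are ballpark figures from the discussion above, not measurements.

BUDGET_DB = 22.5            # loose PCIe 3.0 channel budget quoted above
PCB_LOSS_DB_PER_INCH = 2.0  # pessimistic figure for plain FR4 at Gen3 rates
ADAPTER_TRACE_IN = 1.5      # adapter-board trace length (upper bound)
CABLE_AND_CONN_DB = 6.0     # OcuLink cable including both connectors
DOCK_TRACE_IN = 3.0         # hypothetical trace length on the eGPU dock side

losses = {
    "adapter PCB": ADAPTER_TRACE_IN * PCB_LOSS_DB_PER_INCH,
    "cable + connectors": CABLE_AND_CONN_DB,
    "dock PCB": DOCK_TRACE_IN * PCB_LOSS_DB_PER_INCH,
}

total = sum(losses.values())
for name, db in losses.items():
    print(f"{name:20s} {db:5.1f} dB")
print(f"{'total':20s} {total:5.1f} dB  (budget {BUDGET_DB} dB, margin {BUDGET_DB - total:.1f} dB)")
```

Even with these pessimistic trace numbers the passive path keeps double-digit margin, which matches the "room to spare" intuition above; that margin is what erodes once you push Gen4 signaling or longer cables over the same hardware.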
The calculator has a differential pair setting, so I just plugged 85 Ohms in there and set a 4 mil trace spacing, which gave me a 4.29 mil trace width for the 1080 stackup with 4 layers and 1.2 mm thickness. That's basically all I did, and that was what did the trick.
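For anyone who wants a quick sanity check without opening the fab's calculator (which is a field solver and should be trusted over any closed-form formula), the classic IPC-2141-style approximation for edge-coupled microstrip lands in the same ballpark. The prepreg height, copper thickness, and dielectric constant below are assumptions I plugged in for illustration, not values quoted from JLC's stackup documentation:

```python
import math

# Closed-form approximations for edge-coupled surface microstrip.
# Sanity-check only; the fab's field-solver numbers win for the real board.

def microstrip_z0(w_mil, h_mil, t_mil, er):
    """Single-ended microstrip impedance (valid roughly for 0.1 < w/h < 2)."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

def diff_microstrip_z(w_mil, s_mil, h_mil, t_mil, er):
    """Edge-coupled differential impedance derived from the single-ended value."""
    z0 = microstrip_z0(w_mil, h_mil, t_mil, er)
    return 2.0 * z0 * (1.0 - 0.48 * math.exp(-0.96 * s_mil / h_mil))

W = 4.29  # trace width in mils (the calculator's output)
S = 4.0   # gap between the pair in mils
H = 3.0   # prepreg thickness under the outer layer in mils (assumption)
T = 1.4   # finished copper thickness in mils (assumption)
ER = 4.3  # prepreg dielectric constant (assumption)

print(f"~{diff_microstrip_z(W, S, H, T, ER):.0f} ohm differential")  # ~83 ohm, close to the 85 ohm target
```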
Yep, don't dismiss the 4i version. I'm absolutely thrilled to finally see an 8i version become viable, and not necessarily requiring a redriver is the cherry on top. But for people who do appreciate the extra storage slot, remember that 4i is still very much viable.
I'm not sure I'll upgrade, since my old PSU limits me much more than the 4i link does (I have to reduce power on my RX 6950 XT to make it stable in all games). But it is tempting.
Thanks @Filip for finishing what I couldn’t!