Feedback for plan: cluster of 4 Framework motherboards with RoCE networking cards

Hi all,

I am planning to build an AI cluster from 4 Framework motherboards in a DeskPi RackMate T1 case. Similar to what Jeff Geerling has done, except that I want a much faster interconnect, so I want to add RoCE NICs to all units, like Donato Capitella has done.

So the motherboards would sit in 2U trays. Instead of the Flex-ATX power supply, I plan to use a fanless DC-ATX converter (160 x 52 x 31 mm), which I can mount somewhere else, so hopefully that leaves me a bit more space inside the trays for installing the NICs.

I am looking at this NIC: Intel E810-XXVDA2. It is a low-profile card (167 mm x 68 mm) with a PCIe 4.0 x8 connection, so I will need an x4-to-x8 riser cable.

I am looking for feedback. Do you see any obvious problem with this? Thank you.


I love a PCIe device on a riser cable. I expect the cards to complain that they’re only getting an x4 link but, with high-quality cables, that should still be PCIe 4.0 speeds, and four lanes at 16 GT/s each is comfortably more than the 2x 25GbE of the low-profile E810 cards.
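For anyone else doing the math, here is a rough sketch of that bandwidth comparison using the PCIe 4.0 spec numbers (16 GT/s per lane, 128b/130b line encoding) and ignoring further protocol overhead:

```python
# Back-of-the-envelope: does a PCIe 4.0 x4 link cover 2x 25GbE?
PCIE4_GT_PER_LANE = 16.0     # GT/s per lane, per the PCIe 4.0 spec
ENCODING = 128 / 130         # 128b/130b line-encoding efficiency
LANES = 4                    # the riser limits the x8 card to x4

pcie_gbps = PCIE4_GT_PER_LANE * ENCODING * LANES  # ~63 Gbit/s payload rate
nic_gbps = 2 * 25.0                               # E810-XXVDA2: two 25GbE ports

print(f"PCIe 4.0 x4: {pcie_gbps:.1f} Gbit/s vs NIC line rate: {nic_gbps:.1f} Gbit/s")
```

So even at x4 the link carries roughly 63 Gbit/s against 50 Gbit/s of Ethernet line rate; real-world PCIe transaction overhead eats into that margin but shouldn’t erase it.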

Don’t neglect airflow – these E810 server cards run hot and can fail if you don’t push enough air over them.


We’ve actually tried this exact setup, including with E810, and it works!
