Rack mount setup?

The cutout is a nice idea, assuming nothing below it can make contact and cause a short somehow.

Sometimes when I’m worried about clearance and shorting, I line the surface with Kapton polyimide tape. It’s cheap, heat resistant, non-conductive, and can be removed without leaving any residue. Larger sheets are tricky to apply without ripples, but apart from being unaesthetic, the ripples don’t affect how well the tape works.

My 2U-2x-FWD build is currently stalled waiting on some parts, but usually when I’m starting a build that is probably going to be tight on space, I line the case with it while test-fitting everything. That way I can actually POST different configurations of hardware, check that risers or cable extensions aren’t hurting PCIe bandwidth, try out different fan options under load with the case closed, etc., all without worrying that something will accidentally touch the chassis and ruin the build.
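For the bandwidth check specifically, a small script saves a lot of squinting at lspci output. Here’s a minimal sketch (my own hack, nothing official) that compares each device’s negotiated link (LnkSta) against its capability (LnkCap) and flags anything that trained below spec, e.g. a riser knocking a card down to x1:

```python
#!/usr/bin/env python3
"""Flag PCIe devices that trained below their capability.

Sketch only: parses `lspci -vv` output (run as root so the
capability blocks are readable) and compares LnkCap vs LnkSta.
"""
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device, cap = None, None
for line in out.splitlines():
    if line and not line[0].isspace():
        device, cap = line, None  # new device header, e.g. "01:00.0 Ethernet ..."
    m = re.search(r"LnkCap:.*Speed (\S+), Width (x\d+)", line)
    if m:
        cap = m.groups()
    m = re.search(r"LnkSta:\s*Speed (\S+).*Width (x\d+)", line)
    if m and cap and m.groups() != cap:
        print(f"{device}\n  capable {cap}, running {m.groups()}")
```

Caveat: some devices legitimately drop link speed at idle to save power, so put the card under load before blaming the riser.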

Here’s an example product and very janky install of tape for my current test fit xD

https://a.co/d/2FF1ETe


Thanks, I ended up throwing mine in a 3U Sliger case until I figure out something better. Figure I can reuse that case later when I do. Perhaps a custom duct to shoehorn it into a 2U, funneling air from the front fans over the heatsink horizontally instead of needing a thick fan plus vertical clearance.

Edit: Reading the rest of this thread, folks are already on it. Nice.

I stumbled across this: Integrate Dual Framework Mainboards · Issue #43 · blake-hamm/bhamm-lab · GitHub

Looks nice.

My FWDx2 project has been going rather slowly; I got sidetracked waiting on parts and by a bunch of Linux automation & networking test fun. Anyway, it’s at a reasonable point to share current progress - so here it is.

I left the Kapton tape in because the next step involves a mount I’m working on, to situate a high-speed NIC horizontally inside the case, one on each side. If things pan out, these will be Mellanox CX-5 Ex VPI cards in a P2P link via IB. But there are a bunch of fallback options here depending on how the power, mounting, and connectivity pan out.

The only reason I’m working with this case first - slowly iterating on what exactly will work inside it - is because I really want to find a stable, non-jank way to get at least 2x25GbE networking in a 2U setup with 2x FWDs. The simpler approach would be to just deploy the 2 mainboards into a MyElectronics 19” 2U dual mini-ITX case with the XikeStor 2x10GbE RJ45 / 2x M.2 cards, which are PCIe 3.0 x4 and so “just work” without any fiddling. But I’m stubborn, would like to test faster networking, and have free time, so…

I’ll post pic updates when I get the NICs installed in this setup, assuming the mounting solution works out / is non-jank. If the 25GbE+ experiment turns out to be a complete bust I’ll drop back to the simpler solution mentioned above - the bits are on the shelf - but I really want to test InfiniBand first :smiley:


A couple of comments on aspects of the build. First, I really dislike that FWD didn’t pick an existing cooling-system mount. Having their heatsink chew up most of the vertical space in the case is a big downer. I can’t even use the case’s horizontal PCI mount (which would also let me fit the x4→x16 riser we need!) because the stupid heatsink is in the way. It would be much better to have the option of mounting a lower-profile solution like a blower fan or an AIO.

Honestly, for me the two big misses in the FWD motherboard are the custom heatsink and the closed-ended PCIe slot. Almost every aspect of the build was complicated by these two limitations. I want to support Framework, I like their philosophy as a company, but this was a huge miss and is probably the main reason I will jump ship to Minisforum for any new Strix Halo cluster builds once their BD395i MAX (page 26) comes along. BUT this could be saved by an aftermarket solution for the heatsink, because that would allow us to use better cooling (blowers/AIOs) while simultaneously using an x4→x16 riser (fixing the closed-ended PCIe slot) and mounting the PCIe card horizontally, as is common in 2U setups. Two birds, one stone - but it needs an aftermarket solution.

Anyway, enough ranting about FWD tying my hands wrt cooling and PCIe cards. Since vertical space is at a premium in 2U and I can’t just “drill holes” in my 2U case lid because it will be right under another case, I wanted a lower-profile fan on the stock (ugh) heatsink. Tried the Noctua NF-A12x15 PWM; it was fine, but its airflow and static pressure (55 CFM / 1.53 mm H2O) were under the Framework recommendation (>= 60 CFM / 2.0 mm H2O).

Eventually I found the ThermalRight TL-B12015 Extrem low-profile fan, which has much better numbers (83 CFM / 3.4 mm H2O), and it was a definite improvement under load vs the Noctua. Noise is great, temps dropped noticeably, very solid product. PSA: don’t be like me and first get the non-Extrem version by accident.

The front fans that came with the case were non-PWM, and largely crap. I tested a variety of the Arctic server fans (10K/7K) but settled on the P8 Max (5K) as the best combination of performance and noise. Even though this thing is going in a rack in the garage, noise is still a (small) concern, and the 10K server fans were just insanely loud. Even after taming the speed with fan curves, the server fans were just way noisier at the same speed AFAICT, with the same or worse cooling performance vs the P8s.
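Tangent, in case anyone wants to replicate the fan-curve taming in software rather than the BIOS: if the board exposes hwmon PWM control, a crude userspace curve looks roughly like the sketch below. The hwmon node numbers and curve points here are made up for illustration (they vary per board); `fancontrol` is the robust way to do this.

```python
#!/usr/bin/env python3
"""Crude userspace fan curve: map a temp sensor to a PWM duty cycle.

Sketch only: hwmon node numbers and curve points are assumptions for
illustration. Needs root, and pwm1_enable must be set to 1 (manual
control) first.
"""
import bisect
import time

TEMP = "/sys/class/hwmon/hwmon2/temp1_input"  # millidegrees C (assumed node)
PWM = "/sys/class/hwmon/hwmon3/pwm1"          # duty cycle 0-255 (assumed node)
CURVE = [(40, 80), (55, 130), (65, 180), (75, 255)]  # (degC, pwm) points

def pwm_for(temp_c: float) -> int:
    """Linear interpolation between curve points, clamped at both ends."""
    temps = [t for t, _ in CURVE]
    i = bisect.bisect_left(temps, temp_c)
    if i == 0:
        return CURVE[0][1]
    if i == len(CURVE):
        return CURVE[-1][1]
    (t0, p0), (t1, p1) = CURVE[i - 1], CURVE[i]
    return int(p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0))

while True:
    with open(TEMP) as f:
        temp_c = int(f.read()) / 1000
    with open(PWM, "w") as f:
        f.write(str(pwm_for(temp_c)))
    time.sleep(2)
```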

Moving on from cooling - headers & USB. This case’s internal front headers are fundamentally mismatched with what the FWD offers. I rigged up the power/reset switches and LEDs fine, but the USB ports are currently disconnected. I need to find the right combination of Type-E-to-USB-2 adapters, or just rip and replace the control board on the front of the case in favor of a more modern one. Fixing that + adding back the Wi-Fi antennas is a project for another week.

Lastly - power cables. The FWD stock PSU is nice, but the cables assume it sits to the right of the motherboard, not to the left as in many other mini-ITX cases. It took a little searching to find the right extension cables, the dual goals being “as short as possible” and “as flexible as possible” so the space between the motherboard and the front fans stays unimpeded.

Alternatively, it might be possible to create a duct to direct air from the front of the case to the top of the heatsink.

If it works, it may be nice :wink:!


re: directing air from the front case fans - it might work, but I’m using that space for the NICs and there isn’t much free space, plus I’d still want forced air over the NICs (100G NICs & their SFP modules run quite hot; at full clip in my setup, ~20W of draw/heat).

More salient is that the heatsink itself (even without the fan) protrudes a tiny bit into where the horizontal PCI card sits. I confirmed this to be an issue on both of the 2U dual-mini-ITX cases (Rackchoice and Circotech).

TBH I’d prefer to just ditch the Framework cooler entirely and use something like a blower fan on the APU (e.g. Dynatron Q7) or a low-profile AIO (e.g. Dynatron L17). Either of those fits in 1U, which leaves plenty of space above for the PCI card and airflow.

re: that case - cool, thanks for sharing. 4U is a lot of space to sacrifice, but their idea of putting the PSUs behind the motherboards, freeing up the backplate space for PCI, is excellent. I’d like to see a deeper 2U case adopt that layout; it would eliminate the challenges I’ve been having with my build.

I have searched for a 3U case… but can’t find one with this config…

I saw an NVMe-to-PCIe-x16 adapter used to connect a card at the front too: Desktop Motherboard Builds - How did it go? - #62 by Arkratos

Don’t know if this can fit (mounted upside down): Amazon.com: Wathai 120mm x 32mm 12V High Airflow DC Brushless Blower Fan for Radiator, Desktops, Gaming Consoles, TVs, Routers, Computers : Electronics …

Update on the 2U 2xFWD build. Figured out a long-term mounting solution for my ConnectX-5 cards and did some cable management to wrangle the clutter.

The PCB is mounted on magnetic feet, the PCI card riser is mounted to the PCB, and the card just slots into a powered PCI riser connected with Oculink.

Why Oculink? Because the Framework Desktop BIOS seems to have an issue negotiating Gen4 x4 specifically with Mellanox cards. It’s a long story, but the bottom line is that I had to use an M.2 slot with an M.2-to-Oculink adapter to ensure proper negotiation of PCIe 4.0 x4, and then put the displaced M.2 SSD in a PCIe-to-M.2 card - which happily did Gen4 x4 in the PCIe slot with no drama.
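For anyone hitting the same thing: checking what a card actually negotiated is quick via sysfs. A minimal sketch - the PCI address below is a placeholder; find yours with `lspci | grep -i mellanox` and prefix the domain (usually 0000:):

```python
#!/usr/bin/env python3
"""Print a PCI device's negotiated link vs its maximum, via sysfs.

Sketch only: the BDF address below is a placeholder for the CX-5.
Healthy for this setup means current == max, i.e. "16.0 GT/s PCIe"
(Gen4) and width "4".
"""
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # placeholder address

for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```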

I tested a truly demented number of alternatives here to establish what worked and what didn’t, and it really is just a Mellanox-plus-Framework combination issue. Other Strix Halo machines and other (non-Mellanox) NICs don’t exhibit the bug. I also tested with multiple FWDs and multiple Mellanox cards from different vendors/board designs/MCX generations - none negotiated Gen4 x4 correctly. There’s a detailed bug filed on this, and Framework Support has acknowledged and escalated it, but this was my workaround.

Had to use fiber internally because it’s a lot more compact & flexible vs DAC. The added bonus is that fiber can use couplers, so once I upgrade my switching infrastructure to 100G I plan to route the cables externally via a PCI blank with 2x keystone couplers on each machine.

DAC and fiber tested at identical speeds. Adding 2x couplers to the signal path (fwd1 NIC-cable-coupler-cable-coupler-cable-fwd2 NIC) didn’t degrade the signal by any measurable amount. For now I’m just going P2P with one EN and one IB link to compare. The longer-term plan is an IB east-west link between the nodes and an EN north-south link on each node. Switching remains problematic: consumer switches don’t support PFC/ECN, and even used data center gear is still rather spendy. So for now I’m just running P2P links.
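If you want to reproduce the DAC-vs-fiber comparison, a few batches of iperf3 runs per cabling setup is plenty. A minimal harness sketch - the hostname is a placeholder, and it assumes `iperf3 -s` is already listening on the far node:

```python
#!/usr/bin/env python3
"""Batch iperf3 runs and summarize throughput, for cable comparisons.

Sketch only: "fwd2" is a placeholder hostname, and `iperf3 -s` must
already be running on the far node.
"""
import json
import statistics
import subprocess

def run_gbps(host: str, streams: int = 4, secs: int = 10) -> float:
    """One iperf3 client run; returns received throughput in Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-P", str(streams), "-t", str(secs), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9

runs = [run_gbps("fwd2") for _ in range(5)]
print(f"mean {statistics.mean(runs):.2f} Gbit/s, "
      f"stdev {statistics.stdev(runs):.3f} Gbit/s")
```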

Yes, it’s crowded, but airflow is fine, temps are very stable even under load, and while I’ve got a few more tweaks planned, the system is close to being declared 1.0 and banished to the rack!


Nice!
Any network benchmarks? (iperf / MPI …)