After the first time I upgraded the motherboard in my Framework 13, I couldn’t help but think, “I wish I could have done this on the servers I managed in the datacenter.” I rolled the idea around in my head for a long time, wondering what a Framework rack server would look like.
I’m not an engineer of any kind, just an experienced IT guy who spent forever installing, upgrading, and managing rack servers, and I have thoughts about the kind of server I wish I had. So here’s my idea: please humor me, give it a thought, and then tell me why you think it’s good or bad or otherwise. I’m particularly interested in anyone with hardware design or engineering experience who can point out if I’m asking for something impossible (or just way too expensive).
The Framework Matrix, a flexible forever-platform
Philosophy: Bring Framework’s repairability and upgradability to the datacenter. Everything is user-upgradable and replaceable just like the laptops, and wherever possible uses commodity components. It can run incredibly dense workloads and supports the widest array of interfaces and modules of any server platform.
Specs:
3U standard rack server, 32” deep
16x front 2.1”x2.6” bays, 15” deep (this is the full front of the chassis, one big open space).
12x rear 2.1”x2.6” bays, 9” deep (The rear has 12 bays instead of 16 to accommodate the 3 (n+1) power supplies and management IO)
Modules slide in with rails along the top edge and bottom edge (modules mount upside down in the bottom row)
Besides Framework’s philosophy, what makes this special is the flexibility of a PCIe 6.0 matrix switch. Each of the 28 bays has an x8 PCIe connector on its backplane, and the chassis system board can map lanes from any bay to any other bay. (x8 because I think beyond that you have to switch to a much more expensive system to make a PCIe matrix, but that’s something AI told me, so maybe not.)
This means I can put a compute module in a front bay and a bunch of NVMe drives in a rear bay, or an OCP card, or a generic PCIe slot; basically anything that can use PCIe lanes can go in a module, and the system can map hosts to devices regardless of where they physically reside in the chassis.
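To make the mapping idea concrete, here’s a toy Python sketch of how the chassis controller might track lane assignments. Every name and rule in it is my own invention for illustration, not any real Framework or PCIe-switch API:

```python
# Toy model of the chassis lane matrix: 28 bays with x8 PCIe each.
# Everything here (names, rules) is my own illustration, not a real
# Framework or switch-vendor API.

BAYS = [f"front-{i}" for i in range(1, 17)] + [f"rear-{i}" for i in range(1, 13)]
LANES_PER_BAY = 8

class LaneMatrix:
    def __init__(self):
        # free lanes left on each bay's backplane connector
        self.free = {bay: LANES_PER_BAY for bay in BAYS}
        self.links = []  # (host_bay, device_bay, lane_count)

    def map_lanes(self, host, device, lanes):
        """Route `lanes` PCIe lanes from a host bay to a device bay."""
        if self.free[host] < lanes or self.free[device] < lanes:
            raise ValueError("not enough free lanes on one end")
        self.free[host] -= lanes
        self.free[device] -= lanes
        self.links.append((host, device, lanes))

matrix = LaneMatrix()
matrix.map_lanes("front-1", "rear-3", 4)   # compute node -> NVMe carrier
matrix.map_lanes("front-1", "rear-7", 4)   # same node -> generic PCIe slot
```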
The chassis system board will also have connections for Ethernet, BMC, serial, USB, and power to each bay, and has an integrated 2.5GbE switch with 2 rear-panel uplinks. Faster networking is available via a 4-bay add-in module, and of course any host can connect to a generic PCIe slot module, so you can add any kind of specialized networking or storage and map it to any bay you want.
Most of these modules are essentially carriers for commodity parts, with maybe a few chips on the board (a PCIe redriver, a SATA controller, whatever), but of course there will also be compute modules. Framework could make them in 2-4 bay widths, but you can also have modules that are multiple bays tall, or both, allowing fairly sizable computers to be built.
For compute modules, they could make a 1-bay ‘light’ node, a 2-bay ‘standard’ node, and a 4-bay ‘monster’ node. With 2 bays you can have a motherboard about 4” wide and 15” deep; you could fit a lot of power on there.
A high-speed networking module could expose eight 100Gb interfaces that map to host bays, and on the outside present 2x 400Gb uplinks or more (actually, with PCIe 6.0 it might be double that). It would also interface with the internal 2.5GbE switch. A simpler 10Gb switch module could also be available.
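Some back-of-the-envelope math on that, using the published PCIe 6.0 raw rate; this is my arithmetic, and it ignores protocol overhead, so treat the numbers as ceilings:

```python
# Back-of-the-envelope bandwidth for the networking module.
# PCIe 6.0 raw rate is 64 GT/s per lane; encoding/protocol overheads
# are ignored here, so these are ceilings, not delivered numbers.
GT_PER_LANE = 64
LANES_PER_BAY = 8

bay_gbps = GT_PER_LANE * LANES_PER_BAY   # ~512 Gb/s per direction per bay
module_gbps = bay_gbps * 4               # 4-bay module: ~2048 Gb/s

uplink_gbps = 2 * 400                    # 2x 400Gb uplinks = 800 Gb/s
print(bay_gbps, module_gbps, uplink_gbps)
# One x8 bay can't fill 800 Gb/s of uplink on its own, but a 4-bay
# module has plenty of backplane headroom -- hence "might be double that".
```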
Another module could be 1 bay wide and hold a cluster of tiny machines like Raspberry Pi Compute Modules, or just be a ‘tinkerer’ module with no board at all, just the backplane connectors broken out to standard ones.
This platform has other tricks:
- a 4-bay wide front module can contain two mini-ITX motherboards, one behind the other
- a 2-bay (vertical) module that can hold two Framework 13 motherboards, repurposed as compute modules. They’d have HDMI and USB on their outsides so you can get a console on them with a crash cart. Each one even gets x4 PCIe to the backplane from the M.2 slot, if you’re OK using USB storage
- a 3-bay (horizontal) PCIe module which can house almost any consumer-grade GPU (enterprise GPUs might fit into a 2-bay module, and perhaps even in the back, since some are under 9” deep)
- a 1-bay NVMe storage module could fit as many as 24 sticks. I suspect cooling could be an issue at that point (see the rough power check after this list), but the density would be nuts. It would also be straightforward to make U.2 2.5” disk modules, or even SATA 3.5” disk modules, if you use 2 or 3 bays.
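On that cooling worry, here’s the rough power check I mentioned; the per-stick wattages are guesses on my part, not measured figures:

```python
# Rough thermal budget for 24 M.2 sticks in one 2.1" x 2.6" x 15" bay.
# The per-stick wattages are my guesses, not measured figures.
sticks = 24
watts_busy = 8      # a hot, high-end NVMe stick under sustained load
watts_idle = 1

print(sticks * watts_busy)   # ~192 W worst case through one bay
print(sticks * watts_idle)   # ~24 W at idle
# ~200 W through a single bay's cross-section is small-GPU territory,
# so this module would probably need its own blower or ducting.
```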
The chassis fans will be standard 92mm parts, and so will the power supplies (M-CRPS). The module specifications would be open so people could easily make their own. And of course there will be some laptop-style removable ports on the back to access the platform’s management interface.
Critically, the brains of the chassis consist of a single central board and two backplanes, and those are upgradable too: at some point in the future a user could swap in a PCIe 7.0 system board with a new 512-lane PCIe matrix giving 16 lanes to each bay, who knows. The important thing is that the only permanent part of the system is the metal box.
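For what it’s worth, the lane budget on that hypothetical upgrade does work out (again, my arithmetic):

```python
# Lane budget for the hypothetical PCIe 7.0 system board upgrade.
bays = 16 + 12            # front + rear
lanes_per_bay = 16
needed = bays * lanes_per_bay

print(needed)             # 448 lanes -- fits within a 512-lane matrix
print(512 - needed)       # 64 lanes left over for uplinks/management
```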
And hey, why not: the front bezel can house some of those little tiles from the Framework Desktop.
I’m interested in everyone’s thoughts.