Rackmount Modular Framework Server

Thanks @Peter_Schofield, I’ll have a scour of Reddit to see what I can find. I’m on SFP+ currently, which is fairly inexpensive, so I’m expecting the options for this use case to be prohibitively expensive. Hopefully 10G will become mainstream in the next 5 years or so.

Take this as an example:
Enclosures – SuperBlade® | Supermicro

TDP of 165W per blade.
Supports 14 blades in the 4U enclosure.

From an airflow perspective, that means the enclosure moves enough air (however many CFM that design provides) to cool a TDP requirement of 2310W. The Framework mainboard is far from that… its CPU only goes to 28W sustained per unit (60W peak for 10-15 seconds if you’re lucky). In short, 4U is not the bottleneck here. (You said it like it’s impossible; I’m just saying it’s already been done. Feasibility is not the issue here.)

If you’re purely looking at the CPU’s sustained TDP of 28W (disabling PL2), then 20 boards is just 560W. Any low-to-mid-end dual-Xeon graphics workstation is around that (as they’ll have a Quadro or similar in there as well).
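A quick back-of-the-envelope sketch of the figures above (the 165W × 14 SuperBlade number versus 20 Framework boards at the quoted 28W sustained / 60W peak), just to make the comparison explicit:

```python
# Thermal budget comparison using the figures quoted in this thread.
superblade_total = 165 * 14        # 14 blades at 165 W TDP each in a 4U enclosure
framework_sustained = 28 * 20      # 20 Framework boards at 28 W sustained (PL2 disabled)
framework_peak = 60 * 20           # worst case: every board boosting to ~60 W at once

print(superblade_total)       # 2310 W
print(framework_sustained)    # 560 W
print(framework_peak)         # 1200 W
```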

For reference, a 12900K with a 3090 Ti has a base max TDP of 150W + 350W.

As for the design, I don’t have the need to design one. It was just a thought… “I would think”. Not an all-out attempt to build one.

This I agree with… too noisy in the home office.

Also agree…if they’re placed to work against each other, that is. Not sure why anyone would do that. And where are the two fans coming from?

P.S. There’s an online calculator somewhere that works out how much air is needed to dissipate x amount of heat, taking intake air temperature into consideration. I don’t know the formula though.
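For anyone curious, the usual rule of thumb comes from the sensible-heat equation: airflow = heat / (air density × specific heat × temperature rise). A minimal sketch, assuming sea-level air, so treat the result as a ballpark only:

```python
# Airflow needed to carry away a given heat load at a given intake-to-exhaust
# temperature rise. Assumes sea-level air: density ~1.2 kg/m^3, cp ~1005 J/(kg*K).

AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1005.0        # J/(kg*K)
M3S_TO_CFM = 2118.88   # 1 m^3/s expressed in cubic feet per minute

def required_cfm(heat_watts: float, delta_t_c: float) -> float:
    """CFM required to remove heat_watts with a delta_t_c (degrees C) air temperature rise."""
    m3_per_s = heat_watts / (AIR_DENSITY * AIR_CP * delta_t_c)
    return m3_per_s * M3S_TO_CFM

# Example: 20 boards at 28 W sustained, allowing a 10 degree C rise over intake temperature
print(f"{required_cfm(20 * 28, 10):.0f} CFM")   # roughly 100 CFM
```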

@A_A You keep ignoring airflow… and trying to argue a point that isn’t relevant, on a post that isn’t asking the questions you are answering.

Yes, it absolutely is possible to build 20 units of compute into a 4U space, IF you have designed the board for that purpose!

Actually look at those boards: they are not laptop boards. The fan is not mounted in the middle of the board using heat pipes; they use chassis-mounted, high-CFM fans, which is what you would expect in an actual blade server. They are noisy, whining, high-pitched, high-CFM fans blowing directly front to back, parallel to the board and heatsink, as you would expect in a rack server.

However, the Framework mainboard is a laptop board and its design is fundamentally different: it takes air in from the top and bottom and ejects it 90 degrees to the side. Stack them and you have two fans competing against each other, plus you are then ejecting air up into another server mounted above it rather than out the back. These are all things you need to consider.

Unless you have something constructive to add to this project, then good day to you.

Alright, you have a good day as well.

This is awesome. I’ve been thinking about doing this for a little while as well, but I have no access to a 3D printer. I would love a solution that uses the Framework motherboard with an i7-1185G7 (vPro) and a vPro-capable 1/2.5/5/10GbE network card (as a poor man’s IPMI), ideally dual. Having the battery built into the 1U would also be nice, as it’s a built-in UPS. Making the Framework USB-C expansion cards break out to the front of the 1U case (next to the fans that cool the system) would be pretty cool as well, so the server could be configured to the user’s needs very easily. Want to plug a monitor into your server? Just plug in the DisplayPort or HDMI card. A size around that of the UniFi Dream Machine Pro would be awesome, and it could screw into the UniFi Toolless Rack if you place the screw holes in the correct location on the side.
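On the built-in UPS idea, a rough runtime sketch, assuming the 55 Wh Framework battery and the 28 W sustained figure quoted earlier in the thread; it ignores conversion losses and the draw of NICs/SSDs, so real numbers will be lower:

```python
# Rough "built-in UPS" runtime estimate for one board on the internal battery.
# Assumes a 55 Wh pack and keeps 20% in reserve for battery health.

BATTERY_WH = 55.0

def runtime_minutes(load_watts: float, usable_fraction: float = 0.8) -> float:
    """Minutes of runtime at a constant load, using only part of the pack."""
    return (BATTERY_WH * usable_fraction) / load_watts * 60

print(f"{runtime_minutes(28):.0f} min at full sustained load")   # ~94 min
print(f"{runtime_minutes(8):.0f} min at a light server load")    # ~330 min (8 W is a hypothetical idle-ish draw)
```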


Definitely going to keep track of this project :slight_smile: Nice work.

I’m debating how to make my homelab lower power, but my main issue is storage. I need 4-6 3.5" drives. Putting an HBA or RAID card on USB-C/TB3 doesn’t seem too attractive an idea (if I ever end up upgrading my Framework’s motherboard).


I’m a little conflicted on this one, as it’s never really a good idea to keep laptop batteries charged like this, and accommodating one would likely put it in the hot zone next to the CPU. I’ll need to have a look at dimensions; if I can do it safely from a cooling perspective and can make it fit, then I’ll leave it up to the user to decide whether to install it…

This one is already part of the design: the bottom two slots on the left are exactly for this purpose, with extra height to take double-height cards (mostly to accommodate Ethernet). The top holes I’m still playing with to look at potential designs, but the layout currently allows a further two holes above, so you could optionally route the back two to the front. I’m looking at options for fitting a half-height PCI slot, which ideally I also wanted on the front, but I think that’s going to need to go on the back; this would allow you to fit whatever PCIe card you need.

If you have the rack, can you please share the exact hole locations and I will include them in my design? A quick search didn’t return any useful dimensions…


Thanks! So you pretty much have two options other than USB-C, as long as you don’t mind giving up the on-board storage; however, if you go for Unraid then you’ll be booting from USB anyway:

https://www.ebay.co.uk/itm/175265374080

You’ll need to consider that there will be extra power required, especially for spinning rust, so it will need an independent power supply. A 1U PSU will obviously fit, but once I’ve got the main part designed I’ll do a bit more research into what PSU options there are to produce the 12V and 5V lines needed…
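For sizing that independent supply, a rough sketch of the drive power budget; the per-drive numbers (around 8 W spinning and roughly 25 W briefly at spin-up, mostly on the 12V rail) are typical datasheet ballparks rather than measurements:

```python
# Rough PSU sizing sketch for a handful of 3.5" drives ("spinning rust").
# Per-drive figures are typical ballparks, not measured values.

DRIVES = 6                  # upper end of the 4-6 drives mentioned above
ACTIVE_W_PER_DRIVE = 8.0    # spinning / light activity
SPINUP_W_PER_DRIVE = 25.0   # brief surge at spin-up, mostly on the 12 V rail

steady_state = DRIVES * ACTIVE_W_PER_DRIVE
all_at_once_spinup = DRIVES * SPINUP_W_PER_DRIVE

print(f"steady state: ~{steady_state:.0f} W")                # ~48 W
print(f"simultaneous spin-up: ~{all_at_once_spinup:.0f} W")  # ~150 W
# Staggered spin-up (if the HBA supports it) keeps the peak much closer to steady state.
```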

The charge limit in the BIOS is supposed to deal with that issue.

Thanks Peter, that’s good to know. There are still some challenges with location though, as the wire is very short, and I believe the connector is not the strongest, with a few reports of bent pins.

So it looks like this won’t really fit into 1U.

You could essentially disk-shelf a second 1U. Putting this at 4 wide would in theory be possible, but you would need to use threaded rod to keep it rigid, and I think you would need to print each bay separately to get it on the 200×200 print bed.

Yeah 1U isn’t a requirement for me, and honestly I’d be okay running drives in an external enclosure. Thanks for suggesting the things you did earlier. Food for thought for the future :slight_smile:

No problem, I’ll probably do a design for it once the rest is done

The product page doesn’t mention the exact front-to-back depth.

The PDF for the Installation Guide also doesn’t include the exact spacing between the holes or their distance from the front. Going to buy some calipers tomorrow.


Now that they are going to launch a 2.5GbE networking expansion card, I’m very excited. I’ve ordered one of the 12th-gen DIY kits and I’m in the first batch in July.


@Mark_Tomlin thanks, let me know and I’ll include it. I hadn’t seen the network adapter announcement!

I realize that I’m bad at measuring things, so I’ve asked over in the UniFi forums. They can also provide the thread size, which I’m unable to provide myself.

@Mark_Tomlin no problem. Work has been on hold while awaiting the motherboard; it’s just turned up, so I have ordered RAM etc. My 3D printer is currently in pieces, so I need to get that up and working before I pick this project up again.


So I’ve been thinking about a way to use these motherboards in a new way: servers. And yes, that is a new way; most people have thought of using a single motherboard as a home server, but what I’m proposing here would be using the motherboards as blade nodes.

That means you can have a 1U-4U chassis that takes a new type of custom node mount, where you put the motherboard and all of its components inside and then just slot them in one by one. A deeper chassis could house several of these. You could even add battery backup to each node by simply having a battery inside along with the board. All you really need is a mount that routes the Thunderbolt ports to the back of the chassis, where you can slot in another type of mount for PCIe devices: think 10Gbps networking or SAS controllers via Thunderbolt, added by simply slotting things in. Some cases would be designed with more nodes in mind, others with a mix of nodes and hard drives, plus the Thunderbolt/USB expansion at the back.
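Just to make the mix-and-match idea concrete, here’s a toy model of the chassis layout being described; every name and capacity in it is made up for illustration, not a real design:

```python
# Toy model of the proposed blade-style chassis: front bays take mainboard
# "nodes" (optionally with their own battery), rear bays take Thunderbolt/USB
# breakout modules such as 10GbE NICs or SAS HBAs. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    battery: bool = False          # per-node battery backup, as proposed above

@dataclass
class RearModule:
    kind: str                      # e.g. "10GbE NIC", "SAS HBA"
    link: str = "Thunderbolt"

@dataclass
class Chassis:
    rack_units: int
    front_bays: int
    rear_bays: int
    nodes: list[Node] = field(default_factory=list)
    rear: list[RearModule] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        if len(self.nodes) >= self.front_bays:
            raise ValueError("no free front bays")
        self.nodes.append(node)

chassis = Chassis(rack_units=2, front_bays=4, rear_bays=4)
chassis.add_node(Node("node-1", battery=True))
chassis.rear.append(RearModule("10GbE NIC"))
print(f"{len(chassis.nodes)} node(s), {len(chassis.rear)} rear module(s)")
```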

I can see this being a very useful solution. Also, if someday Framework had AMD CPUs that could take ECC RAM, this would be an even better and more serious option for enterprise customers.

As for financial viability, someone would have to do the math, but I do think it’s possible, since much of it can be done cheaply these days. It would allow these boards to be used in the tens of thousands by datacentres, homes and offices out there, and would be a genuine way to take many second-hand boards and give them a long life.

There are many companies out there with budget servers still offering v2 Xeons; think of the savings in power plus an actual step up in performance. You may think that most companies are running the latest and most efficient hardware, but having been in the business for years I know for a fact that’s not the case, especially for the likes of budget offerings. If anyone would like to discuss this, just hit me up and we can engineer something together and put these boards to work :slight_smile:


This seems like the sort of thing Framework would have to work on themselves, rather than a community project, just because of the scale. But as a concept, it sounds awesome! A rapidly deployable, power-efficient, modular server system would probably be pretty useful!

Not quite what I had in mind, but close enough :slight_smile: Also, Thunderbolt is extremely important for PCIe devices, SAS controllers, etc. You can also use the internal PCIe of course, but the flexibility of Thunderbolt and hot-swap makes it more server-like.