Rackmount Modular Framework Server

I thought I should post a brief introduction to the very early stage of my project. I don’t have a mainboard yet, so a lot of this is still theory, including the design so far, as I haven’t been able to take accurate measurements.


I’m part of a large home lab community, and for those who do not know what a home lab is, it’s a place to learn and put IT skills into practice, from starting small with Raspberry Pis, NUCs and micro servers to inevitably ending up with a rack to hold your servers, switches and routers. As a community we like to tinker, change things around and play with different use cases.

As soon as the news came out that Framework mainboards were going to be available, the community became excited about the potential of using them as servers, thanks to their size, flexibility and upgradeability.

The Project:

To design a 3D printable rack mount for the Framework mainboard. For this project I have imposed the following criteria:

  • Must be 3D printable on a 200x200 bed (most common size)
  • Must be modular to allow users to piece together a server for their needs and change in the future as those needs change
  • Must allow users to create their own modules and share with the community
  • Must work with the existing expansion cards, plus future deeper cards (I’m looking at you Ethernet)
  • Must be able to fit 19" 1U rack, 10" 1U rack plus vertical “cluster/blade” style mounting for high density computing
  • Should work with off the shelf components and common standards where possible

Technical Areas of Interest:

  • Being able to power on the device without pressing the onboard button! (Plus anything else I can pull from the header; this won’t be off the shelf, though I may still do an optional lever-based approach)
  • Best approach to getting power to non onboard devices (Fans, LEDs, SSDs, HDDs, external PCIe etc etc…)
  • Using the internal mini PCIe and M.2 slots (e.g. Ethernet, 5x SATA, maybe full PCIe breakout)
  • Usage of USB-C to SATA/M.2/PCI-e etc etc
  • Front screen potential

The Design Principle:

This is just a concept of what it will look like; it is incomplete, has been through several iterations, and will go through many more as I try to fit all of my ideas into a small chassis.

The front 17" unit comprises 3 bays, which will be mountable in any order. The first and the third are identical in size, so you could have two mainboards in this form factor; the same bay can also be used individually across the 10" and cluster versions. The middle section is suitable for 2.5" drives, 1U power supplies, or anything else that will fit into a ~80mm-wide 1U form factor.

The plan is for each bay to be 3 parts: the front (first 44mm), with the middle and rear making up the rest. The advantage of this is that changing the front layout doesn’t require reprinting 50% of the bay.

The fan covers can be switched for any 40mm fan cover, so you can customise to your own taste; pictured is my Framework-logo-based fan cover.

Note: The 3D printable framework case shown in the left bay is just for reference and not part of the design.

Any questions or feature requests are welcome!


Very cool!

People on this forum may be tired of me trotting out the same old thing, but these parts are interesting:

The M.2 2230 to gigabit and 2.5 GbE Ethernet cards look really intriguing and would be essential for a server, a great way to use the high-capacity M.2 2230 communication slot with a tiny card and save USB capacity. The M.2 2280 to 5 SATA port card seemed interesting at first, but users report the PCB is very flimsy.

My TrueNAS box is on aging hardware; it would be great to migrate it to much newer hardware, and the Framework mainboard would not only be faster, it would consume less power. That’s the interesting part about using these laptop CPUs in a server: good performance with power savings too. It would be much more powerful than my 4th generation Xeon yet consume far less power at idle or even at peak turbo. If only you could use ECC RAM!

I’ve positioned my server in a place where I could easily use rackmount equipment with no concern about fan noise or appearance. I might be tempted to switch to rackmount…that would be cool to explore.


I also realize I have 4-5 IKEA LACK tables just gathering dust…

I’ve read some of your posts while researching what’s around; you definitely pointed me in the direction of the M.2 to SATA.

I was looking at the 1G Ethernet card and stumbled across the 2.5G one. The conflict I have is that I can find the 1G in a half-height PCI form factor (following a standard), while the 2.5G uses a proprietary layout. Ideally I need a slot in the front that accommodates both (via an adapter), but if that becomes too much of a challenge then I’ll need different fronts for each (the front panel is probably 75% of the challenge with this build).

ECC is definitely nice to have, and a luxury I have in my 3 main servers, but the realistic cost vs benefit in a home server environment is low. If you are doing some serious compute then you’ll see some benefit, but for file serving and streaming it’s hardly worth the extra cost. I was considering a new custom build recently, and it was at least double the cost, most of that just on the mainboard. I may still need to do that in addition to this project, just because I’m running 10G fibre from workstation to server.

If you are not already there, I recommend checking out r/homelab on Reddit if this kind of thing floats your boat…


Haha you must already be on r/homelab then!

I bought 2 last week for a different project, although I’m currently designing my own version of this: IKEA Lack Enclosure Creality Ender 3 Compilation by Woody1978 - Thingiverse


Hmm - it looked like full-height PCI?

It’s hard to find proper dimensions for PCI brackets, but I see one reference to 4-5/8" (117.47mm) and another of 121.92mm for a full-height board. The included bracket is 121mm including the tab, so it seems close to full-height PCI.

Or are you referring to the small board the bracket attaches to?
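As an aside, the quoted figures are easy to sanity-check with a throwaway unit-conversion sketch (the millimetre values below are just the ones quoted above, not taken from any spec):

```python
# Sanity-checking the quoted PCI bracket figures (values from the
# references above; this is plain unit conversion, not a spec lookup).
MM_PER_INCH = 25.4

half_height_ref_mm = (4 + 5 / 8) * MM_PER_INCH  # the 4-5/8" reference
full_height_ref_mm = 121.92                     # the full-height reference

print(f"{half_height_ref_mm:.3f} mm")                # ~117.475 mm
print(f"{full_height_ref_mm / MM_PER_INCH:.2f} in")  # ~4.80 in
```

So 121.92mm works out to exactly 4.8", which is consistent with the full-height reading.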


This is why trying these things out is interesting, you end up learning a lot.

I thought that it was essentially just the header with ears for 2.5G, but having searched again, in fact the 2.5G is both! The link below is pictured as full height but comes with a half-height adapter… so a good option for some (although it doesn’t help me personally, as ideally I need at least double that, and more if I can come up with an NVMe solution)

@Fraoch I also found this as an interesting option, giving you 2 x 1GbE, handy for OPNsense-type applications:


Hmm…so kind of like a NUC cluster then.

or this


Kind of; fewer NUCs, more Frameworks


So you’ll need to design the enclosure that holds the Framework mainboard, to make it into a blade, then design the enclosure that holds the blades. I guess the advantage of using the Framework mainboard is the intended slim form factor…if you can benefit from a high-density cluster, that is.

If you just use the existing bottom cover + input cover combo as your enclosure, you could fit 20+ units in the same space, I would think. (Not stacked literally, as you’ll have to deal with airflow.)

I don’t see much other benefit, as NUCs are so competitively priced and have 2.5Gbps Ethernet OOTB.

@A_A it won’t be blade connectivity; it’s not really an architecture that utilises the benefits of blade servers, although there are some benefits. However, there is no way you are getting 20 in there; airflow would be far from sufficient.

The number of units is defined by a 1U height, as the point of this project is flexibility, not density (you don’t have to mount vertically; it’s just a possible option). Potentially you could get 10 wide on that basis, which I’ll try to achieve, but I need to ensure structural integrity is maintained.

If you want a NUC solution then I expect you are on the wrong forum. The point of Framework is that when you upgrade your laptop you can repurpose your mainboard for something else; a home server would be an ideal use case for those with the technical ability.


How many Framework mainboards do you plan to have around to fill the 4U chassis…to make this project worth the effort? Or are you only looking to fill the 1U chassis?

I was looking at the 19" 4U…that could fit 20, potentially. Even with them at full load, they’ll only be drawing a peak of 90W per unit x 20, which is just 1800W per 19" 4U…that’s within the cooling capacity of most typical 4U servers.
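To make the arithmetic explicit (a throwaway sketch; the 90W peak figure is the one quoted in this thread, and the 5W idle number is purely my guess for illustration, not a measurement):

```python
# Worst-case power/heat budget for the hypothetical 20-board 4U layout.
boards = 20
peak_watts_per_board = 90   # peak figure quoted in the thread, not measured

print(boards * peak_watts_per_board)  # 1800 (W, worst case)

# Idle is a very different picture; ~5W per board is only a guess
# for a modern laptop mainboard.
idle_watts_per_board = 5
print(boards * idle_watts_per_board)  # 100 (W)
```

Of course the PSU, drives and fans add to that budget on top of the boards themselves.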

My build will only be 1U; I don’t want to limit others, though, hence the flexible options in my design. It would be very simple to build something for myself; building something for the community is far more complicated


I see. I thought all three types were on the to-do list.

You need to re-read why I primarily have a height limitation. However, 90W? You’re talking about power consumption, not TDP, which is what you actually need to look at.

But more importantly, are you planning on just stacking fans on top of each other, ignoring airflow entirely? Interested to see your design.

They are; the design allows for all 3 using the same unit, not 3 different units. The only thing that changes is the rack ears

That’s the thing: TDP will always be less than consumption, so the peak 90W per unit forms the upper bound / worst case. When you look at a server, the PSU, drives, fans, ICs…they are all heat-generating parts, not just the processor. If the 4U can use 1800W PSUs, it needs to cool that much in worst-case scenarios.

Irrespective of the number you use, 4U doesn’t have a cooling capacity that’s relevant in this case, especially in a non-data-centre environment. The point is airflow and avoiding thermal throttling on the CPU; having two fans working against each other is never a good idea. I’d be interested to see your design that can fit 20 units in a 4U 17" space and deal with the exhaust heat; maybe I can learn something. However, I’ll be happy with half that in my design, which allows more expansion and accessibility per board.


I believe there’s a roundabout way of getting dual 10GbE (RJ45) on a mini PCIe card as well; I saw it on the small form factor Reddit a while back. From memory it involved OCuLink as an intermediary interface and an HP 10Gbit module.
