Mainboard as Internet Router + Wireless AP + NAS Project

New card option, based on MT7915 chipset. I think it may be too long to fit in the FW13.5 chassis, but for a standalone mainboard in a custom case, the length should not matter.

I have not tried it myself, so your mileage may vary.

There are also older and cheaper options at this site.

And here’s the first one I’ve stumbled across with B/M key M.2 form factor:

Anyone know if a USB to M.2 B/M key adapter would run this?

Glad I found this thread. I’m working on a similar project. I ordered an Intel 11th Gen motherboard that I plan to use primarily for a firewall and SMB storage share. I already have a dedicated NAS, so this is a backup for the backup.

Hardware:
Not sure if it’s going to work, but I’m getting an M.2 to PCIe adapter to connect a 4-port multi-gigabit Ethernet card.
USB-C to NVMe M.2 adapter (512GB SSD) to boot from.
2x USB-C to SATA adapters for HDDs

Software:
I’m being lazy, so I plan to trick Win10 Pro into booting from the 512GB SSD, create a pfSense Hyper-V VM, and dedicate the 4 multi-gigabit ports to pfSense.

Why?:
I want to retire my aging Dell 9020 with an i7-4770K that I’ve been using for 5+ years. It runs the same software setup I plan to use and performs very well, reaching 900 Mbit up/down with very low latency and basically getting the most out of my FiOS fiber line, but this old machine is very power inefficient. That’s the main reason I want to change things up.

Expense:
My only expenses are the Framework motherboard and the M.2 to PCIe adapter; everything else I already have lying around doing nothing. I’m not sure about the enclosure for it yet. I may work with a friend to 3D print something for it.

Concerns:
Not sure if the M.2 to PCIe adapter will work, but I really want it to, since it should give me lower latency than USB-C to Ethernet adapters.

If you made it this far, thank you.

Interesting thought. I had considered going M.2 to PCIe for a 4- or 5-port SATA controller, but hadn’t thought about putting a beefy NIC there. I don’t really need the performance, though. The Cable Matters 1-gig USB NIC/switch I mentioned above has been working well since I turned off the features that were making it choke. There are also Thunderbolt to PCIe options, but I have no idea about performance and Windows compatibility.

Definitely let us know how your hardware plan works out for you.



Got all the parts; tomorrow or sometime this week I’ll be testing.


@D.H

This is what I got going on:


Perhaps an LTE modem would be a nice addition as a secondary backup link? :wink:


Hey, this is cool, thanks a ton for the detail! I want to do something like this, but with my old 11th Gen Intel mainboard from batch 5, after I upgrade my mainboard to 13th Gen. But I also want to run a Nextcloud backup and a CI server on the same box (plus maybe later a Matrix server, running Conduit against a Postgres server). Do you recommend trying to turn a Cooler Master Framework build into a router? I’m not super experienced with networking (I actually still get confused about the router/modem distinction if I don’t look it up, and I don’t know how to make sure particular hardware won’t be vetoed by an ISP).

I’ll be receiving my AMD Framework board in a few weeks, and will be using it for a home server, upgrading from a Raspberry Pi 4 cluster and Multipass VMs on an M1 Mac mini that works fine but is so slow. I’ll be running about 40 TB of btrfs USB drives attached to it, with Samba, Syncthing, Plex, gocryptfs, DNS, etc. Also planning to use it for self-hosting web services, such as Discourse and other projects.

Depends on whether you want to mount external higher-dBi antennas or not. If sticking with antennas integrated into or attached to WiFi dongles, I imagine the Cooler Master case would probably be fine.

If your ISP complains about you making your own router, get a better ISP! lol. Obviously, if you’re creating a problem with an open mail relay, or letting UPnP traffic in and out too much, etc., then yeah, sure, that’s on you to fix. But as long as it isn’t creating a problem upstream, they really couldn’t / wouldn’t / shouldn’t make you use a particular brand of anything… at least I hope not lol.

The Arch Wiki has a lot of good information about using systemd-networkd to set things up in a very static and stable way that comes up the same on every boot (something like the sketch below). That assumes no USB devices fail to come up at boot, which has really been the biggest hitch in my setup: USB devices not appearing, or going away after being up a long time. No idea if NixOS uses systemd or not though…
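Roughly the kind of thing I mean, as a minimal sketch rather than my actual config (the interface name enp0s1 below is a placeholder; check yours with `ip link`):

```bash
# One small .network file is enough for a predictable uplink port;
# enp0s1 is a placeholder interface name.
cat > /etc/systemd/network/20-wan.network <<'EOF'
[Match]
Name=enp0s1

[Network]
DHCP=yes
EOF

# Hand the interface over to systemd-networkd.
systemctl enable --now systemd-networkd
```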


Oh, and a good firewall too. There are lots and lots and lots of bad actors and botnets knocking at my door all day, every day. If you’ve got a public IP (not behind NAT), this is definitely something to consider.
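Nothing fancy is needed to get started; as a rough sketch (not my actual ruleset, and the interface name is a placeholder), a default-drop nftables input chain already shuts most of that noise out:

```bash
# Default-drop inbound policy; lan0 is a placeholder LAN-side interface name.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept
# Only open what you actually serve, e.g. SSH from the LAN side only.
nft add rule inet filter input iifname lan0 tcp dport 22 accept
```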


Definitely. NixOS has a bunch in common with Arch; you’d just never do systemd imperatively, and it generates all the systemd stuff from a centralized spec file (with a more aggressive notion of “centralized” than other Linux flavors).

The LTE modem expansion card looks immensely promising for me! (Mainboard as Internet Router + Wireless AP + NAS Project - #55 by Jacob_Eva_LES) That’ll come later.

Hello, I am considering a project such as this.

I currently have a fairly large unRAID NAS. It is composed of the unRAID host, in an unremarkable ATX PC case, and 2x 4U cases for the drives (up to 15 drives in each).

The drives in each 4U case are connected to 4 ports on an internal SAS expander, which then connects via two external mini-SAS cables to the unRAID box. The unRAID box has a single HBA with 4 external ports (9206-16e).

I have upgraded my Framework laptop to AMD, and have the 12th Gen mainboard in the Cooler Master case. To connect my drives, I am considering going Thunderbolt → NVMe → PCIe by buying 2x Thunderbolt NVMe adapters and 2x ADT-Link NVMe to PCIe adapters (such as the M.2 NVMe to PCIe x4 extension cable; there are lots of variants). I can then place one of these inside each of my 4U cases, and put a 2-port SAS HBA card (e.g. 9207-8i) in each ADT-Link PCIe slot to connect to the drives via the SAS expander.

I am not looking for ultimate speed, just ease of connectivity and ultra-low power usage. I see reviews reporting some Thunderbolt NVMe adapters to be quite slow, e.g. 1 to 1.5 GB/s, but others approach 2.5 GB/s, which would be sufficient (spread across 15 drives that is about 166 MB/s per drive during an unRAID parity check).

Interested to see if anyone has thought of doing something similar, or anyone who has experience of speeds using Thunderbolt → NVMe → PCIe or similar.
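(For reference, a generic way to sanity-check throughput once such a chain is assembled would be a sequential fio read; /dev/sdX below is a placeholder device path, not a specific recommendation.)

```bash
# Read-only sequential throughput test; /dev/sdX is a placeholder for a
# drive sitting behind the Thunderbolt → NVMe → PCIe → HBA chain.
fio --name=seqread --filename=/dev/sdX --readonly --rw=read --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based
```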

Hey welcome to the thread.

I have never had an unRAID box, so I cannot guarantee a perfectly accurate comparison. However, I think you’ll find making your own to be quite flexible, and therefore rewarding: you get what you want, not what unRAID feels like selling you.

My 11th Gen is way overkill for a home router, storage server, and even a dozen Docker containers doing various things. The only time I see high CPU usage is when I’m running upgrades, or Home Assistant has a Python thread go wonky. A 12th Gen will be even more overkill, lol. Want to trade so I can upgrade my laptop? :slight_smile:

I don’t know of any advantage to going TB → NVMe → PCI instead of directly TB → PCI. Is there any particular reason you need to go through NVMe?

I think you’re right about not getting absolutely maximum connectivity / storage speeds out of that setup, but for general home use I find it plenty fast. TB/USB isn’t quite on the same level for stability as a direct physical PCI connection, so consider very robust / Copy on Write file systems such as btrfs or ZFS.

What are you planning on for OS / Software?

TB → NVMe → PCI because products that do Thunderbolt to PCI directly seem to mainly be eGPU enclosures that have less-than-ideal power requirements (like being powered by an ATX 24-pin), are more expensive, or are too large to nicely hide away inside my 4U cases.

Unraid has not sold me anything, other than a license for the OS. Forgive me for not having read every post in this thread - what are you using for the NAS part of your build?

Okay I was thinking more of this TB to PCI adapter, which would work for more than just eGPUs, AFAIK.

ADT-LINK R43SG-TB3 PCIe 3.0 x16 to TB3 extension cable / eGPU adapter (25CM, R43SG-TB3): https://a.co/d/ft7UQUT

No idea how it would fit in your 4U though.

“NAS” function requirements at my home are pretty basic, so I went cheap for now in my setup. I have a single USB 3.2 Gen 1 + UASP to 2x SATA III toaster-style drive cradle (with an external 12V power adapter) that holds a pair of 3.5-inch drives. Btrfs in RAID1, plus nightly snapshots and on-site and off-site mirrors via btrbk over ssh. SSHFS for most of my devices on the LAN to get in; one special snowflake needs SMB though (puke).
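Roughly, the storage side boils down to something like this (device names and paths here are made up, not my actual ones; btrbk then ships the read-only snapshots over ssh from its own config):

```bash
# Two-disk btrfs RAID1 with a shared subvolume and a read-only nightly
# snapshot for btrbk to pick up; /dev/sda, /dev/sdb and /srv/nas are placeholders.
mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb
mount /dev/sda /srv/nas
btrfs subvolume create /srv/nas/share
mkdir -p /srv/nas/.snapshots
btrfs subvolume snapshot -r /srv/nas/share "/srv/nas/.snapshots/share-$(date +%F)"
```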

Going from TB to PCIe you’ll always need to supply the 12V rail separately, though there are more and less convenient ways to achieve that. The TH3P4G3 actually requires a full 24-pin, which kinda sucks, but it does allow daisy-chaining. I recently impulse bought this one, which can be powered off a barrel jack like you get with external hard drives (or a PCIe 8/6-pin, Molex, or USB-PD; it’s really versatile in that regard) and works well for my “poor man’s” TB to 10Gbit Ethernet adapter, but it does not daisy-chain.

I have that one and it’s by far the bulkiest one I’ve got, and kinda ghetto; there are better options available, so I’d avoid it.


The TZT Graphics Card Dock looks great; powering via Molex is exactly what I need. The plastic plate underneath might raise it up too much to let me secure the PCIe card inside the 4U case, but perhaps it is removable. I too will be impulse buying this.

It is very removable

My main gripe with it is that they didn’t break out the other port on the chipset so you could daisy-chain; other than that it’s pretty neat for stuff that isn’t a GPU. For an actual eGPU the TH3P4G3 is still a lot better, and it provides the laptop with power too, even if it is just 65W.

If the TZT were able to pull 12V from the Framework and have the TB connection over the wire, it would be the absolute winner. I was able to run a 10Gbit card (Intel X540) powering it from the Framework directly over 2 wires, but the PCIe SSD apparently used too much power for that to work.

Could you please expand on that? What 2 wires do you mean, exactly?