Hey folks, I've been eyeing that new AMD board for the 13" and the Cooler Master case, and have been thinking about using my 11th-gen board as a virtualization host to clean up my network stack (I currently have a stack of Dell micro systems performing various functions).
Obviously the software end of things will work, and I can drop in one or two of the Ethernet expansion cards to provide more than enough networking.
How has the community handled redundancy, though, in terms of storage? NVMe SSDs are pretty long-wearing at this point, but I've got it in my head that two is one and one is none. How do y'all handle RAID, or the lack thereof?
Cheapest would probably be a second SSD in a USB enclosure.
Or an M.2 SATA card and a bunch of SATA SSDs, I guess. Lots of options.
A Thunderbolt SSD is definitely an option too, but for bulk storage it's IMO unreasonably expensive.
If you don't actually need the Wi-Fi, that slot is definitely an option too. For a non-portable application I'd definitely go with an adapter instead of a weirdly keyed SSD, though, and keep in mind there is only one PCIe lane, not four, so the SSD will run nowhere near full speed.
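For a rough sense of how much that one lane costs you, here's a back-of-the-envelope calculation. It assumes the slot runs PCIe 3.0 (8 GT/s per lane, 128b/130b encoding); the real numbers depend on the actual link speed negotiated.

```python
# Back-of-the-envelope throughput ceiling for an NVMe SSD in the
# one-lane A+E-key Wi-Fi slot vs. a normal four-lane M-key slot.
# Assumes PCIe 3.0: 8 GT/s per lane, 128b/130b encoding.

GTPS = 8                    # gigatransfers/s per PCIe 3.0 lane
EFFICIENCY = 128 / 130      # 128b/130b encoding overhead
per_lane_mb_s = GTPS * 1000 / 8 * EFFICIENCY

for lanes in (1, 4):
    print(f"x{lanes}: ~{lanes * per_lane_mb_s:.0f} MB/s ceiling")
# x1: ~985 MB/s  -- still fast, just nowhere near a good drive's x4 speed
# x4: ~3938 MB/s
```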
Well, that part is a separate layer; RAID isn't backup.
I don't have a Storage Expansion Card (or a Framework jet, for that matter XD), but I've never had problems with my external NVMe enclosures. I had one dodgy SATA one, but that was probably an actual hardware fault. And if it messes up, you've still got the internal one for redundancy, right?
The main drawback is the bandwidth, but depending on your use case it might not be a big issue. 5 Gbit enclosures are cheap, 10 Gbit ones are pretty affordable, and above that the price skyrockets pretty fast.
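Ballpark line rates for those tiers, for reference; the encoding-overhead figures are the standard USB ones, and real-world throughput lands somewhat lower still:

```python
# Approximate usable line rate for USB enclosures.
# 5 Gbit/s (USB 3.2 Gen 1) uses 8b/10b encoding,
# 10 Gbit/s (Gen 2) uses 128b/132b.

def line_rate_mb_s(gbit: float, efficiency: float) -> float:
    return gbit * 1000 / 8 * efficiency

print(f"5 Gbit/s:  ~{line_rate_mb_s(5, 8 / 10):.0f} MB/s")      # ~500 MB/s
print(f"10 Gbit/s: ~{line_rate_mb_s(10, 128 / 132):.0f} MB/s")  # ~1212 MB/s
```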
And not necessarily even redundancy when flash is involved. Flash is write-cycle limited, and if you're running RAID 1 on two flash-based drives, they'll both receive exactly the same write load; consequently both will exhaust their write cycles at the same time, and you'll have simultaneous failure of both drives.
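A toy version of that argument in numbers; the TBW rating and write load below are made-up example values, not datasheet figures:

```python
# RAID 1 sends every write to both members, so two identical SSDs
# accrue TBW in lockstep. Both figures are hypothetical examples.

rated_tbw = 600            # endurance rating in TB written (made up)
write_load_tb_year = 50    # sustained write load (made up)

for drive in ("mirror member A", "mirror member B"):
    years = rated_tbw / write_load_tb_year
    print(f"{drive}: rated TBW reached after ~{years:.0f} years")
# Identical load in, identical nominal wear-out out.
```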
Not directly (though I'm pretty sure I remember some chinesium SSDs with an A+E key, but I can't find any on the quick), but since the slot still has one PCIe lane and all the voltages an NVMe M.2 SSD needs, you can definitely install one with an adapter like this.
This assumes perfect production consistency and identical wear leveling between the two. Yes, flash is write-cycle limited, but there isn't a specific byte count at which a drive will lock up; it's a range that depends on load, temperature, use case, and when and where the drive was produced.
Thanks for the replies, guys; you've given me a good deal to look at and research. I'm wondering now whether the platform makes sense for this purpose, though: I'm reading about a lot of SR-IOV issues with 11th-gen processors and Proxmox, which might kill a lot of the things I wanted to virtualize. Not being able to pass the iGPU through to something like Emby or Blue Iris to assist with transcoding would really put a damper on things.
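If you want to sanity-check a given host before committing, a minimal sketch like the one below just looks for the standard SR-IOV sysfs node. It assumes the iGPU sits at the usual 0000:00:02.0 address; on stock kernels an 11th-gen iGPU generally won't expose this without an out-of-tree driver, so a missing node mostly means "not supported as configured".

```python
# Check whether the iGPU advertises SR-IOV virtual functions.
# Assumes the iGPU is at the conventional 0000:00:02.0 PCI address.
from pathlib import Path

vfs = Path("/sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs")
if vfs.exists():
    print(f"SR-IOV capable: up to {vfs.read_text().strip()} VFs")
else:
    print("No sriov_totalvfs node: SR-IOV not exposed by this GPU/driver")
```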
Even when using identical drives (which you probably shouldn't), the actual point of failure isn't going to be the same; the error bars on flash wear-out are pretty huge.
On the other hand, spinning rust is mostly runtime limited, so with two identical drives one could make pretty much the same argument, but the error bars there are also pretty big.
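To put a rough number on those error bars, here's a quick Monte Carlo sketch. The 10% relative spread in actual endurance is an assumption for illustration, not anything from a datasheet, and the rating and load are the same made-up figures as above:

```python
# Identical rated endurance and identical write load, but actual
# wear-out scatters. How often do both mirror members die together?
import random

random.seed(0)
rated_tbw = 600.0       # hypothetical rating, TB written
rel_spread = 0.10       # assumed relative std-dev of real endurance
load_tb_day = 0.15      # hypothetical load -> ~11 years nominal life
trials = 100_000

close_calls = 0
for _ in range(trials):
    days_a = random.gauss(rated_tbw, rel_spread * rated_tbw) / load_tb_day
    days_b = random.gauss(rated_tbw, rel_spread * rated_tbw) / load_tb_day
    if abs(days_a - days_b) <= 7:   # both fail within the same week
        close_calls += 1

print(f"both drives fail within a week of each other in "
      f"~{100 * close_calls / trials:.1f}% of runs")   # ~1% under these assumptions
```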
Except that spinning rust is subject to a variety of random mechanical failures, making it vanishingly unlikely that both will go at the same time (though admittedly I've had it happen). SSDs are in theory reliable enough that exceeding the write-cycle limit is the main failure mode.