Is it possible to have a RAID 10 with the M.2 expansion bay? Not that it would be worth it, but is it possible? For science.
Certainly, there’s nothing special about the expansion bay; it’s just another set of M.2 slots connected over PCIe, the same as the slots in the laptop.
How many SSDs are you going to RAID together for your RAID 10?
A coworker and I were talking about the expansion bay; he thought it was like a RAID card. That made me doubt my understanding that the expansion bay is basically just a PCIe “extender”.
How many drives? In this hypothetical, just the four internal M.2 drives.
It’s not like a RAID card.
It’s more functionally equivalent to this, when used in a desktop PC:
https://www.amazon.co.uk/Expansion-Drive-Adapter-Signal-Splitting/dp/B09C264XNP
You’d just connect the additional SSDs as normal; there is no specific RAID support. You can always configure software RAID, though.
The Ryzen 7 7040 series chipset does not support hardware/BIOS/firmware RAID.
I had a (rather disappointing) discussion on this matter (back before the dual M.2 carrier released). I was interested in a RAID 1 between the 2230 and 2280 for resiliency.
But since they are all directly attached anyway, you can do any form of software RAID. ZFS might not be a bad start, or Storage Spaces. I have yet to test Storage Spaces vs Microsoft’s dynamic disk RAID.
The only concern at that point is the lack of ECC RAM, which is also not supported (by the Ryzen 7 series). That makes a “NAS on the go” a lot less enticing, but it’s still technically doable.
If you ask me, I’d rather do a RAID 5 on the three 2280s, or RAIDZ1. Or “parity”.
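Roughly, both layouts with ZFS would look something like this (a minimal sketch; the nvme device names below are placeholders for whatever the 2230 and 2280 drives enumerate as on your system):

```
# Mirror ("RAID 1") across the 2230 and one 2280 (hypothetical device names)
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1

# Or RAIDZ1 (single parity, RAID 5-ish) across the three 2280s
zpool create tank raidz1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Check layout and health
zpool status tank
```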
Why do BIOS RAID when you can do software RAID instead?
Software RAID is far easier to manage and recover (if needed) than BIOS RAID, and the overhead of software vs hardware RAID is not that big for RAID 0/6/10.
Because you don’t need to do software RAID. You don’t need ZFS or Btrfs, you don’t need Dynamic Disks (or Storage Spaces), you don’t have to be paranoid about the lack of ECC blowing up your array, or build your own bootloader for it, etc.
It does mean that when the RAID controller dies your RAID dies as well, but for something like a “simple mirror” (a.k.a. RAID 1) across two disks, this is totally fine.
They aren’t really intended for bulk storage, no. But if you have, say, 40 drives in ZFS, you would probably also have a hardware RAIDZ1 on your TrueNAS Scale boot.
You don’t need to use a filesystem for that either. Just use MD, LVM, or similar, and then put whatever partitioning and filesystem you like on top. MD is a tool for RAID management only; LVM is more flexible but also higher level.
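For example, a two-disk mirror with MD is just this (hypothetical device names, and the mdadm.conf path varies by distro):

```
# Build a two-disk mirror out of raw block devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# Put any filesystem on top and mount it like a normal block device
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# Persist the array definition and watch the initial resync
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat
```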
As for ECC blowing things up, you have the same risks with BIOS or software RAID, except you have better recovery tools with open-source software RAID than with the average hardware RAID.
But you need an OS/bootloader able to handle that. Windows doesn’t support LVM. I could maybe use QEMU, but by that point I’m probably better off with whatever Microsoft’s equivalent is.
Sort of. But if you have firmware RAID 1, usually there is no RAM cache involved; it just writes to two devices at once. It’s slightly better.
And back to recovery: it’s RAID 1. None of the firmware RAID 1 setups I’ve seen need anything special; you can just take the drives out and put them in something else, and they act like independent identical drives. When you put them back into the firmware RAID system you will need to reconfigure, but you wouldn’t use a RAID 1 like that anyway.
Oh, agreed, I’m talking about Linux only. Windows also has its own software RAID.
I can only talk about Linux, since DM RAID is widely used on servers to create RAID 10 or some RAID 5… but I bet Windows software RAID is good as well.
DM RAID is a core part of Linux software RAID; LVM and other friendlier solutions use DM RAID for the low-level operations.
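As a rough illustration (placeholder devices and sizes), a mirrored LV in LVM ends up as a dm-raid target underneath:

```
# Two PVs in one volume group (hypothetical devices)
pvcreate /dev/nvme1n1 /dev/nvme2n1
vgcreate vg0 /dev/nvme1n1 /dev/nvme2n1

# A mirrored LV; LVM delegates the actual mirroring to the kernel's dm-raid/MD code
lvcreate --type raid1 -m 1 -L 100G -n data vg0

# Inspect the device-mapper tables backing it
dmsetup table | grep raid
```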
I can agree with your point that software RAID is strictly OS-dependent, but I would still strongly encourage this option, because BIOS RAID is not easy to debug or recover when problems arise… and software RAID doesn’t have much of a performance penalty.
If you are doing RAID 1 on the Framework, then anything works. I do RAID 0 and it works fantastically. Most of my critical data has backups, so there’s not much to worry about.
Also, a good use for software RAID in my experience: disk upgrades (different brand, different size). With software RAID you can convert RAID 0 or RAID 1 to RAID 5, then reduce RAID 5 back to RAID 0 or RAID 1, and it’s not a complex thing to do. With hardware RAID you’re kind of locked to the same hardware and the same size (or larger).
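As a sketch of that kind of migration with MD (hypothetical device names; back everything up first, reshapes run for a long time and are not risk-free):

```
# Convert an existing two-disk RAID 1 into a two-disk RAID 5 in place
mdadm --grow /dev/md0 --level=5

# Add a third drive and reshape to a proper three-disk RAID 5
mdadm --add /dev/md0 /dev/nvme3n1
mdadm --grow /dev/md0 --raid-devices=3

# Then grow whatever filesystem sits on top, e.g. for ext4:
resize2fs /dev/md0
```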
Modern BIOS RAID (well, it’s been like that for a long time) IS software RAID in a trenchcoat, with a weird driver on Windows.
There is stuff like Intel VROC that is slightly more advanced software RAID, but it still uses the main CPU and memory and so on.
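You can see that from Linux, at least for Intel’s flavour: the firmware RAID (IMSM/VROC) metadata is handled by plain mdadm, and the “controller” is basically just a metadata format (output obviously depends on the platform):

```
# Show what the platform firmware RAID supports, if anything
mdadm --detail-platform

# Firmware-defined arrays just assemble as ordinary MD devices
mdadm --assemble --scan
cat /proc/mdstat
```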
Somebody makes hardware raidz controllers?!
At least on semi-modern AMD platforms, BIOS RAID certainly can use RAM caching, and IIRC it is enabled by default.
It’s usable, but nowhere near the performance or functionality of md or pretty much anything else you get on Linux. The main issue with it is that you can’t really boot from it.
One of the things I kind of miss after moving to ZFS: the flexibility of md or Btrfs was on a whole other level.