BIOS RAID Support

Hello,

Does the Framework Desktop BIOS support RAID 0 or RAID 1?

I want to put the two PCIe drives in the machine into RAID 0, but I’m also interested in RAID 1.

Thanks

1 Like

One of several videos on RAID that Wendell has done.

Don’t use hardware RAID or Motherboard Software RAID. Use ZFS or something.
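For example, on Linux a two-drive ZFS mirror is a one-liner (a sketch only; the pool name `tank` and the device names are placeholders for your setup):

```shell
# Create a mirrored (RAID 1-like) pool from two NVMe drives.
# WARNING: this destroys any existing data on the listed devices.
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1

# Verify pool health and layout.
zpool status tank
```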

2 Likes

@GhostLegion This is a four-year-old analysis, and it wasn’t performed on an AI MAX+ 395 system. We should redo it on this hardware, don’t you think?

Despite that, the advice holds: software RAID is a whole lot better and more reliable than semi-hardware RAID, at least as long as you’re not using Windows.

4 Likes

Yes, but that wasn’t the original question. I don’t want to be told I shouldn’t do it, as I’m not looking for advice. Rather, I am asking whether it can be done.

Could someone who has a Framework Desktop confirm whether it is available as a hardware setting in the BIOS or not?

Thanks

True enough, but it’s not like you’ve gone into great detail about why hardware RAID is somehow required for you, why ZFS doesn’t meet your requirements, or why other forms of software RAID aren’t useful to you.

Just because it can be done doesn’t mean it should be done. I just linked what seemed the most appropriate of the three videos Wendell has done. Hardware RAID doesn’t do what it needs to do to protect against data corruption, and neither does software RAID from a motherboard. The only RAID worth talking about comes from filesystems, which makes your question moot: it’s available from within the filesystem and doesn’t need explicit support from the motherboard.

Heck, you even said that you’d consider either RAID 0 or 1 and they have wildly different performance characteristics. So in the absence of seeing why you need this, yes, I’m going to tell you it’s a bad idea. It’s your computer and your data so you do what you want. Maybe you didn’t know hardware RAID is bad, now you do. What you do with that knowledge is your business.

@GhostLegion could you explain more about this point? I’m curious.

I don’t think I’ve seen any such option in the BIOS, but have a look yourself. Geerlingguy posted a video walking through the entire Framework Desktop BIOS and its available settings:

Framework Desktop Mainboard · Issue #80 · geerlingguy/sbc-reviews

3 Likes

@PenTurDucKenLinE

It’s a fairly basic primer, but it’s a decent starting point.

2 Likes

Cool! Thank you! I’ll check that out! :grinning_face:

I don’t have any experience of ZFS but Btrfs has been running on a NAS here for about half a decade without issue.

What the article doesn’t mention is what happens when you have data inconsistency on a degraded h/w RAID array. This varies, but with consumer motherboard implementations you’re pretty much screwed in terms of rebuilding the array automatically and maintaining any data integrity.

Data pools, scrubbing, de-dupe & snapshots are basic functions of a modern filesystem - well they are if you want to retain data integrity for more than a few years anyway.
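On Btrfs, for instance, a periodic scrub is a single command (a sketch; `/mnt/nas` is a placeholder mount point):

```shell
# Start a scrub: re-reads all data and metadata, verifies checksums,
# and repairs mismatches from a good redundant copy where one exists.
btrfs scrub start /mnt/nas

# Check progress and error counts.
btrfs scrub status /mnt/nas
```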

3 Likes

Btrfs and ZFS are really pretty close to the same thing, though the latter has another decade or so of testing behind it. When I switched to Btrfs from md RAID, it was a pretty big improvement; md RAID constantly needed me to rebuild the mirrors because of its lack of journaling and the like.

And yeah, the semi-hardware, semi-software RAIDs pretty much lack the benefits of the old hardware RAIDs, and they’re also motherboard-specific, which makes recovery a lot harder if your motherboard fries. Once, with a ZFS pool, I swapped in new drives one at a time, doing a replacement each time; then I took two of the three old drives, plopped them into another system, and all of the files were instantly readable there… With a semi-hardware RAID that would have been unlikely to work.
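The one-at-a-time swap described above maps to one `zpool replace` per drive, and the move to another machine is just an import (a sketch; the pool and device names are placeholders):

```shell
# Replace one old drive with a new one; ZFS resilvers onto it.
zpool replace tank /dev/sdb /dev/sdd
zpool status tank    # wait for the resilver to finish before the next swap

# On the other machine, a pool with enough redundancy to tolerate
# the missing drive can simply be imported:
zpool import tank
```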

1 Like

The other problem with hardware RAID is the likelihood of killing another drive when doing a rebuild of the array.

Usually the drives are all the same brand/model and date code, so unless one fails very prematurely (bathtub curve) there’s a fairly high probability that another will fail during the rebuild process. That in turn means you have to try to copy all the data off the array before a rebuild (which will take hours or possibly days). Then of course there is no data scrubbing, so there’s a high likelihood of a data-integrity failure somewhere during the process.
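A back-of-envelope calculation shows why this compounds. The 2% per-drive failure chance over the rebuild window is an illustrative assumption, not a measured figure:

```shell
# If each of the 7 surviving drives in an 8-drive array independently
# has a 2% chance of failing during the rebuild window, the chance
# that at least one of them dies is 1 - 0.98^7.
awk 'BEGIN { p = 0.02; n = 7; printf "%.1f%%\n", (1 - (1 - p)^n) * 100 }'
# prints 13.2%
```

With larger arrays, longer rebuilds, or correlated wear (same date code), that number only goes up.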

If I sound somewhat jaded (or even bitter & twisted) then that’s because of “classic” RAID arrays I have known, maintained and loathed :wink:

Hell, even Microsoft recognised the stupidity of hardware RAID back in the 00s - their “Home Server” product did a subset of what Btrfs does now for a pool of disks, with some “optimisation” which on occasion wasn’t helpful (but got fixed). Windows offers more reliable RAID options than trusting a specific hardware manufacturer’s drivers.

IMHO, if people are doing h/w RAID with NTFS/ext4 as the filesystem, then best of luck with that. Even with Btrfs it’s an unnecessary pain replacing disks in a h/w array.

It’s not like Strix Halo is going to be short of CPU cycles anyway…

1 Like

Looks like a lot of people here are not understanding the point. I would love to have RAID 0 at the hardware level, as the APU seems to support it. The point of RAID 0 is not to get any redundancy at all; the point is to double the disk speed and fully utilize the motherboard’s capabilities. It would be a huge factor when loading 120B models, or when switching models in general. At the moment I have a storage pool in Windows, so 500 GB of one 2 TB disk is for C: and Windows, and the rest is pooled with the other 2 TB disk as storage. I really hate this setup and would switch to full hardware RAID 0 in a heartbeat.

People here are talking like we have full disk farms at our disposal. We do not. We have only two M.2 slots, so talking about redundancy and multiple disks here is pointless. If the APU supports RAID, I would love to see it as an option in the BIOS and would start from scratch in a heartbeat.

It supports semi-hardware RAID, which is to say almost entirely software RAID that fakes being hardware RAID, and that only works in Windows. Since the APU itself is doing all the work anyway, you might as well do software RAID from the beginning.
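On Linux, a striped (RAID 0) array across the two M.2 drives takes a couple of commands with mdadm (a sketch only; the device names and filesystem choice are placeholders, and this destroys any existing data on the drives):

```shell
# Stripe the two NVMe drives into one md device.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

# Put a filesystem on it and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/fast
```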

Off-topic, but … Disagree.

My 15 IronWolf disks beg to differ: they make up three parity-resilient virtual disks totalling 145 TB formatted on 182 TB of raw storage space pools, and with them I get a symmetrical, constant 800 MB/s to 1.2 GB/s read and write, either from one array to another or between an array and M.2 NVMe. One just needs to know how to properly relate column and drive count to interleave size to LUA / cluster size to make it rock and roll like that. And the three arrays have been rock-solid stable and, as already noted, fast as all hell.
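The sizing relationship mentioned here is, roughly, that the filesystem cluster / allocation unit size should equal the interleave size times the number of data columns, so that one full cluster write spans all columns. The numbers below are hypothetical for illustration, not the poster’s actual configuration:

```shell
# Hypothetical Storage Spaces layout: 64 KB interleave, 4 data columns.
# A cluster size of interleave * data columns (256 KB) means each full
# cluster write touches every data column exactly once.
awk 'BEGIN { interleave_kb = 64; data_columns = 4;
             printf "%d KB\n", interleave_kb * data_columns }'
# prints 256 KB
```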

I actually have no idea how well software RAID works on Windows, since Windows is only ever a guest operating system for me, never a native one - which is why I always add that caveat. The last time I used Windows as a host OS, back in Windows 7, software RAID was pretty terrible on it.