I ordered my (batch 18) Framework without storage devices for a few reasons; now that my batch is up and my card has been charged, I am looking into getting the drives for it.
And I find myself wondering: if I populate both the primary and secondary NVMe slots, can they be set up as a hardware/BIOS-UEFI RAID 1?
Thank you for your input. Upon review, your post seems to be about finding the best RAID to use, and all of the named options I saw there (ZFS specifically) are software RAIDs, usually handled by the installed OS.
My specific question is whether the UEFI BIOS in this machine is capable of RAID (specifically RAID 1).
I was thinking of getting two 2TB drives and mirroring them so that if one died it wouldn’t matter… But as I want to boot my OS from this, the RAID implementation must sit at a lower level than the OS…
Your topic and mine are both RAID-related, but that is the only similarity I see…
Hardware RAID makes no sense anymore except with dedicated controllers that are replaceable and readily available.
In a laptop, the “RAID” feature is also only a pseudo/software RAID feature.
When using Linux as the OS, it is very easy to set up a software RAID 1/0 array.
An advantage of software RAID is that it is portable to another computer, even one that has no RAID support.
But the disks should have the same speed and size; the slower one will limit the overall performance of the array.
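For what it is worth, the basic commands for a two-disk mirror with mdadm are short. A minimal sketch, assuming the drives show up as /dev/nvme0n1 and /dev/nvme1n1 (check with lsblk and adjust):
# create a two-device RAID 1 array from the assumed drive names
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# put a filesystem on the array and persist the array definition (Debian/Ubuntu config path shown)
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
A boot drive needs a bit more care (the EFI system partition cannot simply live inside the md array), so treat this as the plain data-array case.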
The current best option for RAID is simply running btrfs: install the OS on drive 1, then convert it to RAID 1 or 0 and tell btrfs to handle it (the conversion commands are sketched after the output below). Simple as that.
Here is my RAID setup (on a server):
root@terminus:~# btrfs fi show
Label: 'storage'  uuid: 9ef13874-8e75-4235-84c5-0daee2370c2e
        Total devices 2   FS bytes used 7.92TiB
        devid    1 size 9.09TiB used 7.98TiB path /dev/sda1
        devid    2 size 9.09TiB used 7.98TiB path /dev/sdb1
root@terminus:~# btrfs filesystem usage -T /export/
Overall:
    Device size:                  18.19TiB
    Device allocated:             15.96TiB
    Device unallocated:            2.23TiB
    Device missing:                  0.00B
    Used:                         15.85TiB
    Free (estimated):              1.17TiB   (min: 1.17TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB   (used: 0.00B)

              Data      Metadata  System
Id Path       RAID1     RAID1     RAID1    Unallocated
-- ---------  --------  --------  -------  -----------
 1 /dev/sda1   7.97TiB  10.00GiB  8.00MiB      1.12TiB
 2 /dev/sdb1   7.97TiB  10.00GiB  8.00MiB      1.12TiB
-- ---------  --------  --------  -------  -----------
   Total       7.97TiB  10.00GiB  8.00MiB      2.23TiB
   Used        7.92TiB   9.33GiB  1.11MiB
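For reference, converting a single btrfs drive into a RAID 1 goes roughly like this; the second device name and the mount point are assumptions, not taken from my setup above:
# add the second drive to the existing btrfs filesystem mounted at /
btrfs device add /dev/nvme1n1 /
# rewrite existing data and metadata into RAID 1 profiles across both drives
btrfs balance start -dconvert=raid1 -mconvert=raid1 /
# verify the profiles afterwards
btrfs filesystem usage -T /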
You certainly make a claim that is by no means small (that hardware RAID makes no sense)…
Whilst I strongly disagree, in the sense that I believe everything has its time and place depending on goals, circumstances, and available technology… (as a tangent to the main conversation herein) I would also give anything to find out what information, situations, or events have led you to that perspective and why you believe it.
(As a tangent to a tangent conversation) Windows is the desired OS and I do not ever see myself using Linux as a daily driver; I have tried Linux a few times over the years as a daily driver, my experience has been very similar to Linus’s from Linus Tech Tips, and I have no desire to fight that fight on my daily laptop… I do have Linux running at home (I have 3 Proxmox boxes; one of which is my router running OPNsense, plus a Linux VM with AdGuard Home, another Linux VM with Docker running nginx/MeshCentral and a few others, another Linux VM with the Ubiquiti controller software, and yet another with Home Assistant)… The key here, however, is that Proxmox is set up as an appliance using the vanilla distro and not tweaked in any way, and all of those VMs have checkpoints made before I change literally anything, because in my experience most changes fail and break everything for having tried! Sure, Linux is powerful, but it has kicked my ass hard enough, and enough times, that I will only use it where forced to, I will checkpoint every single change I make in a VM, and I will avoid a bare-metal install like the plague!
Oh. That one is simple. If your RAID runs fine for 8 years and then the controller breaks, you will be lucky to find a controller on eBay that still runs and recognizes your RAID.
Also, hardware RAID barely has any speed advantage anymore if you use decent NVMe SSDs.
Hardware RAID usually uses SATA or SAS, which is limited in speed compared to even an old PCIe 3.0 NVMe SSD.
Regarding Windows: everyone has a choice to make, and I make the one tied to “real” privacy.
I therefore host my own servers/services (firewall, DNS + RPZ, mail with automated protection, web with dynamic blacklisting, NAS, media centers, Matrix, etc.) and block the usual suspects (Facebook, Twitter, Microsoft/Google/Apple data-leech target servers) in both directions. It is amazing how few ads and how little crap I get through that.
And I do all that while learning cloud technologies (Kubernetes on bare-metal hardware). OK, I have 35 years of experience with Linux, since I started with Linux 0.07p11, or was it 0.11p7? I don’t remember anymore; too long ago.
The counter-argument here is stronger than I expected; that said, I was hoping for a purely redundant RAID 1, i.e. two drives as an exact mirror of each other, and I want my OS on said drives, meaning the OS cannot be what is providing the mirroring…
Best: use btrfs as the filesystem and configure data integrity through btrfs.
It is the best that is available at this time, IMHO. ZFS requires too much RAM (last I checked, which was some years ago).
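The integrity check on btrfs is a scrub. A minimal sketch, using the /export mount point from my earlier output as the example:
# verify all checksums and repair damaged blocks from the RAID 1 copy where possible
btrfs scrub start /export
# check progress and results later
btrfs scrub status /export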
That would be a great idea… except that, last time I looked, Windows can’t use btrfs without a super janky homebrew driver (I have used said driver on my dual-booted Steam Deck… and janky is an understatement!)
The problem is Windows altogether. They have very good marketing, but technically they do everything to force others to follow their poorly- or over-documented standards in order to keep their market share.
The only advice I can give you is to use an external NAS (connected through USB-C) that keeps your valuable data and performs a decent integrity check.
I have done so by building an external NAS, 2x10TB disks as RAID 1 (mirroring) using btrfs, configured with all the bells and whistles to check data integrity and redundancy.
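Creating such a mirror is essentially a one-liner; a rough sketch, where the disk names /dev/sda and /dev/sdb and the mount point are assumptions rather than my exact setup:
# create a btrfs filesystem with data and metadata mirrored across two disks
mkfs.btrfs -L storage -d raid1 -m raid1 /dev/sda /dev/sdb
# mount it; either member device works, btrfs finds the other one
mount /dev/sda /export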
My family uses Nextcloud clients on their devices (phones, tablets and computers) to sync specific work directories to the NAS, and that’s all they need to do. The rest I handle by doing regular backups of the data to a different device.
Do any laptops do hardware RAID 1?
My understanding is that when RAID is mentioned in the BIOS of a laptop, it is still only referring to software RAID.
Only servers with actual PCIe RAID cards do real hardware RAID.
Depending on where you draw the line on hardware RAID, they have not for quite a long time.
BIOS RAID has been mostly software for ages (with some hardware-assisted anomalies like VROC, which was technologically cool but basically unusable because of licensing), outside of some very server-specific boards.
It is software RAID that isn’t visible to Windows because it happens at the driver level, though, so it does make a difference there.
Motherboard RAID is still essentially software RAID, yet not as portable as pure software RAID.
Just run pure software RAID (md/LVM, ZFS, btrfs, etc.).
I suspect you will only find matching drives for the 2280 and 2230 NVMe slots to be of the slower variety (i.e. DRAM-less drives). Or to phrase it another way, I could not find a modern Gen 4 2230 drive with a DRAM cache. For example, the WD Black 2230/2280 options offered by Framework do not have the same performance characteristics.
RAID is an uptime mechanism, not a backup. If you are running a mirror, an accidental deletion will delete the file from all drives. Use backups/versioning/snapshots instead. If you are worried about bitrot, use a filesystem with checksumming. RAID can be a performance mechanism if set up correctly, but the default FW16 configuration is not really the right setup for performance.
Do you really need RAID? Just run a good NVMe Gen 4 drive and save yourself the headaches. It will be fast enough. In an emergency or upgrade you can easily access the data on the single drive. Use the second drive slot as extra storage or as a backup drive.
Did anybody actually answer the question? While I can’t think of a good reason to use BIOS RAID in a laptop with solid-state storage, I am curious whether Framework bothered to add it.
You CAN actually do software RAID 1 in Windows on the system disk using dynamic disks if you have the Pro version. You install Windows as normal, then convert the disks to dynamic disks, and then add a mirror (rough steps sketched below). When you boot, the boot menu will show two Windows entries. In my own testing it does not add any performance benefit, so you are only really protecting against a disk failure. I don’t think it’s worth the hassle.
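For reference, the rough sequence in diskpart looks like this; the disk and volume numbers are assumptions (Windows on disk 0, empty second drive as disk 1), and Windows Pro plus an elevated prompt are assumed:
diskpart
rem convert both disks to dynamic
select disk 0
convert dynamic
select disk 1
convert dynamic
rem mirror the system volume onto the second disk
select volume c
add disk=1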
If Windows is your only OS option and you need a real-time local redundant copy of your data, consider something that will just sync/copy files to the second drive.
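One simple option is a scheduled robocopy mirror; the paths below are placeholders, and note that /MIR also removes files from the destination that were deleted from the source:
rem mirror the working folder onto the second drive (placeholder paths)
robocopy D:\Data E:\DataMirror /MIR /R:2 /W:5 /LOG+:E:\robocopy.log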
Hate on Windows all you like; bloated and privacy-invading, all true…!!!
Still, I will take that over having to fight for hours with my OS to make it do the simplest damn thing, then being forced to look up 12 different guides to finally get it to work mostly right, except it is still weird/wonky… then the moment you try to tweak something to fit your needs you wind up breaking 5 unrelated things, because dependencies got changed but those other things needed the older version, and the tweak you tried didn’t even work… if it is not clear, I am describing Linux here!!!
So for all of Windows’ faults, and there are many, I will take those faults over fighting a fight just to use my own god-dang computer!