The Benchmark Thread - Storage Expansion Cards

Continuing @CheeseWizard’s threads on benchmarking different components of the Framework laptop, this thread covers benchmarks of the storage expansion cards.

I got a 250GB expansion card for Christmas and tested it out in Linux with the factory formatting intact: an EFI partition and an exFAT data partition.
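If you want to check the factory layout before touching anything, something like this works (a sketch; I’m assuming the card enumerates as /dev/sda, so check dmesg for the actual device node):

    # Show the factory partition layout of the expansion card
    lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/sda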

You can see the readings change as the benchmark hits the EFI partition and transitions out into the main partition. These results are exactly as promised; the write speeds are actually far higher than promised:

Comparing this to a PCIe 3.0 x4 NVMe SSD in a USB 3.2 Gen 2 enclosure:

It actually performs worse! Although the Framework card’s write speeds do seem overly optimistic.

I proceeded to install Ventoy on it and move over my Windows install as a .VHD:
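For anyone following along, the Linux-side install is roughly this (a sketch: /dev/sdX is a placeholder for the card’s device node, windows.vhd is a stand-in name, and booting a .VHD also needs Ventoy’s vhdboot plugin, which I’m not showing here):

    # Install Ventoy onto the card (DESTROYS existing data; verify /dev/sdX first!)
    sudo sh Ventoy2Disk.sh -i /dev/sdX
    # Copy the Windows image onto the exFAT data partition (mounted here at /mnt/ventoy)
    cp windows.vhd /mnt/ventoy/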

Then I started seeing thermal throttling: copy speeds started at >400 MB/s, then dropped to 200 MB/s. Not too bad, but I could feel the end of the card near the USB connector getting hot.
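If you want to watch the throttling happen rather than just feel it, you can poll the drive temperature during a long copy. A sketch, with the caveat that whether SMART data passes through at all depends on the card’s USB bridge, and -d sat is a guess:

    # Poll the drive temperature once a minute during the copy
    while true; do
        sudo smartctl -d sat -A /dev/sda | grep -i temp
        sleep 60
    done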

Running Windows off it worked fine at first, but the flash was hot and kept getting hotter, and eventually it got very slow.

I was sort of expecting this and installed a thermal pad as Nirav describes:

It was very easy. For future reference, you need a 12.5 mm × 10 mm pad; I had some M.2-sized pads and had to cut a corner out of them. As Nirav states, you should only cover the top half of the flash chip: if you look at the case, that is the only part that contacts the aluminum shell. The rest sits over plastic, and a pad contacting the plastic will make the circuit board difficult or impossible to reinstall.

After this easy modification, I noticed that the aluminum shell heated up, so the pad was doing its job. I benchmarked the card in Windows before and after using CrystalDiskMark. Unfortunately I lost the “before” chart because I had to redo the Windows install, but here’s the “after”:

This is good, but I’m beginning to think that booting through Ventoy and then a .VHD image isn’t ideal: it adds a large overhead that slows things down. It’s good enough for playing around and writing BIOS and SSD updates, but for full-time use I should probably go with a more direct install.
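One way to separate the card’s own speed from the Ventoy/.VHD overhead would be to benchmark the raw block device, underneath the filesystem and image layers (read-only, but still double-check the device node):

    # Raw sequential read from the card itself, bypassing Ventoy and the VHD
    sudo hdparm -t /dev/sda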

I’m very pleased with the expansion card: it opens up a lot of possibilities and is quite fast. The thermal transfer pad modification works; you can definitely feel the heat being transferred to the aluminum.


Does anyone know how to properly benchmark the storage expansion cards using FIO?

I’m currently using FIO with this job file to benchmark my 1TB expansion card; however, the sequential read speed doesn’t match or exceed the claimed speed on the product page:

Sequential read: 878 MiB/s
Sequential write: 805 MiB/s
Random read (4K): 52.5K IOPS
Random write (4K): 72.4K IOPS

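Two things worth checking. First, units: fio reports MiB/s while product pages usually quote MB/s, so 878 MiB/s is already about 920 MB/s, and part of the gap may just be that. Second, hitting the ceiling usually takes big blocks, direct I/O, and a deeper queue against the raw device rather than a file on a mounted filesystem. A command-line sketch (the parameters are illustrative, not known-good for this card):

    # Sequential read straight from the block device, no filesystem in the way
    sudo fio --name=seqread --filename=/dev/sda --readonly \
        --rw=read --bs=1M --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=30 --time_based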

I wanted to figure out whether the expansion cards would be useful for alternate OSes. Although I could do FreeBSD kernel development using ZFS boot environments on the NVMe drive I installed, I decided to try the 1TB card. What I found was somewhat surprising.

My benchmark is “buildworld”, which builds the entire operating system and all of its tools: compilers, editors, etc. Unlike Linux, FreeBSD is a whole OS, so there is also a shorter build (buildkernel) for just the kernel.
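For anyone who hasn’t run it, both builds are kicked off from /usr/src (a sketch; the -j value matches the logs below):

    cd /usr/src
    make -j24 buildworld     # full userland: compilers, libraries, tools
    make -j24 buildkernel    # just the kernel (GENERIC by default); much shorter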

On the internal NVMe, the buildworld time was:


World build completed on Tue Sep 30 22:18:25 EDT 2025
World built in 5772 seconds, ncpu: 12, make -j24

On the 1TB expansion card I got:

World build completed on Thu Oct 2 19:41:43 EDT 2025
World built in 5914 seconds, ncpu: 12, make -j24

For just the kernel:

Internal NVMe:

Kernel(s) GENERIC built in 342 seconds, ncpu: 12, make -j24

Expansion Card:

Kernel(s) GENERIC built in 352 seconds, ncpu: 12, make -j24

Note that the NVMe drive is nda0: <WD_BLACK SN770M 1TB 731100WD 251211800642>
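That device string is the boot-time probe line; on FreeBSD you can pull it back out of the kernel message buffer, or list the NVMe hardware directly (assuming the drive attached as nda0):

    # Show the probe line for the NVMe namespace
    dmesg | grep nda0
    # Or list all NVMe controllers and namespaces
    nvmecontrol devlist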

These numbers surprise me a bit, as I’d expect the internal NVMe to be faster.

The system is a new Framework 12 with 48GB of RAM (I never run out during the build) and the i5 processor.

The files were probably cached in RAM. After the files are read the first time, the OS keeps a copy of them in RAM. It even performs write operations on the copy, syncing the files back to the drive in the background from time to time. As long as you have enough free RAM to hold the files, you are not held back by a slow drive after the first read of each file. Your compile could even finish before the files are physically written back to the drive.


Also, if you are able to saturate the CPU with 24 parallel threads, you are bound by CPU performance, not drive speed, and the effect of the drive is small in comparison.

The files are all read once and turned from .c into .o (therefore written); then the .o files are read and linked into executables, libraries, and the like. So the source files are all read once and the objects all written once. I could take a look at the buffer cache or the ZFS stats to see the hit rates.
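For reference, the ZFS side is a one-liner on FreeBSD; the counters are cumulative since boot, so what matters is how they move over the course of a build:

    # ARC hit/miss counters; a high hit ratio means reads were served from RAM
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses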

But the more salient point is that, for a build-heavy workflow, the expansion card is a nice option for carrying around multiple systems. I’ll probably get another one for a Linux distro. The 250GB card looks cheap enough, and I won’t store that much on the test modules.