Strange SSD speed measurements

FW 16 w/ 64G RAM (2x32G) and a 1T SSD (WD_BLACK SN770), running Void Linux.

According to its spec, the SSD should be able to sustain about 5.1 GB/s sequential reads.

However, if I run hdparm -t on a very lightly loaded system (i.e. nothing else running, just a single root login in a text console, no X), I get nowhere near that. The disk was trimmed and the charger is connected.
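For reference, the measurement is just hdparm's buffered-read timing against the block device, something like this (the partition name is only an example, substitute your own):

sudo hdparm -t /dev/nvme0n1p2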

Even more interesting is that the speed varies depending on which partition I run hdparm against. The fastest partition reads at about 2.9 GB/s and the slowest at about 2.1 GB/s. Repeated runs of hdparm on the same partition give consistent results.

So I get around 40% to 60% of what I would expect, and how much apparently depends on which part of the flash is being read, which is kind of spooky.

Could anyone shed some light on this?

The specs listed by manufacturers are usually just for marketing purposes and rarely reflect real-world performance.

The speed your SSD can run at is also largely impacted by the total amount of free space left (more free space means faster reads/writes).

You can try to validate your results against tests performed by reviewers such as Tom’s Hardware. Here’s a link to their review: 2TB Performance Results - WD Black SN770 SSD Review: A Wolf in Sheep's Clothing (Updated) - Page 2 | Tom's Hardware

Well, the disk is fairly empty, roughly 25% used.

Interestingly, the specs say 740K IOPS for random reads, which with 4 KB blocks works out to roughly 3 GB/s, i.e. roughly what I measure as best case, on a partition that has only 8% space usage.
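Spelling that arithmetic out: 740,000 IOPS × 4096 bytes ≈ 3.03 GB/s (or about 2.96 GB/s if the 4 KB is taken as 4000 bytes), which is right around my best-case number.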

I remember someone on this forum testing with dd and getting artificially low results because they were bottlenecked by dd running as a single-threaded reader.

Maybe a similar thing is happening to hdparm? I’ve never really used that tool so I wouldn’t be able to say for sure.
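For reference, the kind of single-threaded dd read people usually mean looks something like this (device and count are just examples):

sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=10000 iflag=direct status=progress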


I have the 1 TB SN770 as well.

As I understand it, hdparm -t does a single-threaded synchronous sequential read. For more comprehensive testing, you can try fio. Something like:

sudo fio --readonly --filename=/dev/nvme0n1 --direct=1 --rw=read --bs=1M --ioengine=io_uring --iodepth=1 --runtime=10 --numjobs=1  --group_reporting --name=test --eta-newline=1

This example does a single-threaded sequential asynchronous read with a queue depth of 1, so probably about the same performance as hdparm. For me, those both give ~2700-2800 MB/sec.

Increasing either the queue depth or the number of jobs to 4 gives me ~3580-3600 MB/sec. Beyond that, there’s not much of a difference.
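For instance, the queue-depth variant only changes one flag (same caveat as above, the device path is an assumption about your setup):

sudo fio --readonly --filename=/dev/nvme0n1 --direct=1 --rw=read --bs=1M --ioengine=io_uring --iodepth=4 --runtime=10 --numjobs=1 --group_reporting --name=test --eta-newline=1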

SSD performance is not critical to me, but I am curious why reviews show sequential reads around 5000 MB/sec.

It could be due to a different hardware platform, different OS, and/or different tools, I guess.

-Corey


I agree. “fio” is a better test tool than “hdparm”.

hdparm seems to have a speed limit at about 2500 MB/s.

Are you running plugged into the wall or on battery? FW laptops default to PCIe gen 3 speeds when on battery.


You beat me to it. I just ran into this last night and was coming here to comment. PCIe gen 3 limits the SSD quite a bit, and plugging the laptop in immediately sped it up to the nearly 7 GB/s I was expecting. There is a BIOS setting for this if it’s causing a problem for anyone.
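If you want to confirm which link speed was actually negotiated, lspci can show it (the 01:00.0 address below is just an example; find your NVMe controller’s address with a plain lspci listing first):

lspci | grep -i 'non-volatile'
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

In the LnkSta line, 8 GT/s corresponds to PCIe gen 3 and 16 GT/s to gen 4.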

Wow, ok, that is very interesting. I was testing while plugged in, and here’s what’s happening. If I do the following:

  1. Unplug laptop.
  2. Suspend laptop (systemctl suspend).
  3. Plug in laptop.
  4. Resume laptop.

…then the speeds are limited as I reported earlier.

Run status group 0 (all jobs):
   READ: bw=3415MiB/s (3580MB/s), 3415MiB/s-3415MiB/s (3580MB/s-3580MB/s), io=33.4GiB (35.8GB), run=10002-10002msec

If I then:

  1. Unplug laptop.
  2. Plug in laptop.

…then I get much better speeds.

Run status group 0 (all jobs):
   READ: bw=4983MiB/s (5225MB/s), 4983MiB/s-4983MiB/s (5225MB/s-5225MB/s), io=48.7GiB (52.3GB), run=10001-10001msec

It would seem that whatever mechanism does the speed limiting only kicks in if the plug/unplug event happens while the laptop is not suspended. I can trick it into running full speed while unplugged if I unplug while suspended.

This is all via:

sudo fio --readonly --filename=/dev/nvme0n1 --direct=1 --rw=read --bs=1M --ioengine=io_uring --iodepth=1 --runtime=10 --numjobs=4  --group_reporting --name=test --eta-newline=1

Note that I’m not the OP in this thread, but that at least explains what I am seeing.

-Corey


I get the same: with 4 jobs and a larger queue depth I get 4.5 to 5 GB/s, so it is all good.
Thanks for the tip about fio, I hadn’t used it before.
