According to its spec, the SSD should be able to sustain about 5.1 GB/s for sequential reads.
However, if I run hdparm -t on a very lightly loaded system (i.e. nothing else is running, just a single root login in a text console, no X) I get nowhere near that. The disk has been trimmed and the charger is connected.
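For reference, hdparm -t just gets pointed at the block device or at one of its partitions (the device names below are the usual ones for the first NVMe drive; yours may differ):

sudo hdparm -t /dev/nvme0n1
sudo hdparm -t /dev/nvme0n1p3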
Even more interesting is that the speed varies depending on which partition I run hdparm on. The fastest partition reads at about 2.9 GB/s and the slowest at about 2.1 GB/s. Repeated runs of hdparm on the same partition give consistent results.
So I get around 40% to 60% of what I would expect, and how much I get apparently depends on which part of the flash is being read, which is kind of spooky.
Interestingly, the specs say 740K IOPS for random reads, which with 4 KB blocks works out to roughly 3 GB/s, i.e. about what I measure in the best case, on a partition that is only 8% full.
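(Spelling it out: 740,000 IOPS × 4,096 bytes ≈ 3.0 GB/s, assuming 4 KiB blocks.)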
I remember someone on this forum testing with dd and getting artificially low results because they were bottlenecked by running dd single-threaded.
Maybe a similar thing is happening to hdparm? I’ve never really used that tool so I wouldn’t be able to say for sure.
This example does a single-threaded sequential asynchronous read with a queue depth of 1, so probably about the same performance as hdparm. For me, those both give ~2700-2800 MB/sec.
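For anyone who wants to try the same kind of test, a fio invocation along these lines does a single-threaded sequential async read at queue depth 1 (the device path, block size, and runtime are just example values; --readonly keeps it from writing anything):

sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly --direct=1 \
    --rw=read --bs=1M --ioengine=libaio --iodepth=1 --numjobs=1 \
    --time_based --runtime=10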
Increasing either the queue depth or the number of jobs to 4 gives me ~3580-3600 MB/sec. Beyond that, there’s not much of a difference.
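In fio terms that just means raising --iodepth=4 or --numjobs=4 in a command like the one above (with more than one job, adding --group_reporting sums them into a single result line).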
SSD performance is not critical to me, but I am curious why reviews show sequential reads around 5000 MB/sec.
It could be due to a different hardware platform, different OS, and/or different tools, I guess.
You beat me to it. I just ran into this last night and was coming here to comment. On battery the drive runs at PCIe gen 3, which limits the SSD quite a bit, and plugging the laptop in immediately sped it up to the nearly 7 GB/s I was expecting. This is a BIOS setting if it's causing a problem for anyone.
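If anyone wants to check what the link has actually negotiated, the PCIe attributes in sysfs show it (nvme0 here is just the usual name for the first controller; adjust as needed):

cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width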
Wow, ok, that is very interesting. I was testing while plugged in, and here’s what’s happening. If I do the following:
Unplug laptop.
Suspend laptop (systemctl suspend).
Plug in laptop.
Resume laptop.
…then the speeds are limited as I reported earlier.
Run status group 0 (all jobs):
READ: bw=3415MiB/s (3580MB/s), 3415MiB/s-3415MiB/s (3580MB/s-3580MB/s), io=33.4GiB (35.8GB), run=10002-10002msec
If I then:
Unplug laptop.
Plug in laptop.
…then I get much better speeds.
Run status group 0 (all jobs):
READ: bw=4983MiB/s (5225MB/s), 4983MiB/s-4983MiB/s (5225MB/s-5225MB/s), io=48.7GiB (52.3GB), run=10001-10001msec
It would seem that whatever mechanism adjusts the speed limit only kicks in if the plug/unplug event happens while the laptop is awake, not suspended. I can trick it into running at full speed while unplugged if I unplug while suspended.