LUKS home encryption hardware acceleration?

I am getting only around 1135 MB/s read and 2500 MB/s write on an i7-1165G7 with the default LUKS encryption from the Fedora installer. Does this seem reasonable? I am using the SN850 SSD, so the performance cap is the CPU, which is maxed on all cores during the read test.

I had expected far lower overhead with modern hardware acceleration but I could be wrong.

Interested in any optimizations/tweaks other linux users may have made.

I am curious: how are you doing the benchmarks? For example, here’s my i7-1185G7 with an SN850 using hdparm for read benchmarks. And yes, I have LUKS set up for the entire disk.

$ sudo hdparm -t /dev/nvme0n1
/dev/nvme0n1:
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 9520 MB in  3.00 seconds = 3173.13 MB/sec

Also, if I use GNOME Disk Utility to do read benchmarks with default settings, it reports read speeds of over 6.7 GB/s.

@Chris2 you’re doing a benchmark against the bare disk, without LUKS doing anything. What you want to do is run the benchmark against the LUKS device. Use lsblk -p to figure out which is which.
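For example, output along these lines (purely illustrative; device and mapper names will differ on your system) shows the dm-crypt mapping nested under the partition. The /dev/mapper/... entry with TYPE crypt is the one to benchmark if you want LUKS numbers:

$ lsblk -p -o NAME,TYPE
NAME                              TYPE
/dev/nvme0n1                      disk
├─/dev/nvme0n1p1                  part
└─/dev/nvme0n1p2                  part
  └─/dev/mapper/luks-xxxxxxxx     crypt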

Here’s my system:

root@joogaa /h/peter# hdparm -t /dev/mapper/luks-fedora-root
/dev/mapper/luks-fedora-root:
 Timing buffered disk reads: 2204 MB in  3.00 seconds = 734.33 MB/sec

root@joogaa /h/peter# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
 Timing buffered disk reads: 3044 MB in  3.00 seconds = 1014.35 MB/sec

Don’t panic, it’s a much older CPU (i5-7300U) with a slower drive (Intel 660p 1TB), so the low-ish speeds are to be expected. But yes, I too see a fairly large impact from LUKS even though the CPU supports AES-NI and LUKS is set up to use AES. I remember seeing less impact, but the last time I looked at it might have been in the SATA era years ago, when I still had a machine that didn’t support AES-NI.

So I ran another benchmark today. I happen to have the same model of drive in my desktop. It has a Ryzen 3600 but I suspect the main difference is the heatsink on the drive.

root@doosje /h/peter# hdparm -t /dev/mapper/luks-fedora-root
/dev/mapper/luks-fedora-root:
 Timing buffered disk reads: 2898 MB in  3.00 seconds = 965.33 MB/sec

root@doosje /h/peter# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
 Timing buffered disk reads: 3554 MB in  3.00 seconds = 1184.13 MB/sec

Higher speeds overall, but a similar penalty.

This is why I’m looking for a self-encrypting drive. There are some disadvantages, but there should (theoretically) be no performance penalty.

I believe LUKS uses AES by default. Have you enabled the aesni-intel module?
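In case it helps to verify, a few quick checks (standard commands; the device path in the last one is just an example, adjust it to your own disk):

# Does the CPU advertise AES-NI, and is the module loaded?
$ grep -m1 -wo aes /proc/cpuinfo
$ lsmod | grep aesni

# Raw in-memory crypto throughput, no disk involved
$ sudo cryptsetup benchmark

# Which cipher the existing LUKS volume is actually using
$ sudo cryptsetup luksDump /dev/nvme0n1p2 | grep -i cipher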

Try enabling “advanced format” or 4KiB blocks. You’ll have to:

  1. Use the nvme format command (in the nvme-cli package, usually) to switch from 512-byte blocks to 4KiB blocks (sketched below).
  2. Make sure to set --sector-size to 4096 when formatting with cryptsetup. I believe this is the default when the underlying drive is using 4KiB sectors, but it can’t hurt to specify it explicitly.
  3. Make sure to configure your filesystem to also use 4KiB sectors.

That will erase your drive, so this isn’t an easy thing to switch.
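Roughly, the sequence looks like this. It is only a sketch: device and partition names are placeholders, and the --lbaf index depends on what your drive actually reports.

# List the LBA formats the drive supports (look for the 4KiB entry)
$ sudo nvme id-ns /dev/nvme0n1 -H | grep 'LBA Format'

# Switch the namespace to 4KiB sectors - this wipes the drive
$ sudo nvme format /dev/nvme0n1 --lbaf=1

# Repartition, then create and open the LUKS container with 4KiB sectors
$ sudo cryptsetup luksFormat --sector-size 4096 /dev/nvme0n1p2
$ sudo cryptsetup open /dev/nvme0n1p2 root

# Most mkfs tools pick up 4KiB blocks automatically, but it can be forced,
# e.g. for ext4:
$ sudo mkfs.ext4 -b 4096 /dev/mapper/root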

SN850 with 4KiB sectors, an i7-1185G7, and the default cryptsetup parameters (also using 4KiB sectors):

$ sudo hdparm -t /dev/mapper/root 

/dev/mapper/root:
 Timing buffered disk reads: 4314 MB in  3.00 seconds = 1437.35 MB/sec

$ sudo hdparm -t /dev/nvme0n1

/dev/nvme0n1:
 Timing buffered disk reads: 4778 MB in  3.00 seconds = 1592.54 MB/sec

What distro are you using? I recently learned that Fedora uses Btrfs as the default filesystem and enables zstd:1 compression by default. This could be contributing to the decreased speeds that you and/or others here have seen: in addition to LUKS, there’s the extra overhead of transparent compression/decompression.
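You can check whether compression is actually enabled on a mount with findmnt; on a default Fedora Btrfs install the root mount options should include something like compress=zstd:1:

$ findmnt -no OPTIONS / | tr ',' '\n' | grep compress
compress=zstd:1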

When I run cryptsetup benchmark I get 4600+ MB/s read and write, using memory as the storage I/O (on an i7-1165G7).

Just to add some more data points for those interested.

On a Framework laptop:

Benchmark for the raw partition:

$ sudo hdparm -t /dev/nvme0n1p2
/dev/nvme0n1p2:
 Timing buffered disk reads: 5702 MB in  3.00 seconds = 1900.17 MB/sec

Benchmark for the LUKS2/BTRFS partition:

$ sudo hdparm -t /dev/mapper/cryptroot
/dev/mapper/cryptroot:
 Timing buffered disk reads: 5012 MB in  3.00 seconds = 1670.40 MB/sec

I haven’t tried adding no_read_workqueue or no_write_workqueue to my LUKS2 partition, but there is a great Cloudflare article on it if anyone is interested. Their work was implemented in the mainline kernel as of 5.9.

https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-crypt.html
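For anyone who wants to try it, the flags from that article can be applied to an already-open LUKS2 mapping with cryptsetup (needs cryptsetup 2.3.4+ and kernel 5.9+; "cryptroot" is simply the mapping name from the benchmark above):

$ sudo cryptsetup refresh \
      --perf-no_read_workqueue \
      --perf-no_write_workqueue \
      --persistent \
      cryptroot

Dropping --persistent makes it a one-off change for testing; with it, the flags are stored in the LUKS2 header and re-applied on every unlock.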

TL;DR: From my own investigations, the problem isn’t LUKS, it’s btrfs.

I ended up distro-hopping and used it as an excuse to try to sort out why my SSD speeds were so low.

  1. I used the nvme CLI to set 4KiB sectors and set up my SSD with two partitions: boot, and an LVM-on-LUKS data partition. For the encryption scheme, I’m using 512-bit aes-xts-plain64 (a rough command sketch is below).
  2. Inside the LVM volume, I set up root / as an ext4 partition and /home as a btrfs partition (no compression, metadata duplication, i.e. the defaults).
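For anyone wanting to reproduce a similar layout, a rough sketch of the commands (device, mapping, and volume group names are made up, and the luksFormat step destroys the target partition):

$ sudo cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 \
      --sector-size 4096 /dev/nvme0n1p2
$ sudo cryptsetup open /dev/nvme0n1p2 cryptlvm
$ sudo pvcreate /dev/mapper/cryptlvm
$ sudo vgcreate vg0 /dev/mapper/cryptlvm
$ sudo lvcreate -L 80G -n root vg0
$ sudo lvcreate -l 100%FREE -n home vg0
$ sudo mkfs.ext4 /dev/vg0/root
$ sudo mkfs.btrfs /dev/vg0/home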

The ext4 partition performed much much better than the btrfs partition. As a reminder, both of these partitions are within the same LUKS container, and are both encrypted:

[benchmark screenshot: ext4 w/ LUKS]

[benchmark screenshot: btrfs w/ LUKS]

Addendum: queues

I tried playing around with disabling the dm-crypt read/write workqueues per the link earlier in this thread. I did see a ~33% increase in hdparm read speeds, but a slight decrease in KDiskMark speeds:

[benchmark screenshot: ext4 w/ LUKS & disabled encryption queues]

[benchmark screenshot: btrfs w/ LUKS & disabled encryption queues]

@Anil_Kulkarni Can you please comment on why the SEQ1M Q8T1 read speed decreases when the encryption is disabled? Also, which partition/volume manager and filesystems are you using?

Ah, sorry, the link got lost from the earlier post. All the benchmarks are with an encrypted LUKS partition; the only difference is tuning the dm-crypt workqueues as per the link earlier in the thread. Disabling the workqueues gave that oddly shaped behavior, so I didn’t pursue it further.

It seems Btrfs has improved a lot in recent kernels. I saw the sequential read speed on my Framework Laptop increase to about 5000 MB/s on an SK Hynix P41 with Fedora 37 and encryption enabled. Without encryption, on my desktop with an SK Hynix P31 and kernel 6.1.7, the sequential read and write speeds were both around 3400 MB/s, close to the limit of that drive.