USB4 and Thunderbolt on AMD

The linked thread is about the AMD GPU driver not doing things correctly; that would not affect NVMe drives.
But I have now confirmed the following:
That it attaches as a USB3 device on my Maple Ridge TB4 host seems to be the host's fault, as expected. When I disable ASPM on that mainboard again (which is the factory config), it works reliably with PCIe. So not only does this mainboard bluescreen if you activate ASPM for an Nvidia GPU, it also messes with a bunch of devices behind TB/USB4. This Asus mainboard is just completely broken, and they should be sued for how they disable and block power-saving options and try to hide it, as that leads to default settings that actively break the law.
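If you want to double-check on Linux what ASPM policy is actually in effect (as opposed to what the BIOS claims), a minimal sketch like this reads the standard sysfs entries; the BDF address is a placeholder you would swap for your enclosure's upstream port, and the per-device controls only show up on newer kernels.

# Minimal sketch: print the global ASPM policy and, if the kernel
# exposes them, the per-device ASPM controls. The BDF below is a
# placeholder; pick the real one from lspci.
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
print("ASPM policy:", policy.read_text().strip())

link = Path("/sys/bus/pci/devices/0000:64:00.0/link")  # placeholder BDF
if link.is_dir():
    for attr in sorted(link.iterdir()):
        print(attr.name, "=", attr.read_text().strip())
else:
    print("no per-device ASPM controls exposed for this device")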

Secondly, the ASMedia app consistently shows the ASM2464 in TB3 mode whenever it only comes up with 2 lanes. This is true behind my USB4 hub as well as on the USB4 host and the TB4 host.
Why it recognizes even a TB4 hub running on the Windows USB4 drivers as TB3 might have something to do with the firmware on that hub; I am not sure. Either way, the forced limitation to x2 still seems problematic in and of itself (and I am guessing that is done by the ASM2464's own firmware and might not have been the case for older firmware versions).
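On Linux you can cross-check what the USB4/TB link itself negotiated (lanes and per-direction speed) from the thunderbolt bus in sysfs, independent of what the ASMedia tool or the tunneled PCIe side claims. A rough sketch; not every router exposes every attribute, hence the silent skips.

# Rough sketch: dump negotiated speed/lanes for each Thunderbolt/USB4
# router the kernel enumerated. Attributes a given device does not
# expose are simply skipped.
from pathlib import Path

for dev in sorted(Path("/sys/bus/thunderbolt/devices").iterdir()):
    info = {}
    for attr in ("device_name", "rx_speed", "tx_speed", "rx_lanes", "tx_lanes"):
        p = dev / attr
        if p.is_file():
            info[attr] = p.read_text().strip()
    if info:
        print(dev.name, info)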

The problem there is that something takes the fake bandwidth the USB4 root reports literally, which is potentially the case here too.

There are a bunch of firmwares floating around you could try if you wanted to confirm that.

I have not noticed any difference between them, but I only checked performance on Win11 and whether hotplug works on Linux.

I already tried a few.
That also changed the name of the device, and Satechi themselves do not publish any, not even the original 10.05. firmware it shipped with (which connected only via USB2 to my TB3 host).

The December firmwares (12.04. & 12.18.) fixed that. I am currently on the 01.29. one in the hope that it is the TB4-certified one, but I am not sure.
It is questionable that Jeyi even pulled their firmwares for the ASM2464…

That could explain problems on Linux, but not how my Windows hosts are affected.
With a native USB4 connection and x4 working, I still see the virtual x1 Gen 1 links in between. The PCIe topology with TB3 is identical, so there is no reason the NVMe driver or anything else would change its behavior: it only ever sees PCIe and has no idea whether the device sits behind any TB/USB4 connection, let alone what wire speed that connection actually runs at.
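To make that concrete, here is a quick sketch (Linux, standard sysfs attributes) that walks from the NVMe endpoint up through its PCIe bridges and prints what each hop negotiated; the tunneled links typically show up as 2.5 GT/s x1 regardless of what the cable is actually doing. The BDF is a placeholder.

# Sketch: walk the PCIe parent chain of a device and print the
# negotiated link speed/width at every hop. The BDF below is a
# placeholder; take the real one from lspci.
import os
from pathlib import Path

def link(dev: Path) -> str:
    def rd(name: str) -> str:
        p = dev / name
        return p.read_text().strip() if p.is_file() else "?"
    return f"{rd('current_link_speed')} x{rd('current_link_width')}"

dev = Path(os.path.realpath("/sys/bus/pci/devices/0000:64:00.0"))  # placeholder
while dev.name.count(":") == 2:  # PCI functions look like dddd:bb:dd.f
    print(dev.name, "->", link(dev))
    dev = dev.parent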


With the TB4 hub in between, ASMtool shows TB3 mode, x2 lanes on the actual SSD, and x2 bandwidth.

Without the TB4 hub in between, ASMtool shows native USB4 mode, x4 lanes on the actual SSD, and x4 bandwidth.

On Windows everything works fine on my end, so I guess we don't have the exact same issue.

I did manage to try with a PCIe 4 SSD (PM9A1), and that did 3.8 GB/s read and 3.7 GB/s write, which AFAIK is pretty close to the theoretical limit of USB4. Unfortunately only on Windows.

That isn't an issue on Windows, but it would explain the dmesg entries about speed limitations.

Update: Holy moly, hotplugging works with the other SSD.

Update 2: I tried a bunch of other NVMe SSDs and they all worked; it's just the 970 Evo that doesn't. The 2 GB/s cap is still there, though.

Very curious, as the 970 Evo 500GB is the SSD I am using, because all my other (and Gen 4) SSDs are encrypted and inside my FW or deep in my desktop. I could get WD SN700 500GB SSDs more easily, but those are also just Gen 3.

There is another thread with someone having the issue with a 970 Evo Plus, and IIRC some people had issues with them in Realtek enclosures. That drive may just be cursed.

However, my problems are apparently different from yours; I never had anything to do with x2.

Have you found out what is limiting your bandwidth instead? Not getting over 2 GiB/s looks suspiciously like a limit of either 20 Gbit/s, PCIe x2 Gen 3, or x4 Gen 2.

This line in dmesg looks pretty suspicious, but I have not seen anything about x2 in lspci, boltctl, dmesg, or anywhere else I looked.

[ 2043.958225] pci 0000:64:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x1 link at 0000:00:04.1 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)

It does kinda look like an artificial limit, since I get 2.00 GB/s on a dd read, 2 on the dot, which is more than a PCIe 3 x2 link offers even before you factor in overhead and way less than PCIe 4 x2 has.
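For reference, the back-of-the-envelope ceilings for the candidates mentioned above (payload after line coding only, so real-world throughput is somewhat lower still):

# Back-of-the-envelope payload ceilings, after line coding but before
# TLP/link-layer overhead, for the limits discussed above.
candidates_gbit = {
    "PCIe Gen2 x4": 4 * 5.0 * 8 / 10,      # 5 GT/s per lane, 8b/10b
    "PCIe Gen3 x2": 2 * 8.0 * 128 / 130,   # 8 GT/s per lane, 128b/130b
    "PCIe Gen4 x2": 2 * 16.0 * 128 / 130,  # 16 GT/s per lane, 128b/130b
    "20 Gbit/s USB4/TB link": 20.0,        # raw, before tunneling overhead
}
for name, gbit in candidates_gbit.items():
    print(f"{name}: {gbit / 8:.2f} GB/s")

That prints roughly 2.00, 1.97, 3.94, and 2.50 GB/s respectively, which is why a hard 2 GB/s ceiling points at Gen2 x4 or Gen3 x2 rather than anything Gen4.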

Update: It gets even weirder. I just upgraded to kernel 6.8.2 and it read the whole drive (512GB PM9A1) at 2.4 GB/s, so I guess it is pretty much definitely not PCIe 3 x2 in my case.