USB4 and Thunderbolt on AMD

I don’t have any tb4 stuff either XD

Well darn.
I have a whole hoard of equipment waiting to be tested, but no AMD Phoenix host…

I have read about plenty of other people using TB4 equipment, although most will not confirm (or even know how to check) whether they have actual USB4 connections with all the expected features.

But given that the fallback from USB4 to TB3 is more work, I really would expect “USB4” to work in general. The problems are all around PD negotiation (between the PD controllers on the mainboard and in the device) and then specific USB4 features, like the 2nd DP tunnel being problematic, or what I am seeing with my ASM2464 on my FW 12th gen host: behind certain other hubs it only negotiates half the PCIe lanes for no apparent reason.

If FW hadn’t ruined my hopes of getting firmware support, I might have actually bought the AMD board as an upgrade…


All I have got is first and second gen (the one with DP 1.4) Intel TB3 controller based stuff, and that works quite well. The Lenovo dock is a bit fishy on the host port, but I am 70% sure that’s the PD issue that got mostly fixed in the beta BIOS; I should probably retest that. Using the non-power downstream TB port on the dock works just fine.

That is my thinking here too.

Well incidentally the usb4 enclosure I have ordered is ASM2464PD based so we’ll see soon.

The speed of BIOS updates definitely isn’t their strong suit, and I really hope they can improve that faster than they add more devices to the portfolio.

But as long as the hardware video decoding power issue isn’t fixed, I would wait to upgrade from an Intel platform as a Linux user who likes to watch videos.

My USB4 SSD enclosure arrived. The good news: the USB3 fallback works well and I get 10 Gbit on normal USB ports, and it shows up as USB4 in boltctl. The bad news: it doesn’t work in USB4 mode XD.
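(For reference, that check is just boltctl list on Linux; the way boltctl labels the device there is where the “USB4” above comes from, though the exact fields shown vary a bit between bolt versions.)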

Update: Weirdly enough it works when daisy-chained through my eGPU enclosure, but seems to be limited to 20 Gbit for some reason, topping out at 1.7 GB/s.

Not sure if that’s a framework issue or a me buying the cheapest one I could find on aliexpress issue though.

With my ASM2464 I am seeing that it limits itself to PCIe x2 (I only have a Gen 3 SSD in it, did not try with a Gen 4 SSD yet) if it is behind any TB3 or TB4 hub.
This happens both on the FW (12th gen, Windows USB4 drivers, reliably connects directly) and on my TB4 Maple Ridge host (TB legacy drivers, often only recognized as a USB3 device).

lspci with a few -vvv should show you the PCIe connection, although there are virtual PCIe devices in the middle that report bogus connection info (like PCIe x1 Gen 1), so be sure to look at the actual SSD. HWiNFO or CrystalDiskInfo shows it on Windows.
Meanwhile the TB/USB4 connection speed still stays at 40G.
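Concretely, something along these lines (the device address is just an example, use the one of your NVMe controller):

sudo lspci -vvv -s 04:00.0 | grep -E 'LnkCap|LnkSta'

Then compare the Width in LnkSta against LnkCap; newer lspci versions also print “(downgraded)” next to the reduced value.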

This weird PCIe lane limiting does not happen with my TB3 eGPU, so it is definitely an ASM2464 problem. Although, since the connection manager is supposed to configure those connections, it might also be involved in making that decision based on some misinformation the ASM provides.

Now that you mention it, it does say something about downgraded link speed in dmesg.

Both my Intel TB based eGPU enclosures definitely don’t have that problem; they also don’t have USB fallback though.

Man, on paper this thing looked pretty nice: you get the high speed of a TB/USB4 enclosure but can still use it all the way down to USB2. As it is, it’s pretty much just a quirky, bulky version of a regular USB 10 Gbit enclosure.

Hope that can be fixed with a firmware update or something eventually.

@Ray519

I just found a post on egpu.io of someone getting full bandwidth with the exact enclosure I have on a 7840U, so it may still be a Framework issue.

Like I said, I am seeing this with a Maple Ridge desktop as well.
And I also confirmed this limiting of lanes happens with my Dell XPS 15 with a Titan Ridge host controller under Linux (it lists as x2 (downgraded)).

I’d bet it depends a lot on the firmware of the ASM2464.
The 10.05. firmware failed to negotiate a TB connection and fell back to USB2 with that Dell host. But it is the only one Satechi provides and also seems to be the one that ships with the eGPU enclosures.

And with the FW, the only thing our notebooks do in firmware should be the negotiation of the 40G connection, which works reliably. Everything else is probably controlled explicitly by the connection manager / Windows USB4 driver.

I am preparing a Windows 11 To Go drive so I can check if it is a Linux issue. I’ll also compare the firmware version to the one on egpu.io, because other than the host being a Legion Go it is pretty much exactly the same setup.

Well damn, I get full bandwidth on Windows: 3.4 GB/s in CrystalDiskMark on Windows 11. May be a Linux issue then.

Think more in terms of combinations. I’d also bet this is more like DP connections not re-establishing correctly after sleep etc.: a whole chain of things that need to be reconfigured, where it is not any one part failing outright, just one part reacting slightly slower or differently than another part expects, so the whole thing misbehaves. And if you change the setup ever so slightly, it might fall back within tolerances and behave as expected.

Ultimately it go fast on windoze 11 and it not go at all on linux. Same setup, different OS.

I did also just try every firmware I could find and a whole bunch of settings for them, with no success. Everything works on Windows; nothing but USB3 and lower works on Linux.

Update: if I boot with the enclosure plugged in I get the SSD, but it’s capped at 2 GB/s. lspci reports 8 GT/s x4, which is PCIe 3 x4 if I got that right (it is a PCIe 3 SSD; I have no PCIe 4 ones that aren’t in use), and boltctl reports USB4 40 Gbit, yet it maxes out at 2 GB/s.
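(Side note: to rule out boltctl misreporting the link, the negotiated speed and lane count can also be read straight from sysfs, assuming a reasonably recent kernel that exposes these attributes:

grep . /sys/bus/thunderbolt/devices/*/rx_speed /sys/bus/thunderbolt/devices/*/rx_lanes 2>/dev/null

If I read the kernel docs right, a fully bonded 40G link should report a speed of 20.0 Gb/s together with 2 lanes.)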

Update2: The 2 GB/s cap could be related to this issue. The hot-plugging not working right is probably something else.

The linked thread is about the AMD GPU driver not doing things correctly. That would not affect NVMes.
But I have now confirmed the following:
It attaching as a USB3 device on my Maple Ridge TB4 host seems to be the host’s fault, as expected. When I disable ASPM on that mainboard again (as is the factory config), it works reliably with PCIe. It seems that not only does this mainboard bluescreen if you activate ASPM for an Nvidia GPU, it also messes with a bunch of devices behind TB/USB4. This Asus mainboard is just completely broken, and they should be sued for how they disable and block power saving options and try to hide it, as that leads to default settings that actively break the law.

Secondly, the ASMedia app shows the ASM2464 consistently in TB3 mode when it only comes up with 2 lanes. This is also true behind my USB4 hub, for the USB4 host and for the TB4 host.
Why it recognizes even a TB4 hub with the Windows USB4 drivers as TB3 might have something to do with the firmware on that hub, not sure. But either way, the forced limitation to x2 still seems problematic in and of itself (and I am guessing that is done by the ASM2464’s own firmware and might not have been the case for older firmwares).

The problem there is that something takes the fake bandwidth the USB4 root reports literally, which is potentially the case here too.

There are a bunch of firmwares floating around you could try if you wanted to confirm that.

I have not noticed any difference between them, but I only checked performance on Win11 and whether it would hotplug on Linux.

I already tried a few.
That also changed the name of the device, and Satechi themselves do not publish any, not even the original 10.05. firmware it shipped with (which connected only at USB2 to my TB3 host).

The December firmwares (12.04. & 12.18.) fixed that. I am currently on the 01.29. one, in the hope that it is the TB4 certified one, but I am not sure.
Questionable that JEYI even took down their firmwares for the ASM2464…

That could explain problems in Linux, but not how my Windows hosts are affected.
With a native USB4 connection and x4 working, I still see the virtual x1 Gen 1 connections in between. The PCIe topology with TB3 is identical. So there is no reason the NVMe driver or anything else would change its behavior, because it only sees PCIe and has no idea whether the device is behind any TB/USB4 connection, let alone what actual wire speed is used on that connection.


With a TB4 hub in between, ASMtool shows TB3 mode, x2 lanes on the actual SSD, and x2 bandwidth.

Without a TB4 hub in between, ASMtool shows native USB4 mode, x4 lanes on the actual SSD, and x4 bandwidth.
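If anyone wants to look at that topology themselves:

sudo lspci -tv

prints the whole tree including the virtual bridges; the tunneled NVMe controller hangs a few bridge levels below the host’s TB/USB4 ports, and only its own LnkSta/LnkCap are meaningful.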

On Windows everything works fine on my end, so I guess we don’t have the exact same issue.

Did manage to try with a PCIe 4 SSD (PM9A1) and that did 3.8 GB/s read and 3.7 GB/s write, which AFAIK is pretty close to the practical limit of USB4. Unfortunately only on Windows.

That isn’t an issue on Windows, but it would explain the dmesg entries about speed limitations.

Update: Holy moly, hotplugging works with the other SSD.

Update2: Tried a bunch of other NVMe SSDs; all worked, it’s just the 970 Evo that doesn’t. The 2 GB/s cap is still there though.

Very curious, as a 970 Evo 500 GB is the SSD I am using, because all my other (and Gen 4) SSDs are encrypted and inside my FW or deep in my desktop. I could get WD SN700 500 GB SSDs more easily, but those are also just Gen 3.

There is another thread with someone having the issue with a 970 Evo Plus, and IIRC some people had issues with them in Realtek enclosures. That thing may just be cursed.

However, my problems are apparently different from yours; I never had anything to do with x2.

Have you found what is limiting your bandwidth instead? Not getting over 2 GiB/s seems suspiciously like a limitation to either 20 Gbit/s, PCIe x2 Gen 3, or x4 Gen 2.
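Back-of-the-envelope, ignoring protocol overhead: PCIe Gen 3 x2 is 2 × 8 GT/s with 128b/130b encoding ≈ 15.75 Gbit/s ≈ 1.97 GB/s, and PCIe Gen 2 x4 is 4 × 5 GT/s with 8b/10b encoding = 16 Gbit/s = 2 GB/s; a 20 Gbit/s TB/USB4 link minus tunneling overhead lands in the same ballpark. All three candidates end up at almost exactly the 2 GB/s you are seeing, which is why throughput alone can’t tell them apart.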

This line in dmesg looks pretty suspicious, but I have not seen anything about x2 in lspci, boltctl, dmesg or anywhere else I looked.

[ 2043.958225] pci 0000:64:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x1 link at 0000:00:04.1 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)

It does kinda look like an artificial limit, since I get 2.00 GB/s on a dd read, 2 on the dot, which is more than a PCIe 3 x2 link offers even before you factor in overhead, and way less than PCIe 4 x2 has.
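(For reference, the read test is just a plain sequential dd along these lines — the device name is whatever the enclosure shows up as on your machine:

sudo dd if=/dev/nvme1n1 of=/dev/null bs=1M count=8192 iflag=direct status=progress

iflag=direct bypasses the page cache, so the number reflects the device and link rather than RAM.)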

Update: It gets even weirder. I just upgraded to kernel 6.8.2 and it just read the whole drive (512 GB PM9A1) at 2.4 GB/s, so I guess it is pretty much definitely not PCIe 3 x2 in my case.