In M1, each Thunderbolt port is a separate Thunderbolt controller. In Alpine Ridge, Titan Ridge, Maple Ridge, Ice Lake, and Tiger Lake, a Thunderbolt controller has two ports (usually two ports per side of a laptop in the case of Ice Lake and Tiger Lake). Each Thunderbolt controller has two DisplayPort In Adapters, so in Ice Lake and Tiger Lake you can only connect two displays per side.
With integrated Thunderbolt implementations such as M1, Ice Lake, and Tiger Lake, I believe there’s no division of PCIe bandwidth because the upstream of the Thunderbolt controller isn’t really PCIe. So you can software RAID-0 two Thunderbolt NVMe drives together and get the same forty-something Gbps of bandwidth from any two Thunderbolt ports, but you can’t get 22 Gbps from all ports at the same time (at least in the case of Ice Lake - I haven’t seen benchmarks for M1 or Tiger Lake).
In your case, instead of two Thunderbolt NVMe drives, you want to connect a dock and an eGPU. Like I said above, you won’t see a difference no matter which two ports you select. If you look at Device Manager in Windows and view by Connection Type (or use HWiNFO), you’ll see that each Thunderbolt port is a separate PCIe root port (but it’s not really PCIe, so you can ignore the ridiculous/inaccurate/meaningless PCIe 1.0 2.5 GT/s link speed).
You’ll definitely get better performance if the display you are outputting to is connected directly to the GPU in the eGPU enclosure. See the eGPU.io website for benchmarks and info.
All eGPUs are Thunderbolt 3 (Alpine Ridge or Titan Ridge), so their USB is handled by their own USB controllers, which use tunnelled PCIe. I don’t think I’ve seen tunnelled USB on Tiger Lake (using a Thunderbolt 4 dock/hub), so a screenshot of Device Manager or HWiNFO or USBTreeView would be interesting.
The dual HBR3 problem with Titan Ridge is similar to the dual HBR2 problem with Falcon Ridge (Thunderbolt 2). Alpine Ridge is easy since it can only do HBR2 and two HBR2 can’t exceed 40 Gbps. But if you connect with a 20 Gbps cable, then you have the Falcon Ridge problem.
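To put numbers on that (this is just arithmetic from the per-lane link rates, nothing official — the function name is mine): DisplayPort uses 8b/10b encoding, so a full 4-lane link carries 80% of its raw line rate as data.

```python
# Back-of-the-envelope for the dual-DisplayPort problem.
# HBR2 = 5.4 Gbps/lane, HBR3 = 8.1 Gbps/lane, 4 lanes per DP connection,
# 8b/10b encoding leaves 80% of the line rate for data.
def dp_link_gbps(gbps_per_lane, lanes=4):
    return gbps_per_lane * lanes * 8 / 10

hbr2 = dp_link_gbps(5.4)  # 17.28 Gbps per DisplayPort connection
hbr3 = dp_link_gbps(8.1)  # 25.92 Gbps per DisplayPort connection

print(round(2 * hbr2, 2))  # 34.56 -> two HBR2 fit in a 40 Gbps link
print(round(2 * hbr3, 2))  # 51.84 -> two HBR3 exceed 40 Gbps (the Titan Ridge problem)
print(2 * hbr2 > 20)       # True -> two HBR2 exceed a 20 Gbps cable (the Falcon Ridge problem)
```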
It’s a limit of PCIe data through Thunderbolt. 22 Gbps is one of the earliest numbers given by Intel in their marketing materials. That’s 2750 MB/s, a number often seen on the product pages of Thunderbolt NVMe storage devices, sometimes rounded up to 2800 MB/s. There have been benchmarks that reach 3000 MB/s (24 Gbps) and sometimes exceed that, so the max is something less than 25 Gbps. The difference between 22 and 25 Gbps may be a setting in the firmware of the Thunderbolt peripheral. For example, eGPU enclosures had unexpectedly low H2D or D2H (host-to-device or device-to-host) bandwidth as measured by CL!ng.app in macOS or by the AIDA64 GPGPU Benchmark or CUDA-Z in Windows. A firmware update of the eGPU enclosure would often correct this. But the USB4 spec mentions a tradeoff between bandwidth and latency in regards to PCIe and USB tunnelling, so maybe the eGPUs with the lower bandwidth were tuned more for latency?
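The Gbps ↔ MB/s conversions above are plain units arithmetic (decimal megabytes, as storage benchmarks usually report):

```python
# Convert a link bandwidth in Gbps to the MB/s figure a storage
# benchmark would show (decimal units: 1 MB = 1e6 bytes).
def gbps_to_mbps(gbps):
    return gbps * 1e9 / 8 / 1e6  # bits/s -> bytes/s -> MB/s

print(gbps_to_mbps(22))  # 2750.0 MB/s, the classic Intel marketing figure
print(gbps_to_mbps(24))  # 3000.0 MB/s, seen in the fastest benchmarks
print(gbps_to_mbps(25))  # 3125.0 MB/s, above anything measured so far
```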
A Thunderbolt 3 host controller (Alpine Ridge or Titan Ridge) has a max upstream link of PCIe 3.0 x4 (31.5 Gbps), same as Maple Ridge (Ice Lake and Tiger Lake don’t have this limit since they are integrated Thunderbolt controllers). The host controller is a PCIe device, so really you can connect it to any PCIe link, all the way down to PCIe 1.0 x1 (2 Gbps). Some PC laptops (and some Mac laptops) used PCIe 3.0 x2 (15.75 Gbps) or PCIe 2.0 x4 (16 Gbps), which saves power or lanes. Regardless of the PCIe limit, the Thunderbolt controller can still do 34.56 Gbps of DisplayPort if it has two DisplayPort connections to the GPU, so it’s not a complete waste.
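All of those PCIe link rates fall out of per-lane transfer rate × lane count × encoding efficiency; a quick sketch (function name is mine):

```python
# Effective PCIe link bandwidth in Gbps: transfer rate per lane (GT/s),
# lane count, and the encoding's data fraction (8b/10b for gens 1-2,
# 128b/130b for gen 3).
def pcie_gbps(gt_per_s, lanes, enc_num, enc_den):
    return gt_per_s * lanes * enc_num / enc_den

print(pcie_gbps(2.5, 1, 8, 10))               # 2.0   -> PCIe 1.0 x1
print(pcie_gbps(5.0, 4, 8, 10))               # 16.0  -> PCIe 2.0 x4
print(round(pcie_gbps(8.0, 2, 128, 130), 2))  # 15.75 -> PCIe 3.0 x2
print(round(pcie_gbps(8.0, 4, 128, 130), 1))  # 31.5  -> PCIe 3.0 x4, the TB3 upstream max
```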
There are two Fresco Logic FL1100 USB controllers, which each use a PCIe 2.0 x1 connection (5 GT/s * 8b/10b = 4 Gbps PCIe, or 5 Gbps * 8b/10b = 4 Gbps USB). The ASMedia ASM1142 USB controller uses a PCIe 3.0 x1 connection (8 GT/s * 128b/130b = 7.877 Gbps). The USB controller in the Alpine Ridge Thunderbolt controller is not limited by PCIe (10 Gbps * 128b/132b = 9.7 Gbps USB). Actually, with the FL1100, although both PCIe and USB are limited to 4 Gbps, the extra PCIe overhead compared to USB overhead makes it slower than USB 3.0 from the ASM1142 or Alpine Ridge. As for the difference between 5 and 10 Gbps USB: the 10 Gbps flavour is more than twice as fast because it uses a more efficient encoding.
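The encoding arithmetic behind those numbers, spelled out (variable names are mine):

```python
# Encoding overhead explains the figures above: 8b/10b keeps 80% of the
# line rate for data, 128b/130b keeps ~98.5%, 128b/132b keeps ~97%.
usb5    = 5 * 8 / 10      # USB 3.0: 5 Gbps, 8b/10b       -> 4.0 Gbps of data
usb10   = 10 * 128 / 132  # USB 3.1 Gen 2: 128b/132b      -> ~9.7 Gbps of data
pcie2x1 = 5 * 8 / 10      # FL1100 upstream, PCIe 2.0 x1  -> 4.0 Gbps (plus more protocol overhead than USB)
pcie3x1 = 8 * 128 / 130   # ASM1142 upstream, PCIe 3.0 x1 -> ~7.877 Gbps

print(round(usb10 / usb5, 2))  # 2.42 -> "more than twice as fast"
```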
Alpine Ridge and Titan Ridge in a Thunderbolt peripheral have 4 PCIe 3.0 downstream lanes. They can be divided by the manufacturer as x1x1x1x1, x2x1x1, x2x2, or x4. The TS3+ uses x1x1x1x1; the fourth PCIe lane from the Alpine Ridge Thunderbolt controller is used by an Intel Gigabit Ethernet controller. A PCIe device can connect using any link rate up to the max supported by the PCIe device and the root port or downstream bridge it is connected to. You can change the link rate while they are connected (using pciutils), so you can simulate a lower-speed device if you like. On a Mac Pro 2008, a PCIe 3.0 device installed in a PCIe 2.0 slot will boot at PCIe 1.0 link rate, but you can set it to PCIe 2.0 using pciutils.
Maybe there are situations where having more USB controllers is beneficial. It depends on the capabilities of the USB controller that would be used for USB tunnelling compared to the USB controller in the downstream Thunderbolt peripheral.
I think it’s just following the USB-C Alt Mode spec.
The Thunderbolt 3 device doesn’t do USB, so it shouldn’t present anything over USB except a USB 2.0 billboard device, in case the USB-C port can’t switch from USB mode to Thunderbolt alt mode. Windows will see the billboard device and display a message. The same may exist in USB-C to DisplayPort adapters and cables.
The only eGPUs that use the DisplayPort In Adapter of their Thunderbolt controller are the Blackmagic eGPUs and the Sonnet eGPU Breakaway Puck RX 5500 XT/5700. I don’t know if these work with Windows. I don’t know if the DisplayPort from the eGPU can be tunnelled upstream or only downstream (I guess it’s a decision made by Apple’s software Thunderbolt connection manager). I don’t know if the DisplayPort from the host Thunderbolt controller can be tunnelled downstream of the eGPU such that there are 4 tunnelled DisplayPort connections downstream of the eGPU.
I do know one DisplayPort tunnelling trick where the DisplayPort from one host is tunnelled to the DisplayPort of another host’s Thunderbolt controller. This is called Thunderbolt Target Display Mode which allows an old Thunderbolt iMac (1440p) to be a display for another Mac. So it seems you can have software that pokes some arbitrary DisplayPort paths even across domains. The iMac though has a switch to move its display connection from its GPU to a DisplayPort Out Adapter of its Thunderbolt controller so it can receive the tunnelled DisplayPort. I don’t know if the DisplayPort tunnelling path could extend deeper into the iMac’s Thunderbolt domain.