Thunderbolt has typically been a forerunner of more advanced functionality than USB, introduced by Intel and the special interest group of vendors building Thunderbolt products and hardware around it. More better, so everyone gets excited. TB has long offered more bandwidth and more features (such as PCIe and DisplayPort extension) than the plain old USB iterations of the same era, but it originally started off on its own proprietary connectors beyond what the USB (and FireWire) specs offered (see old Macs).
Eventually everyone who isn't Intel and their sycophants catches up with a more broadly available USB spec that isn't license- and royalty-encumbered the way TB is to Intel directly (thus it's over AMD's cold dead body that you'll see a TB port on an AMD board, unless a mobo vendor goes rogue).
Looking further back, USB (Universal Serial Bus) came about to overcome the bandwidth limits the traditional serial bus of the time could manage over any distance of more than a few inches, with a better, smaller connector than an old DB9. It's all just the evolution of one external bus or another for connecting things.
USB is also designed to be a shareable bus with a certain amount of oversubscription and buffering built in, where PCIe is not, but that means it can be overloaded, and when a buffer is overrun, performance suffers for everything sharing the bus. USB also relies on dedicated controllers and drivers (software) to present an interface to whatever hardware connects, so quality and capabilities vary; that's a layer of complication PCIe doesn't have, with its fixed 1:1 bandwidth and lanes built for short on-board runs.
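To make the oversubscription point concrete, here's a rough back-of-the-envelope sketch; the device mix and numbers are illustrative assumptions on my part, not measurements of any particular hub or controller:

```python
# Rough back-of-the-envelope: several devices sharing one upstream USB link
# vs. a device getting dedicated PCIe lanes. Figures are illustrative
# assumptions, not benchmarks.

def shared_bus_share(upstream_gbps: float, device_demands_gbps: list[float]) -> float:
    """Fraction of its demand each device gets on a shared (oversubscribed) bus."""
    total_demand = sum(device_demands_gbps)
    if total_demand <= upstream_gbps:
        return 1.0  # no contention, everyone gets what they ask for
    return upstream_gbps / total_demand  # contention: everyone degrades together

# Hypothetical: a 10 Gbps USB hub with an SSD, a capture card, and a NIC on it.
demands = [8.0, 4.0, 2.5]
print(f"Shared USB: each device gets ~{shared_bus_share(10.0, demands):.0%} of its demand")

# By contrast, a device on its own PCIe x4 Gen4 link (~64 Gbps raw) isn't
# fighting anyone for that bandwidth -- the lanes are point-to-point.
print("Dedicated PCIe x4 Gen4: ~64 Gbps raw, no sharing")
```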
The best bandwidth and latency a PC can get to anything has always been its local motherboard buses, PCI and later PCIe (an evolution of ISA and others before it), but distance and cabling limitations traditionally haven't allowed extension beyond a few inches, which is why the aforementioned buses like serial and USB were more suitable for external devices, sacrificing speed and bandwidth in the process.
High-speed buses have since evolved so that PCIe (and newer CXL) can be carried over longer distances at full lane speeds, though still shorter runs than USB/TB. The OCuLink branding is simply a standardized pluggable cable interface (male and female ends), and combined with suitably shielded wiring to prevent interference it can theoretically carry PCIe up to about 1 meter. The SFF connectors it uses began life on SATA 4-to-1 breakout cables, but proved useful for PCIe as well.
The ultimate evolution of pluggable buses (currently) is Nvidia's NVLink, where 800 Gb interfaces interconnect GPU clusters across racks and rows in data centers via NVLink switches, which at that scale is basically just low-latency Ethernet (or InfiniBand, to be specific).
… or close enough. At least that’s my 2 cents on PC history.
I'm more sad that manufacturers figured out x4 PCIe lane extension was possible (and wanted by consumers) over an external connector and cable, then got lazy and stopped there instead of extending to x8 or a full x16 over external cables, connectors, and docks, and just cranked out limited, disposable low-end eGPUs and mini-PCs. Gimme the full lanes, dammit, especially now that the GPUs are bigger than the PCs and likely to snap the socket off!
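For a sense of what's being left on the table, here's a quick bandwidth comparison by lane count. The per-lane rates are approximate post-encoding figures I'm assuming for illustration, not spec-lawyer exact values:

```python
# Approximate usable per-lane PCIe throughput (GB/s, after encoding overhead).
# Ballpark figures for illustration, not spec-exact values.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # PCIe gen -> GB/s per lane

def link_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for lanes in (4, 8, 16):
    print(f"PCIe Gen4 x{lanes}: ~{link_bandwidth(4, lanes):.1f} GB/s")
# An x4 eGPU link gets roughly a quarter of what the same card would see
# in a full x16 slot on the motherboard.
```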