Awesome, thanks for the reply! Just curious, how did you get Genshin Impact working on Linux?
Some hacky workaround someone published online; it basically tricks the anti-cheat. The creator wants the project to spread only by word of mouth so that the devs don’t pick up on it. However, it isn’t too hard to find the repository on Google.
I’m in the current batch to be shipped this month and intend to use Linux with eGPU. I’ll post my experience using an RX 5500XT as that’s what I’ve got on hand atm.
However I haven’t had any eGPU issues with other laptops running 11th Gen Intel.
Some updates on my end.
AMD eGPU support on the Framework seems a bit iffy. I currently have a problem where the laptop won’t POST with the eGPU connected.
Meanwhile on the Linux side… AMD eGPUs definitely work more seamlessly on Wayland than Xorg, though I was able to get both to work for the most part.
Wayland first. Recognizing the eGPU itself was plug-and-play on EndeavourOS (GNOME 40, Linux 5.14), though it defaulted to all displays being enabled. On my main setup I have two displays connected to the eGPU plus the laptop display itself. On Windows this works pretty seamlessly. On Linux, however, if I don’t disable either the laptop screen or both of the displays on the eGPU, the whole desktop gets laggy. Afterwards, I tried installing gnome-egpu to see if that would change anything, but it didn’t.
Meanwhile, on Xorg: a rough experience, to say the least. When the setup works, it works great. Once again, I had similar problems when I had all displays enabled; this time, however, an external display would occasionally stop working at random. To remedy this, I had to install egpu-switcher, which let me manually select which GPU to use as primary: internal display → iGPU, external displays → eGPU. Unfortunately, this removes any sort of hotplug support or easy switching between the two (a restart is required).
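For anyone replicating this: as far as I understand, egpu-switcher basically generates an Xorg config that pins the eGPU. A minimal hand-written equivalent is a `Device` section like the one below. The BusID is just an example; note that Xorg expects the bus address in decimal, while lspci prints it in hex:

```
Section "Device"
    Identifier "eGPU"
    Driver     "amdgpu"
    # Example address only -- find yours with lspci and convert hex to decimal
    BusID      "PCI:10:0:0"
EndSection
```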
Speaking of hotplug support: it sometimes works on Wayland, and it only works on Xorg when using just the internal display with the iGPU as primary.
That’s disappointing to say the least; ideally that’s the setup I would run with.
I’ve had reasonably decent results with my AMD eGPU, apart from random, unexplained freezes when running things over Proton (or even Minecraft). But it’s been more or less plug-and-play with an X11 Ubuntu 20.04 LTS setup.
I tried to get the 7900 XTX working on AMD without much luck. The trick of simply creating a udev rule, like with an Nvidia GPU, didn’t work. I don’t know if it was a conflict with the internal dedicated GPU, but yeah, it didn’t work.
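For context, the udev trick usually passed around for Nvidia eGPUs is auto-authorizing the Thunderbolt device when it appears. Something like the rule below (the filename is my own choice, and be aware this bypasses the Thunderbolt security prompt, so it’s a convenience hack):

```
# /etc/udev/rules.d/99-tb-authorize.rules (example filename)
ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"
```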
I successfully managed to use an AMD RX 580 with the Framework 13 i7-1280P using an EXP GDC TH3P4G3 (buy link on egpu.io).
I’m running Pop!_OS 22.04 LTS under Wayland.
In my case, I had to force Wayland to use the eGPU as primary using all-ways-gpu otherwise apps would still start on Intel graphics. Since my setup has 2 monitors (1440p 165 Hz and 1080p 60 Hz) and the dock is meant to be fixed at the desk, I didn’t mind making this switch.
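In case it helps anyone: on GNOME, I believe tools like all-ways-gpu and gnome-egpu achieve this by tagging the eGPU’s DRM device so that mutter treats it as the primary GPU, via a udev rule along these lines (the card number varies per machine, so treat the path as an example):

```
# e.g. /etc/udev/rules.d/61-mutter-primary-gpu.rules
ENV{DEVNAME}=="/dev/dri/card1", TAG+="mutter-device-preferred-primary"
```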
It also seems that for my dock/GPU combo I had to force the GPU to use PCIe 3.0 speeds instead of PCIe 1.1. Don’t ask me why that’s happening; I have no idea. I used the guide from all-ways-gpu here.
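For reference, the way the guide forces the link speed is an amdgpu module option in a modprobe.d file. The mask below is the gen-3 value I’ve seen cited from the amdgpu CAIL defines; double-check it against the guide and your kernel version before using it:

```
# /etc/modprobe.d/amdgpu.conf (example path)
# 0x00040000 = PCIe gen-3 support mask (verify against your amdgpu version)
options amdgpu pcie_gen_cap=0x00040000
```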
Only problem I stumbled upon is that the laptop doesn’t want to go to sleep with the dock connected (even without a GPU inserted in the dock). Made a post about it here.
EDIT: I managed to fix the sleep problem.
Nice to hear the TH3P4G3 works with AMD and Framework.
I recently bought it with an AMD 6600 to use with Linux Mint and the Framework 13 1340P.
Somehow I can only use it at PCIe 1.1 speeds (2.5 GT/s) instead of the 8 GT/s you’d expect with TB4 and PCIe 3.0 x4.
I couldn’t manage to fix this problem yet. I even installed Pop!_OS like @TermoZour. Maybe he can help.
Did try the above link with the modprobe file but still:
3:04.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge DD 2018] (rev 06)
The speeds advertised by tools like lspci and shown in the kernel logs are confusing at best and wrong at worst.
There is a fix in the GPU driver in kernel 6.8 that helps the performance (but not the reporting to those tools and logs). Please see the commit message for more details.
Is there something similar for NVMe? Pretty sure that has a similar problem with USB4 enclosures.
The problem in the GPU driver was that it artificially limited GPU performance based on an evaluation of every link speed in the topology.
The idea was: never run faster than the slowest link. That generally works for desktops, but it’s wrong for virtual links like USB4 that change speed dynamically while their register values stay fixed.
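That clamping heuristic can be sketched as a min over the reported per-link speeds. On a real system you’d read them from sysfs (e.g. `/sys/bus/pci/devices/<addr>/current_link_speed`); the values below are made up:

```shell
# Pre-6.8 heuristic, roughly: clamp the GPU to the slowest link in the chain.
# The bug: a USB4/Thunderbolt virtual link reports a fixed low register value
# even though the real tunnel retrains dynamically, so the minimum is bogus.
slowest_link() {
    min=""
    for speed in "$@"; do
        # numeric compare in GT/s (awk handles the fractional values)
        if [ -z "$min" ] || awk "BEGIN{exit !($speed < $min)}"; then
            min="$speed"
        fi
    done
    echo "$min"
}

# Example chain: root port 16 GT/s, virtual TB link reporting 2.5 GT/s,
# GPU-side link 8 GT/s -> the GPU gets clamped to 2.5 GT/s.
slowest_link 16 2.5 8   # prints 2.5
```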
I am not aware of any other drivers doing this but you can check if the nvme stack does this too.
I am pretty sure something is, since with the same enclosure I get 3.8 GB/s reads on Windows and only about 2.8 GB/s on Linux (and 2.0 a few kernel versions ago). There are also some lines in dmesg about bandwidth being limited.
The dmesg line should be a red herring.
Can you raise a bug on the kernel bugzilla for the nvme issue? I think it will need nvme, PCI and USB4 CM maintainer comments for where in the stack they think the issue really is.
Not entirely sure how to do that; may look into it later.
I posted the full dmesg here: AMD Framework and NVMe SSD Enclosure Compatibility Investigation - #38 by Adrian_Joachim
There is also the weird bit about one specific SSD (a 970 Evo) not hotplugging, but only on Linux; I am pretty sure those are separate issues, though.
You can file both of them here:
https://bugzilla.kernel.org/
I think you can mark them against NVME initially, but they might need to loop in people from PCI and USB4 after initial triage from the NVME guys.
Turns out I just tested wrong; apparently dd isn’t enough to saturate the SSD. I used kdiskmark, which looks like a pretty good analog to CrystalDiskMark, and that one does reach full bandwidth on Linux. False alarm.
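For anyone hitting the same false alarm: dd is a single sequential stream at queue depth 1, so it often can’t saturate a fast NVMe link, while kdiskmark (like CrystalDiskMark) uses deeper queues. A rough fio equivalent of the sequential-read test, guarded and using a scratch file rather than a raw device (the path is an example, and --direct=1 may fail on tmpfs):

```shell
# dd = QD1 sequential copy; fio with iodepth=32 is closer to kdiskmark's test.
if command -v fio >/dev/null 2>&1; then
    fio --name=seqread --filename=/tmp/fio.testfile --size=1G \
        --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 || true
    rm -f /tmp/fio.testfile
fi
```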
Anyone else also experiencing weird complete freezes on hot unplug?
Found this similar issue: same behavior, but only on hot-unplug.
You mean on Linux, I presume? Generally, before unplugging your eGPU you should first make sure no processes are running on it (on Nvidia cards you can use the nvidia-smi command; not sure what AMD’s counterpart is), otherwise it will crash badly. With no processes running it should unplug cleanly; if it doesn’t, please post some logs and more detailed info here and on egpu.io.
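To make the “check first” step concrete, here is what I run before unplugging. The /dev/dri node numbers are examples (they vary per machine), and the commands are guarded so they no-op if the tools aren’t installed:

```shell
# List processes that still have the eGPU's DRM nodes open.
# Empty output = safe to unplug. Adjust card1/renderD129 to your eGPU.
if command -v fuser >/dev/null 2>&1; then
    fuser -v /dev/dri/card1 /dev/dri/renderD129 2>/dev/null || true
fi

# NVIDIA-specific: list compute processes on the card.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-compute-apps=pid,process_name --format=csv || true
fi
```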
Yes, I am on Linux; in fact, I am the author of the last comment on the issue I linked.
I updated it with what seems to be a reproducible way to trigger the bug.
I couldn’t find a similar tool for AMD, though (there’s rocm-smi, but it doesn’t seem to have that feature).