You said you’re using Arch; I don’t see a reason why that would not work under Arch. No need for a live distro.
If I can boot into it, yes.
Sounds like I can, w/o accelerated graphics (which is fine - boot to the console.)
Everyone is welcome to use what works best for them. However, our officially supported distros remain the baseline when seeking ticketed support. For this hardware release, the supported distros are defined in the original post.
Delighted to see the excitement here, and we’re looking forward to everyone in Batch 1 getting their laptops soon.
The UEFI updater would be your choice, then. Of course, the track record on parallel releases of firmware updates hasn’t been the best lately, so I’m not sure I’d count on it (the latest firmware for Intel 11th Gen was released about three weeks ago via a Windows installer, four weeks if you count the beta, and so far that’s been the only available version).
Yes, a UEFI installation would be my preferred path.
5 posts were merged into an existing topic: AMD Batches
Batch conversations: please move them to the batch thread linked above. Thanks!
Am I correct in saying that the shipped (regressed) 3.0.2 BIOS will work without the freezing on the F39 Beta? And the hope is that once the rev’d BIOS is released, F38 will work as well?
Correct. 3.0.2 works out of the box with F39 Beta. 3.0.3 works out of the box with F38 or F39 (among other distros).
Vulkan allows dynamic allocation via GTT but normal ROCm implementations do not. There is a custom PyTorch allocator someone whipped up but I didn’t try it. Note, GPU perf can be worse than CPU performance. I’ve done a fair amount of poking around with a 7940HS. Here’s a summary of results: AMD GPUs
Overall, IMO the performance even when using the GPU isn’t anything to get excited about since the memory bandwidth is so bottlenecked.
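In case anyone wants to reproduce that kind of comparison on their own machine, here is a minimal timing sketch, assuming a ROCm build of PyTorch where the iGPU is exposed through the usual torch.cuda API (sizes and iteration counts are just illustrative, not taken from the results linked above):

```python
# Rough CPU vs iGPU sanity benchmark. Assumes a ROCm build of PyTorch where
# the GPU shows up via torch.cuda; absolute numbers depend heavily on UMA
# size, memory bandwidth, and kernel/ROCm versions.
import time
import torch

def time_matmul(device: str, n: int = 4096, iters: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    a @ b  # warm-up so allocation/kernel launch overhead isn't measured
    if device != "cpu":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    if device != "cpu":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"cpu: {time_matmul('cpu'):.3f} s/iter")
if torch.cuda.is_available():  # ROCm builds report the iGPU here
    print(f"gpu: {time_matmul('cuda'):.3f} s/iter")
```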
Please keep this one bookmarked - Batch 1 folks, we need your participation if you’re interested in lending a hand with testing. Went smoothly via LVFS for me.
Moved to AMD Ryzen 7040 Series BIOS 3.03 and Driver Bundle Beta
Good info, thanks. My use case is more focused on development than inference though. Particularly, I’m hoping the iGPU can run a few test “sanity” steps more quickly than the CPU, to enable faster test iterations during the development cycle. Then I’ll offload the model to another machine for the actual training run. Any experience with this chipset for that use case?
You might get a slight speedup over the CPU from the GPU’s efficiency, or you might lose performance since it needs to transfer data across system memory, so I guess you’ll just have to test on your own workload. Personally, I use a dedicated 4090/3090 dual-GPU workstation for my local dev work; I don’t think the AMD APU is particularly suited for ML work unless the use case is lightweight enough that CPU vs GPU doesn’t matter so much.
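For what it’s worth, a quick way to test exactly that on your own workload is to time a single forward/backward/optimizer “sanity” step on both devices. This is only a sketch under the assumption of a ROCm build of PyTorch; some users also report needing the HSA_OVERRIDE_GFX_VERSION environment variable set for Phoenix iGPUs, which you’d want to verify for your own setup:

```python
# Time one training step on CPU vs the iGPU. Model shape, batch size, and
# optimizer are placeholders; swap in your own sanity-check step.
import time
import torch
import torch.nn as nn

def one_step(device: str) -> float:
    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
    opt = torch.optim.AdamW(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    # Warm-up step so allocation/compilation isn't measured.
    loss_fn(model(x), y).backward()
    opt.step()
    opt.zero_grad()
    if device != "cpu":
        torch.cuda.synchronize()
    start = time.perf_counter()
    loss_fn(model(x), y).backward()
    opt.step()
    if device != "cpu":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print("cpu :", one_step("cpu"))
if torch.cuda.is_available():
    print("igpu:", one_step("cuda"))
```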
Stuff like the GTT allocation caveats and that custom PyTorch allocator is exactly what I meant.
Good to know, thanks. I didn’t consider the backplane bottleneck potential. I’ve got that workstation setup too; I was just hoping for something more portable that was still reasonably fast for dev. Guess that’s what ssh is for.
I recently spent some time playing with Stable Diffusion on a ThinkPad P14s, equipped with the AMD Ryzen 7 PRO 7840U and running Arch, and the GPU performance was significantly better than the CPU’s. Generating 512x512 images took around 10 seconds per iteration on the CPU, whereas the GPU was doing more than an iteration per second, and generating multiple images per batch improved the per-image performance even further. True, nothing to get too excited about, but at least usable.
The memory on the P14s was the faster LPDDR5X-6400, so the Framework’s DDR5-5600 might not perform quite as well. But, personally, I feel optimistic.
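If someone wants to repeat that rough 512x512 measurement on the Framework, a sketch along these lines should do it, assuming a ROCm build of PyTorch and the Hugging Face diffusers package (the model ID, dtype, and step count here are my own placeholders, not what was used above):

```python
# Time Stable Diffusion image generation on the iGPU via ROCm-PyTorch.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # the iGPU, exposed through ROCm's torch.cuda API

steps = 20
start = time.perf_counter()
image = pipe("a photo of a laptop on a desk",
             height=512, width=512, num_inference_steps=steps).images[0]
elapsed = time.perf_counter() - start
print(f"{elapsed / steps:.2f} s per iteration ({steps} steps, 512x512)")
image.save("out.png")
```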
I believe the relevant boot parameter is amdgpu.gttsize (specified in binary megabytes, defaulting to -1 for RAM/2): Module Parameters — The Linux Kernel documentation. Unfortunately, amdgpu.gttsize only sets the upper limit, so it can be used to reduce how much memory the iGPU is allowed to use, not to increase it. And, as @lhl points out, ROCm doesn’t currently support GTT anyway.
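A quick way to confirm which GTT limit the kernel actually applied is to read the amdgpu sysfs counters. A small sketch, assuming the usual mem_info_gtt_total node is present (the card index can vary between systems):

```python
# Print the total GTT size the amdgpu driver reports, per card (bytes -> GiB).
from pathlib import Path

for node in Path("/sys/class/drm").glob("card*/device/mem_info_gtt_total"):
    gib = int(node.read_text()) / 2**30
    print(f"{node.parent.parent.name}: GTT total {gib:.1f} GiB")
```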
However - and this seems to be supported by @lhl’s results - having a larger UMA frame buffer size set in the BIOS would both make more memory available to ROCm and improve performance. I didn’t find any indication that the 7840U’s architecture shouldn’t be capable of using 16GB of UMA or more. I’m really hoping Framework’s BIOS will allow us to go beyond the 8GB limit when setting the UMA size. The BIOS on the P14s currently doesn’t, but my understanding is that there are ones that already do.
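And after changing the UMA size in the BIOS, one way to confirm how much memory ROCm-PyTorch actually sees is something like this (again assuming a ROCm build of PyTorch; the property names are the standard torch ones):

```python
# Report the device memory visible to a ROCm build of PyTorch.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 2**30:.1f} GiB visible")
```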
Actually - that would be my main question to @Matt_Hartley: What options for setting the UMA frame buffer size does the BIOS currently provide?
As we’re going to see multiple BIOS updates, as described previously, I don’t know at this time. This would be a question for once we’re in a good working state with Linux support; then we’d pose it to the engineering team.
Note that 7040 Series is still a very new platform and AMD’s open source teams will continue to actively develop and improve Linux kernel driver support beyond this specific firmware fix. We’ll keep updating our guides to point you to recommended configurations, and we’ve created a Community wiki post (here) with an overview of the latest status.
I’m running Arch and everything is working fine in normal use cases, which is great. I do have two display-related issues though.
- Sometimes when I plug in or unplug an extra monitor, the screen starts flickering white or going completely white. I’m currently using KDE Plasma on X11, but I also had the issue on the Wayland session.
- If I’m trying to power two external monitors, sometimes that works, but most of the time I can only get the one plugged directly into the computer over the HDMI port to work. The other is plugged in through a Thunderbolt dock, which also has my keyboard/mouse on it; those work, but the display never shows up. I have also tried a separate USB-C to HDMI dongle, and that doesn’t work either. Sometimes, though, if you unplug and plug it in again enough times, it shows up. Do I just need to get a second HDMI/DP port on my computer? If so, that’s fine.
I’m willing to try Fedora and a different DE to see if that fixes it but was curious if anyone else had this issue or has suggestions.
Would be interested in this as a comparison. GNOME is preferable, as that is what we test against at this time. I know, I know, it should not matter - I’ve found that, historically, it can matter. There are differences.
Tested GNOME on a Fedora live ISO and everything worked fine right away off a single Thunderbolt cable into a dock. Now I’ve rebooted back into Arch and it seems to be working too. There were some updates in Arch today; maybe those fixed it, or maybe it’s a fluke. Going to see if the flickering/white screen issue is fixed too; it hasn’t happened yet.
Appreciate the update. Docks are the bane of my existence, as 99.99% of the time they relate to the symptoms you’ve described. More often than not, I see docks not playing nicely, but not consistently badly. Brand-name docks “should” be fine, but I have seen experiences where it can go sideways. Worth watching and noting the logs for hints that the dock contributes to any issues.
Myself, I always, always recommend running video through expansion cards (HDMI and DP), with docks sticking to USB-A/C duties exclusively. But I know that’s not ideal. For my USB-related needs, Anker has always been good to me.