Yes, a UEFI installation would be my preferred path.
5 posts were merged into an existing topic: AMD Batches
Batch conversations: please move them to the batch thread linked above. Thanks!
Am I correct in saying that, with the shipped 3.02 BIOS, the F39 Beta will work without the freezing? And the hope is that once the rev’d BIOS is released, F38 will work as well?
Correct. 3.02 works out of the box with F39 Beta. 3.03 works out of the box with F38 or F39 (among other distros).
Vulkan allows dynamic allocation via GTT but normal ROCm implementations do not. There is a custom PyTorch allocator someone whipped up, but I didn’t try it. Note: GPU perf can be worse than CPU performance. I’ve done a fair amount of poking around with a 7940HS. Here’s a summary of results: AMD GPUs | llm-tracker
Overall, IMO the performance even when using the GPU isn’t anything to get excited about since the memory bandwidth is so bottlenecked.
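For anyone who wants to see how the split looks on their own unit, the amdgpu driver exposes the relevant totals in sysfs; a minimal check, assuming the iGPU shows up as card0:

```
# How much memory the amdgpu driver has to work with on this APU (values in bytes)
cat /sys/class/drm/card0/device/mem_info_vram_total   # the UMA carve-out set in BIOS, reported as "VRAM"
cat /sys/class/drm/card0/device/mem_info_gtt_total    # the GTT pool (system RAM the GPU can map)
```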
Please keep this one bookmarked - batch 1 folks, we need your participation if you’re interested in lending a hand with testing. It went smoothly via LVFS for me.
Moved to AMD Ryzen 7040 Series BIOS 3.03 and Driver Bundle Beta
Good info, thanks. My use case is more focused on development than inference though. Particularly, I’m hoping the iGPU can run a few test “sanity” steps more quickly than the CPU, to enable faster test iterations during the development cycle. Then I’ll offload the model to another machine for the actual training run. Any experience with this chipset for that use case?
You might get a slight speedup over CPU from the GPU’s efficiency, or you might lose performance since it needs to transfer data across system memory, so I guess you’ll just have to test your own workload. Personally, I use a dedicated 4090/3090 dual-GPU workstation for my local dev work - I don’t think the AMD APU is particularly suited for ML work unless the use case is lightweight enough that CPU vs GPU doesn’t matter much.
That AMD GPUs summary is exactly the sort of thing I meant.
Good to know, thanks. I hadn’t considered the potential backplane bottleneck. I’ve got that workstation setup too; I was just hoping for something more portable that was still reasonably fast for dev. Guess that’s what ssh is for.
I recently spent some time playing with Stable Diffusion on a ThinkPad P14s, equipped with the AMD Ryzen 7 PRO 7840U and running Arch, and the GPU performance was significantly better than the CPU one. Generating 512x512 images took around 10 seconds per iteration on the CPU, whereas the GPU was doing more than an iteration per second, and generating multiple images per batch would get even better performance per image. True, nothing to get too excited about, but, at least, usable.
The memory on the P14s was the faster LPDDR5X-6400, so the Framework’s DDR5-5600 might not perform quite so well. But, personally, I feel optimistic.
I believe the relevant boot parameter is amdgpu.gttsize (specified in binary megabytes, defaulting to -1 for RAM/2): Module Parameters — The Linux Kernel documentation.
Unfortunately, amdgpu.gttsize only sets the upper limit, so it can be used to reduce how much memory the iGPU is allowed to use, not to increase it. And, as @lhl points out, ROCm doesn’t currently support GTT anyway.
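For completeness, a sketch of how one would pass the parameter on a GRUB-based distro (the 8192 MiB value is purely illustrative, and per the above it can only lower the cap, not raise it):

```
# Edit /etc/default/grub and append the parameter to the default command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.gttsize=8192"
# then regenerate the GRUB config and reboot:
sudo update-grub          # Debian/Ubuntu/Mint
# (on Arch: sudo grub-mkconfig -o /boot/grub/grub.cfg)
```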
However - and this seems to be supported by @lhl’s results - having a larger UMA frame buffer size set in the BIOS would both make more memory available to ROCm and improve performance. I didn’t find any indications that the 7840U’s architecture should not be capable of using 16GB of UMA or more. I am really hoping Framework’s BIOS will allow us to go beyond the 8GB limit when setting the UMA size. The BIOS on the P14s currently doesn’t, but my understanding is that there are ones that already do.
Actually - that would be my main question to @Matt_Hartley: What options for setting the UMA frame buffer size does the BIOS currently provide?
As described previously, we’re going to see multiple BIOS updates, so I don’t know at this time. This would be a question for once we’re in a good working state with Linux support; then we’d pose it to the engineering team.
Note that 7040 Series is still a very new platform and AMD’s open source teams will continue to actively develop and improve Linux kernel driver support beyond this specific firmware fix. We’ll keep updating our guides to point you to recommended configurations, and we’ve created a Community wiki post (here) with an overview of the latest status.
I’m running Arch and everything is working fine in normal use cases which is great. I do have two display related issues though.
I’m willing to try Fedora and a different DE to see if that fixes it but was curious if anyone else had this issue or has suggestions.
Would be interested in this as a comparison point. GNOME is preferable, as that is what we test against at this time. I know, I know, it should not matter - but I’ve found historically that it can matter. There are differences.
Tested GNOME on Fedora from a live ISO and everything worked fine right away off a single Thunderbolt cable into a dock. Now I’ve rebooted back into Arch and it seems to be working too. There were some updates in Arch today; maybe those fixed it, or maybe it’s a fluke. Going to see if the flickering or white-screen issue is fixed too; it hasn’t happened yet.
Appreciate the update. Docks are the bane of my existence, as 99.99% of the time they relate to the symptoms you’ve described. More often than not I see docks not playing nicely, but not consistently badly. Brand-name docks “should” be fine, but I have seen experiences where it can go sideways. Worth watching and noting the logs for hints that the dock contributes to any issues.
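For anyone watching for those hints, a simple way to follow the logs while re-plugging the dock (the second command assumes a systemd-based distro):

```
# Follow kernel messages live; look for amdgpu/thunderbolt/typec/DP errors around plug events
sudo dmesg --follow
# or follow the kernel ring buffer via the journal
journalctl -k -f
```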
Myself, I always, always recommend video using expansion cards (HDMI and DP) with docks sticking to USB-A/C duties exclusively. But I know it’s not ideal. For my USB related needs, Anker has always been good to me.
Got my AMD 13 an hour or so ago. I installed Mint 21.2 Edge and so far it’s worked great out of the box.
Will keep playing around and post if/when I see any major issues come up.
Edit:
The fingerprint reader does not work out of the box, which seems to be a known issue: the out-of-the-box firmware does not support it, and fwupd fails to even start. fwupd is running version 1.7.9.
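In case it helps others hitting the same wall, a few quick checks of what fwupd sees (output will obviously vary by distro and firmware state):

```
fwupdmgr --version        # confirms the client/daemon versions (1.7.9 in this case)
systemctl status fwupd    # shows why the daemon fails to start
fwupdmgr get-devices      # once the daemon runs, the fingerprint sensor should be listed here
```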
Edit 2:
Changed the kernel to 6.1.0-1023-oem as recommended. This does seem to make the system feel a bit snappier. Ran Geekbench for a baseline; link below. The numbers look great from what I see.
Also, I was able to get fwupd running now as well. I’ll see if I can get the fingerprint reader working next when I get some time.
Benchmark results for a Framework Laptop 13 (AMD Ryzen 7040 Series) with an AMD Ryzen 5 7640U processor.
Edit 3:
Got the fingerprint reader to work after a few steps. Looks like there is new firmware for the fingerprint sensor itself. First I needed to follow the “If the device is not detected” part of this Framework guide.
# IMPORTANT - Before trying this, please try these guides FIRST.
This guide is for 13th Gen Intel Core and AMD Ryzen 7040 Series
https://knowledgebase.frame.work/en_us/search?q=Fingerprint+troubleshooting
If you are willing to try this now and accept that this may not work and you may end up waiting for the LVFS update anyway, follow below step by step.
## Install fwupd (May already be installed)
Ubuntu LTS
```
sudo apt update && sudo apt install fwupd -y
```
Fedora
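The quoted guide is truncated at this point; assuming the Fedora step simply mirrors the Ubuntu one above, it would be the dnf equivalent:

```
# Assumed Fedora counterpart of the apt command above
sudo dnf install fwupd -y
```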
Then I installed libpam-fprintd with:
sudo apt install libpam-fprintd
I could then run fprintd-enroll to get my right finger added.
Finished up by running sudo pam-auth-update
Did a reboot and could now log in using my fingerprint.
Awesome. You will absolutely want to make some kernel adjustments on AMD. While unofficial, this should still apply as it’s still based on Ubuntu 22.04.
NOTE: This is NOT official and is done as I need folks on Mint using the correct kernel. 6.2 is not going to be amazing. The recommended, tested, supported OEM C kernel will be far, far better.
For ticketed support and official vetting, we ask users to run Ubuntu 22.04.3 over Mint.
If you need to use Mint, and you understand that I’m not providing official support for this, here is your best experience below.
sudo apt install linux-oem-22.04c
Reboot
Back up GRUB
sudo cp /etc/default/grub /etc/default/grub.bak
sudo nano /etc/default/grub
Right now, the AMD target kernel we want folks on is 6.1.0-1023-oem, but this may evolve to 6.1.0-102x-oem in the future. We use a Zenity alert tool on Ubuntu for this; on Mint you will need to track this manually or customize it yourself.
Again, this mini-unofficial guide is for Linux Mint, not other distros.
Change
GRUB_DEFAULT="0"
into
GRUB_DEFAULT="Advanced options for Linux Mint 21.2 Cinnamon>Linux Mint 21.2 Cinnamon, with Linux 6.1.0-1023-oem"
sudo update-grub
Reboot
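After that final reboot, a quick sanity check that the pin actually took (the version string assumes the current 6.1.0-1023-oem target mentioned above):

```
uname -r                              # should report 6.1.0-1023-oem
grep GRUB_DEFAULT /etc/default/grub   # confirms the pinned menu entry is still set
```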
I’m expecting my batch 1 AMD to arrive tomorrow. Question about Ubuntu: would Ubuntu 23.10 work, or should I stick to 22.04.3?
How to install Ubuntu 22.04 LTS Linux on a Framework Laptop 13
As indicated in the batch email with the specific distro recommendations, you will have the best experience following 22.04.3 and using step 4 of the provided guide.