Fedora already defaults to PipeWire; for distributions that don’t, it may be worthwhile to migrate to it.
My observations were a reduction in average draw on battery from over 7 W to under 5 W for my standard daily workloads. On my FW13 7640U with the 55 Wh battery, this equates to an increase in runtime from ~8 hrs to over 11 hrs.
The Phoenix (Zen 4) architecture still has upstream optimisations in progress or trickling down to the latest kernel builds, which again leaves the opportunity for longer runtimes as these become part of distributions’ standard kernels.
The MediaTek mt7921 wifi driver also awaits optimisation, as large downloads can draw impressively high power.
Initially, using powertop to calibrate and auto-tune seemed advisable. However, after testing the above settings with and without auto-tune, there seems to be little advantage.
Likewise, my initial testing showed TLP reduced draw more than PPD. However, with the combination of settings above, this difference dissipated. YMMV.
Yes, my default is to turn off Bluetooth when not in use. I take powertop’s readings with a grain of salt. To understand what something is really using, I’ll typically enable it for a while (my test target has been 10% battery usage, to smooth out fluctuations), then disable it for the same period, and then calculate and compare the average power draw. I’ve found the delta to be more accurate than the wattage reported in powertop.
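The delta method above is just arithmetic on battery percentage and elapsed time. A minimal sketch (the 55 Wh capacity matches the FW13 battery mentioned earlier; the minute figures are hypothetical readings):

```python
def avg_draw_watts(capacity_wh, pct_drop, minutes):
    """Average power draw implied by a battery-percentage drop over a time window."""
    energy_wh = capacity_wh * pct_drop / 100.0  # energy consumed, in Wh
    return energy_wh / (minutes / 60.0)         # Wh divided by hours = W

# Run the same workload with the feature enabled, then disabled,
# each time until ~10% of the battery is used, and note the elapsed minutes.
with_feature = avg_draw_watts(55, 10, 66)     # 10% used in 66 min -> 5.0 W
without_feature = avg_draw_watts(55, 10, 79)  # 10% used in 79 min -> ~4.2 W
delta = with_feature - without_feature        # extra draw attributable to the feature
```

The longer each window, the more the delta smooths out background fluctuations.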
If your use case includes keeping bluetooth enabled, you may wish to set the IdleTimeout to lower energy consumption when not in use.
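On BlueZ, at least for input devices (mice, keyboards), the relevant knob lives in input.conf rather than main.conf; a sketch, assuming a stock BlueZ install (the 30-minute value is just an example):

```
# /etc/bluetooth/input.conf
[General]
# Disconnect idle input devices after 30 minutes (value is in minutes; 0 disables)
IdleTimeout=30
```

Restart the service afterwards (`sudo systemctl restart bluetooth`) for the change to take effect.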
Hardware acceleration does help at higher resolutions and I do recommend keeping it on, as software decoding only outperforms it below 1080p, and even there not by much. The problem is just that the hardware decoder at this point uses way more power than it should (or than it does on Windows), so video playback still makes quite a big difference.
Whether this has a good battery impact really depends on the workload. The APU generally performs most efficiently with all cores active, because common IP blocks shared by all cores have to be powered up whenever any core is active.
For example, if you have a multi-threaded task that nominally takes 30 s when split equally across 6 cores and you take 2 cores offline, that same task will logically take ~45 s.
Your outcome will be:
The task taking 50% longer.
The common blocks being active longer.
The individual cores being inactive longer.
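As a toy model of that tradeoff (perfect parallel scaling assumed, and the power figures are purely hypothetical): with fixed total work in core-seconds, offlining cores stretches wall-clock time, and if the always-on common blocks draw enough power, total task energy goes up rather than down.

```python
def runtime_s(core_seconds, cores):
    """Wall-clock time for a perfectly parallel task on `cores` active cores."""
    return core_seconds / cores

def energy_j(core_seconds, cores, p_common, p_core):
    """Task energy: common IP blocks burn p_common watts for the whole run,
    and each active core burns p_core watts."""
    t = runtime_s(core_seconds, cores)
    return t * (p_common + cores * p_core)

WORK = 180.0            # core-seconds, e.g. 30 s split evenly across 6 cores
P_COMMON, P_CORE = 3.0, 1.0  # hypothetical, purely illustrative

e_all = energy_j(WORK, 6, P_COMMON, P_CORE)  # 30 s * 9 W = 270 J
e_off = energy_j(WORK, 4, P_COMMON, P_CORE)  # 45 s * 7 W = 315 J
# Here offlining two cores *costs* energy, because the common blocks
# stay lit for the extra 15 seconds of runtime.
```

Flip the ratio (tiny common power, hungry cores) and offlining wins, which is why it really does depend on the workload.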
My hypothesis is that for a multi-threaded workload you’ll not be any better off, but if you’re running single-threaded workloads across a bunch of cores you will be.
I think to really prove this out, you’ll want to come up with a few repeatable, representative workloads that you can script and benchmark.
Also, if you haven’t already applied it, you should consider applying the amd-pstate preferred core patch series. This will make sure the scheduler is biased toward the most efficient cores.
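On a kernel with the series applied, you can sanity-check the setup from sysfs (paths as in the upstream series; they may differ or be absent on your kernel, so treat this as a sketch):

```
# Confirm amd-pstate is the active cpufreq driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Whether preferred-core ranking is enabled
cat /sys/devices/system/cpu/amd_pstate/prefcore

# Per-core ranking hint the scheduler consumes (higher = more preferred)
cat /sys/devices/system/cpu/cpu0/cpufreq/amd_pstate_highest_perf
```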
I have a script that switches to power-saver when I disconnect from the power supply, plus some wrappers around my build scripts that set performance during a build, then drop back to power-saver if on battery. I also leave libvirtd and its associated bridges disabled by default, enabling them when needed (mostly while functional-testing multiple VMs) and disabling them afterwards, though I think that doesn’t help once they’ve been activated. All of this is probably, more likely, most definitely overkill, but it’s what I do.
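One way to automate the AC/battery switch (a sketch using power-profiles-daemon’s `powerprofilesctl`; the rule filename is my own invention, and the power_supply device name varies between machines):

```
# /etc/udev/rules.d/99-power-profile.rules  (hypothetical filename)
# On AC unplug, drop to power-saver; on plug-in, return to balanced.
ACTION=="change", SUBSYSTEM=="power_supply", ATTR{online}=="0", RUN+="/usr/bin/powerprofilesctl set power-saver"
ACTION=="change", SUBSYSTEM=="power_supply", ATTR{online}=="1", RUN+="/usr/bin/powerprofilesctl set balanced"
```

Reload rules with `sudo udevadm control --reload`. For the build-wrapper case, `powerprofilesctl launch -p performance <build command>` holds the performance profile only for the command’s duration and reverts automatically, which avoids the manual switch-back.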