[TRACKING] Linux battery life tuning

The only time I’ve seen 2.3W is with the screen off.

So that one is a typo? You meant 3.3 or something?

Yes, or an outlier that stuck in my head from a series of runs. 3.5W at idle seems pretty obtainable with what’s around currently; 4W seems more normal. Actually doing things, 5-6W.

Well, that put a bit of a damper on my enthusiasm to try the 6.7 RC; I probably still will, though.

I may have missed something: are we expecting major improvements on AMD chips with kernel 6.7?

Since a lot of you are trying all the things for tuning, make sure you take a look at this:
Adaptive Backlight Management (ABM) - Framework Laptop 13 / Linux - Framework Community
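
For the impatient: it boils down to a single kernel parameter. A minimal sketch, assuming Fedora’s grubby and a kernel new enough to expose amdgpu.abmlevel (levels run 0 = off to 4 = most aggressive):

```bash
# sketch, assuming Fedora; amdgpu.abmlevel ranges from 0 (off) to 4 (most aggressive)
sudo grubby --update-kernel=ALL --args="amdgpu.abmlevel=3"
# reboot afterwards for the parameter to take effect
```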

Gonna give this a shot right after I rerun my test on the 6.7 rc, so I don’t test too many variables at once.

Though the difference between the screen at minimum and at my testing level of 20% is less than 1W, so the possible gains there are limited. But I’ll take what I can XD.

Given how it works, I’d expect the bigger improvement when you have the brightness “set” higher, since then it’s not as much of a trade-off for power consumption. But yes, you need to capture the brightness level and use the same “content” to confirm it.

The AMD testing that produced the number I quoted was specifically measured with video playback.

I have a pretty standardized test plan at this point; since I am testing something with a lot of external variables, it is important to keep as many of them as possible under control. Same YouTube videos, same local files and so on.
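
A crude but repeatable way to log battery draw during such runs is to read sysfs directly; a sketch assuming a Framework-style battery at BAT1 (adjust the path for other machines, and note some batteries expose current_now/voltage_now instead of power_now):

```bash
#!/bin/bash
# crude battery-draw sampler; power_now is reported in microwatts
BAT=/sys/class/power_supply/BAT1   # Framework 13 exposes BAT1; adjust if needed
for i in $(seq 1 60); do
    awk '{ printf "%.2f W\n", $1 / 1e6 }' "$BAT/power_now"
    sleep 5
done
```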

Well, good thing a large portion of my test list is video playback XD

6.7-rc3 alone (with stock TLP config) made absolutely no difference in my tests.

As another unrelated data point: hardware-accelerated video playback in Firefox is somehow a lot worse in Wayland mode than in XWayland mode.
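
If anyone wants to reproduce the comparison, the backend can be forced per launch via an environment variable (run one at a time against the same content):

```bash
# force XWayland
MOZ_ENABLE_WAYLAND=0 firefox
# force native Wayland
MOZ_ENABLE_WAYLAND=1 firefox
```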

@Adrian_Joachim Did you apply the epp-prefcore patch series that didn’t make the rc3 merge window? There is a v11 which now applies cleanly with the other pstate fixups:

https://patchwork.kernel.org/project/linux-kselftest/patch/20231129065437.290183-8-li.meng@amd.com/
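
If anyone wants to try them, a rough sketch of pulling the series onto a 6.7-rc3 tree, assuming b4 is installed (it fetches a whole series by message-id):

```bash
# inside a git checkout of the kernel sources
git checkout -b epp-prefcore v6.7-rc3
# fetch the whole series from lore by its message-id; b4 writes a .mbx file
b4 am 20231129065437.290183-8-li.meng@amd.com
git am ./*.mbx
# then build as usual, e.g. make olddefconfig && make -j"$(nproc)"
```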

BTW, the powerstat tests above didn’t include the epp-prefcore patches, and weren’t particularly optimized (they were actually based on the sentry-fsync kernel). It just happened to be the kernel I had installed at the time, to show that the TLP vs PPD differences are negligible.

Gotta be honest, I have no idea how to apply a kernel patch. It’s just bone-stock 6.7-rc3.

Yeah I think it’s pretty well established now that the settings matter more than the thing that sets them.

Apart from the hardware decoder thing the power usage looks pretty good already at this point.

I just can’t believe the hardware decoder in a 14nm Intel chip from 2017 is somehow almost twice as efficient as one on a 4nm TSMC chip from 2023, so there must be some software issue there.

For reference: the current Rawhide kernel 6.7-rc3 WITH the epp-prefcore patches + the pstate fast-notify fix + cros_ec_lpc applied, on the same FC39 userspace and tuning as the tests above, is significantly better for me.

The Dell has a 1165G7 and LPDDR4x RAM (16GB); the FW has a 7840U and the Crucial 2x16GB DDR5 kit.
Brightness is set to 30% on both (they have the same peak brightness, so levels should be similar).
Both are on battery. The FW has a larger 61Wh battery, the Dell a 50Wh one.
The difference is quite striking, with the Dell predicted to outlast the AMD by a good margin while at 47% charge vs the AMD’s 74%.
Notice the comparatively crazy high temps on the AMD (over 50°C vs below 40°C).
The OS is the same, with the same level of updates. The Dell is even running Vorta + Teams in the background (I forgot at the time of testing), which the FW isn’t.

Also, with Intel I used to be able to confirm it was using HW-accelerated decoding with intel_gpu_top… what’s the equivalent for AMD?


The situation on AMD needs to improve, or all the noise about the better process node and efficiency is gonna stay a rumor in my book.

I haven’t used Fedora in a long time (and only did so briefly, out of curiosity), but it shouldn’t really matter. The KDE spin is basically Fedora with KDE Plasma as the default DE and the relevant packages installed.

It’s the same concept as Ubuntu vs Kubuntu - the latter is exactly the same as the former with a change in the default package set for the installer. You can turn one into the other by manually [un]installing the relevant packages yourself.

Other distributions, such as Debian, instead offer you the choice at the point of installation.

As with most things, it’s a matter of preference. I don’t think you can make a universal claim for either of these.

Plasma offers a lot more customisation options bundled into the DE itself instead of relying on “plugins”. It has a more “traditional” UX in terms of look and feel and is arguably far more flexible for customising it to your needs.

Gnome, on the other hand, uses a slightly different user design philosophy and focuses on a predefined and, arguably, simplified and streamlined user experience. This means you may have to rely on additional plugins to achieve the customisation you might want if the functionality you require is not provided. Gnome has more of an Apple-esque “our way or the highway” approach, which is fine for people who like that design philosophy, but perhaps not suitable for those who prefer to have more control over their UX setup.

There are very strong feelings on both sides as to which is “superior” (depending on who you ask), but in any case you can’t go wrong with either: try both, maybe in a virtual machine if you prefer, and see which one you like more.

This is a very good point. There should be no such thing as a particular desktop environment being “not supported”: either a whole distro is “approved” or none of it is. Gnome should have no higher precedence than KDE Plasma, and vice versa. Installing Plasma alongside Gnome or vice versa serves no purpose other than adding unnecessary bloat to your system and, perhaps even worse, potentially causing conflicts between the two DEs’ background services, such as those in charge of power management policies.

You would be correct that the “core” settings of a distribution would be the same. However, different desktop environments ship different background services (e.g. power management, as mentioned), and these may differ in their default policies. This means that under the default configuration, things like power consumption could be higher in one DE than in the other.
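
A quick way to check which power-management service a given setup actually has in charge (a sketch assuming a systemd distribution; TLP and power-profiles-daemon conflict if both run):

```bash
# only one of these should normally report "active"
systemctl is-active power-profiles-daemon.service tlp.service
```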

Here’s a vendor-agnostic tool for this purpose.

Thanks Mario,

it turns out Firefox is NOT using HW-accelerated decoding after all (unless it’s that 2%…)

You need to opt Firefox into it in about:config.
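
For reference, a sketch of the usual prefs involved; the exact set varies by Firefox version, the profile path below is a placeholder, and force-enabling bypasses the blocklist at your own risk:

```bash
# append the usual VA-API prefs to the profile's user.js
# (substitute your actual profile directory, or flip the same prefs in about:config)
cat >> ~/.mozilla/firefox/PROFILE.default-release/user.js <<'EOF'
user_pref("media.ffmpeg.vaapi.enabled", true);
user_pref("media.hardware-video-decoding.force-enabled", true);
EOF
```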

What are the test conditions here?

Yeah, I think something is fucky there; for 720p YouTube playback mine uses significantly less, with and without hw decoding (actually even less without hw decoding, since the hw decoder does seem to have excessive power use at this point). It might be Teams being Teams or something, though.

You can use amdgpu_top to see if it’s using hw decoding.
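
It’s roughly the AMD counterpart to intel_gpu_top (assuming the amdgpu_top package is installed):

```bash
# run while the video plays; a working hw decode path shows up as
# non-zero VCN (the video encode/decode block) activity
amdgpu_top
```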

Also, boy, I had to do a triple take; before I saw powertop I thought you were on Windows XD

Yep, 2% about matches 720p YouTube playback; without hw decoding it’s a flat 0.

But even if it weren’t, 720p YouTube playback with software decoding uses only a bit over 3W above idle.