Hi everyone! I'm experiencing really poor power consumption on my Framework 13 with a 7840U. I normally idle at about 10W. If I get everything optimal (idle, lowest possible brightness, no wifi/BT, no expansion cards), it goes down to about 6.5W, which is worse than most people here report (<4W). I'm running Debian Testing (kernel 6.5, amd_pstate=active, powersave governor) with TLP on KDE. Do you have any suggestions?
One hint is that amdgpu_pm_info reports 5-6W GPU consumption all the time… Is that GPU power or total SoC package power? It seems like a lot.
I also see this, if I'm reading amdgpu_top right: it shows GPU Power at 5-6W at idle (my total idle draw from battop is 7-8W). I'm running EndeavourOS (Arch) with no real modifications.
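(For anyone wanting to compare: amdgpu_pm_info lives in debugfs, so you can read the same counters directly. This assumes card 0 and a mounted debugfs, and needs root.)

```
# GPU power/clock report straight from the amdgpu driver
sudo cat /sys/kernel/debug/dri/0/amdgpu_pm_info
```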
I was getting around 6W "idle" until I switched to a hardware-accelerated terminal program. So my idle state wasn't very idle; I had been using BlackBox and it was hitting my CPU pretty hard just to draw numbers on the screen. Kitty is way better in my experience.
If you watch "cpupower monitor" for a bit, what frequencies/C-states are you seeing?
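(For reference, something like this works; cpupower ships in the linux-tools package on Debian/Ubuntu, and -i sets the sample interval.)

```
sudo cpupower monitor -i 5   # frequency + idle-state residency, 5 s samples
```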
Are you using the generic kernel or the OEM one? I have better results with OEM.
Hi @harryjph, this does seem high. I've found NotebookCheck's results repeatable for other laptops and would expect you to see a similar 4-6.8W idle.
Are you disabling wifi or simply not connected? Is autosuspend enabled for usb devices?
Have you looked at powertop or a similar tool to evaluate what may be consuming power?
What does top show in terms of processes keeping your cpu active?
Which kernel are you running?
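(If it helps, a quick way to run through those checks from a terminal; powertop wants root for a full report.)

```
uname -r                  # which kernel is in use
top -o %CPU               # anything chewing CPU at "idle"?
sudo powertop --time=60   # 60-second measurement pass, then a per-device/process report
```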
While observing idle power consumption is helpful to ensure parasitic drain has been resolved, it doesn’t provide much insight into usable run times.
If that is of interest to you, I encourage applying the testing methodology I defined above. Sharing your results here may also help others in the community test & tune their systems.
My observation is this often reads within 0.5W of powerstat or powertop during light use and about 3-5W lower at max use, so I'm thinking it approximates total package power. This is based on running a benchmark showing 30-42W. GPU load was 0-1%.
Watching Netflix with & without hw accel in Firefox: it looks like hw accel is advantageous, as the GPU uses less power than CPU-based rendering (thx AMD).
Fedora is already using PipeWire. Migrating Ubuntu to it seemed good for an avg power reduction of 0.5-1W. Streaming for 85 mins took 17% @ 6.6W avg. Est. endurance is over 8 hrs.
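(If you're not sure which sound server a given install is actually running, pactl reports it; "PulseAudio (on PipeWire x.y)" means PipeWire is in use.)

```
pactl info | grep "Server Name"
```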
Use case: benchmarking.
Running FurMark windowed at 1280 x 1024 and openssl RSA4096 concurrently, which maxes out GPU & CPU load. 35 mins took 25% @ 23.6W avg. Est. endurance 2:20.
Config: PPD balanced, EPP balance_power, usb autosuspend
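(For anyone reproducing the CPU half of that load: openssl has a built-in benchmark, and -multi/-seconds are standard openssl speed flags. The FurMark invocation depends on which Linux build you have, so I'll leave that part out.)

```
# peg all cores with RSA-4096 sign/verify, 60 s per test
openssl speed -multi "$(nproc)" -seconds 60 rsa4096
```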
Currently using the 6.5.0-1008-oem kernel. Looking at updates in 6.6, interested in retesting consumption. Also considering @David_Markey's 6.7 kernel with fixups, as @jwp has mentioned, or running Fedora Rawhide or Ubuntu mainline for additional testing.
Use case: large file downloads. powertop showed as much as 6.2W draw from the wifi card. Over time, 5GHz throughput would taper off by as much as 50%; 2.4GHz was consistent, albeit expectedly slower. It seemed unaffected by the status of power management via iwconfig.
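(iwconfig is the legacy interface; the modern iw equivalent is below, with wlan0 standing in for whatever your interface is called.)

```
iw dev wlan0 get power_save           # check current state
sudo iw dev wlan0 set power_save on   # toggle it and retest throughput
```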
HW acceleration using too much power does not mean CPU decoding uses less (though it does at 720p, and barely at 1080p 30). HW acceleration works and can handle 8K 60 without problems.
Hardware decoding a 720p 30 video on the Framework uses about 4-4.5W above idle, and 1080p 60 uses about 7.5W above idle. That is way more than it should be (hell, my 8th-gen Intel uses about half that for the same videos), especially with people reporting barely 1W above idle for 720p on Windows.
These are higher than I observed for Netflix on Firefox, which looks to be streamed in vp09 format; however, due to DRM, it passes through the Widevine CDM plugin. I'll test again to double-check resolution and framerate.
The video part of my testing battery is a 720p 30fps YouTube video in Firefox, 4K 60fps YouTube in Firefox, and 720p 30 plus 4K 60 in Kodi from local disk. Firefox and Kodi are within half a watt of each other there (and h264, h265, vp9, and av1 don't seem to make a significant difference, nor does bit-rate, at least in the hw-accelerated case; with software decoding it makes a big difference).
My conclusion is something is wrong with the hw acceleration drivers: the decoders work fine and decode whatever I throw at them, but they use way too much power doing so. The Windows numbers indicate the hardware is fine. If I got 1W over idle for 720p decoding on Linux I'd be over the moon; other than the hw decoder, the power consumption looks relatively fine, and the performance per watt is freaking amazing.
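(If anyone wants to rule out a silent fallback to software decoding while reproducing this: vainfo, from libva-utils, lists the advertised decode profiles, and mpv says which path it picked. some-video.mkv is just a placeholder for any local file.)

```
vainfo                             # should list H.264/HEVC/VP9/AV1 decode entrypoints
mpv --hwdec=vaapi some-video.mkv   # watch for "Using hardware decoding (vaapi)" in the output
```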
I'm on a Ryzen 6000 series chip and not yet using a Framework, but I also wish this would get even relatively close to Windows: on Linux it consumes about 17W (no config except auto-cpufreq), yet on Windows it only consumes 8W.
With VA-API hw accel, 31 mins used 7% battery at a 7.45W avg draw.
Without hw accel, 31 mins used 7.5% battery at a 7.98W avg draw.
Config for testing: PPD balanced, EPP balance_power, usb autosuspend, Xfce screen scaling at 0.8, sound muted, brightness at first bar, a few terminals open, browser closed, wifi on & connected, BT off, camera & mic off via hw switches.
I’ve been encouraging observation over 10% or more battery usage for the sake of comparative data. In this case, it looks like the video would need to loop 4-5X.
In my experience, looking at 'instantaneous' readings is often misleading (i.e., they don't necessarily correlate to actual run times).
Note: after installing mpv, it exhibited some artifacts and stuttering, so I uninstalled it and used vlc, the player I’d previously tested on the FW13.
Conclusions:
the differential between hw accel being on/off via Netflix on FF and a local vid is similar
mpv may have issues (at least on Ubuntu 22.04.3)
as @Michael_Wu notes in his thread (linked above) about the video file, the Fedora freeworld drivers may have higher power drain
To clarify, my observations are ~1W differential with hw accel on (more efficient) or off (less efficient). Not vs idle usage.
At the very least, my suggestion is to do the following (a command sketch follows the list):
a) ensure amd_pstate is active and set the EPP to balance_power or power
b) if running PPD, set to power-saver or balanced
c) enable usb autosuspend if it isn’t already
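A minimal sketch of checking/applying those from a shell (sysfs paths as on recent kernels; powerprofilesctl ships with PPD):

```
# a) amd_pstate driver status and EPP
cat /sys/devices/system/cpu/amd_pstate/status
echo balance_power | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference

# b) PPD profile
powerprofilesctl set power-saver

# c) USB autosuspend for all connected devices
echo auto | sudo tee /sys/bus/usb/devices/*/power/control
```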
It's not quite that bad on my Framework; the worst case is the 8K 60 YouTube video, and that maxes out at around 13W.
Well, you did test at pretty much the point where the curves meet; below that, sw decoding wins (not by much, but it does), and above it, sw decoding loses massively.
Also, you can use powerstat to get better readings than messing with battery percentages. In my testing I let it settle for a couple of minutes, then run powerstat for 8 minutes.
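(Something like this; powerstat's two positional args are the sample interval and sample count, and it reads the battery by default when discharging.)

```
sleep 120 && powerstat 1 480   # settle ~2 min, then 1 s samples for 8 min
```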
10% sounds excessive unless you have some very active background tasks or something. I have found after a couple minutes it settles.
Well, you did test at the one point where they are relatively equal; retest with 720p and 4K and you'll see quite a different story.
Don’t think there is anything specifically wrong with mpv
That's probably more down to settings than anything else. There are some tweaks you can make for a 1-2W difference in hw decode, but it's still way too much.
Yeah, the Windows observations are 1W vs. not decoding at all. On my old T480s, hw decoding a 720p 30 video takes a bit over 2W over idle, which is much more reasonable than the 4-5W the much newer Phoenix chip seems to use at this point.
My default EPP was performance. Changing the EPP to power (echo "power" | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference) drops me down ~0.7W idling. I'm still a far cry from idling at 3W or whatever, but it's still an improvement.
What is confusing, though: why is PPD not making this change for me? I haven't used Linux on a laptop in ~10 years, so I'm super out of touch, and the documentation is confusing to me. It reads like it would set this as a fallback, but because my laptop supports the "platform profile" (whatever that is), it uses that instead. I don't know what the platform profile does, but what it definitely doesn't do is change the EPP.
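(For what it's worth, both knobs are visible in sysfs, so you can watch exactly what PPD does and doesn't touch when you switch profiles; the ACPI platform profile is a firmware-level setting, separate from the per-CPU EPP.)

```
# what PPD drives on platform_profile-capable machines
cat /sys/firmware/acpi/platform_profile
cat /sys/firmware/acpi/platform_profile_choices

# the per-CPU EPP, which is the knob being left alone here
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences
```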
Powerstat/powertop are polling on an interval, so they show approximate draw.
It should be more statistically accurate to use a timer and % battery use to calculate avg draw
I also let things settle in for a few mins before testing.
Then, for whatever change I'm observing (a different EPP setting or video draw), my target is 10% battery use or more, as that helps smooth out any spikes and provides an accurate average draw. The timer starts when the battery drops 1%, and stops when the battery reaches the target percentage.
This also lends itself well to extrapolating run time for the whole battery to evaluate specific use cases. Such as: how long can I watch a video offline? Could be useful for a long flight. Or in the case of this thread, can I work offsite without an outlet for a whole day?
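(A minimal sketch of that calculation straight from sysfs, assuming the battery exposes energy_now/energy_full in µWh; check ls /sys/class/power_supply/ for the right name, often BAT1 on these machines.)

```
B=/sys/class/power_supply/BAT1
e0=$(cat "$B/energy_now"); t0=$(date +%s)    # snapshot at the 1% drop
# ... run the workload until the target percentage ...
e1=$(cat "$B/energy_now"); t1=$(date +%s)
avg_uw=$(( (e0 - e1) * 3600 / (t1 - t0) ))   # µWh consumed -> average µW
echo "avg draw: ${avg_uw} uW"
echo "est. full-battery runtime: $(( $(cat "$B/energy_full") / avg_uw )) h"
```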
I’ve tested & shared my results for Netflix and a local HD video which covers my use cases. Feel free to perform further testing & share your results. I encourage the above methodology for the reasons indicated if you’d like comparative data.
Good to know. Based on others' observations and my own, it looks like multiple commits in the 6.7 kernel should help us continue to move towards parity.
Unsurprising given recent developments in AMD load scheduling et al.