There’s an intel-microcode Debian package that you might want to try to see if that makes any difference. You might also try GNOME Power Manager for some historical charting to see if you can spot anything going on, along with htop or something like that.
FWIW, running Arch w/ 5.18.16 on a 1260P, I’ve had no issues with unexpected power usage, idle power consumption, or with system responsiveness so far.
Thanks folks, I’ve tried 5.19-trunk from Debian’s experimental repo with the same results and do have intel-microcode installed.
As it turns out, the issue is related to i3wm and presumably the lack of compositing. If I open FF/Chrome in fullscreen, there is no lag and scrolling is really smooth; the drawback is rendering artefacts – bunches of pixels throughout the screen get stuck and aren’t updated. Enabling/disabling VSYNC has some effect, but the artefacts remain. I tried switching to UXA rendering in Xorg, but then Xorg doesn’t start, failing on an assert about buffer size. Running a compositor in addition to i3 doesn’t make any difference. When FF is started in KDE, there is no lag, but there are some insignificant artefacts that later disappear.
So, I’ve moved to Wayland + sway and the problem is solved. No lag, no artefacts, works smoothly. Battery drain is still pretty high – 7–8W when idle with FF running – but now it falls below 9W and the GPU spends most of its time in RC6.
I have also just received my Gen 12 Framework and see approx. 10W idle usage on Debian. I manually added the 2 firmware files and made sure they are loaded properly, but still see >8W power consumption. Removing the HDMI and USB-A modules takes me to 6.2W, which is still more than I expected (50% display brightness, Wi-Fi connected).
I had read somewhere that disconnecting the modules helps.
Probably more useful in the case where your laptop is switched off though. Although if you are so inclined as to do it while it’s switched on, go right ahead.
Are you really using Sid? Why? I had problems getting Bullseye to work, so I went to Bookworm. How are you measuring your power consumption – powertop output when on battery, or a “physical” power meter?
One thing that could be a big saving is that Wayland might be more power efficient than X. I haven’t tested this on Intel systems (maybe someone wants to test and report back), and I don’t know what the idle differences look like, but on a mixed workload on a Ryzen 5000 running an Ubuntu 21.10 GNOME session, Phoronix measured an average 3W power consumption difference last year: GNOME's Wayland Session Shows Potential For Better Battery Life Than With X.Org - Phoronix
The way I’d approach it is to do a sanity check with powertop’s idle stats - if I leave my laptop alone w/ just powertop in a terminal, it gets to about 75% in the C10 state. If you’re not having significant (or any) C10 states, you should try to figure out why. powertop will tell you events/s which is useful, but also just ordering htop processes by time will probably also show you if there’s anything keeping the processor busy.
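If you’d rather script that sanity check than watch powertop, here’s a rough sketch that reads the kernel’s cpuidle counters straight from sysfs. The sysfs path is the standard kernel interface, but the state names (C1, C1E, …, C10) and whether cpuidle exists at all depend on your CPU and kernel – this is an illustration, not a powertop replacement:

```python
#!/usr/bin/env python3
"""Rough C-state residency check: take two snapshots of the kernel's
cumulative per-state idle time and print the share of wall time spent
in each state."""
import glob
import os
import time

def snapshot(cpu: int = 0) -> dict:
    """Read cumulative idle time (microseconds) per C-state for one CPU."""
    states = {}
    for d in glob.glob(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle/state*"):
        with open(os.path.join(d, "name")) as f:
            name = f.read().strip()
        with open(os.path.join(d, "time")) as f:  # cumulative usec
            states[name] = int(f.read())
    return states

def residency(before: dict, after: dict, interval_s: float) -> dict:
    """Percent of the measurement interval spent in each idle state."""
    total_us = interval_s * 1_000_000
    return {n: 100.0 * (after[n] - before[n]) / total_us for n in after}

if __name__ == "__main__":
    a = snapshot()
    if a:  # cpuidle may be absent (VMs, some kernels)
        time.sleep(2)
        b = snapshot()
        for name, pct in sorted(residency(a, b, 2).items()):
            print(f"{name:>6}: {pct:5.1f}%")
```

If the deepest state’s residency is low while the machine is sitting idle, that roughly matches what powertop’s idle stats view shows and points at something waking the package up.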
There are probably some other tips within this thread as things to test out: Linux battery life tuning
(For example, someone there mentioned replacing PulseAudio w/ pipewire dropped a watt of power usage).
Has anyone tried disabling cores yet? The power of 12th gen P series honestly seems overkill for normal use. Only time it would be useful for me is rust compilation. I wonder if they should have offered U series chips either instead of or in addition to the P series chips.
You can’t disable all of the E or P cores; you can only reduce them to a minimum of 1. I have never tried running with 1 P core – I disable the E cores more often.
My understanding is that because the E cores come in sets of 4, you’ll only notice a substantial power difference if you go down to 4 or 0 E cores. But from what you’re saying, you can’t go down to 0. So does going down to, let’s say, 4 E cores and 2 P cores give big power consumption gains?
I would guess in real world use it would be over half as fast because most of the time you’d struggle to utilise all 12 cores anyway.
Also wondering about 8 E Cores and 1 or 2 P Cores to make it more similar to the U series.
I was very surprised by this as well considering desktop boards can disable all of them.
Didn’t really try it. Most of the time when I was troubleshooting – seeing temps running up or stuttering in certain programs – I would go into the BIOS, disable as many E cores as I could, and see whether it helped performance. I was plugged in when those things happened.
Then this would be weird because you can choose 8 E cores all the way to 1 E core active, not in groups of 8.
4 e-cores form a cluster that has some resources/logic shared between the cores.
So when you are running only 1 e-core, the shared logic for that cluster is still active and consumes power.
Disabling a single e-core will reduce power somewhat, but disabling the whole cluster should reduce power draw by 4x (e-cores) + y (shared logic).
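To make that arithmetic concrete, here’s a toy model – the per-core and shared-logic wattages below are made-up placeholders for illustration, not measured Alder Lake numbers:

```python
def cluster_power(active_ecores: int,
                  per_core_w: float = 0.3,   # placeholder x: idle W per e-core
                  cluster_w: float = 0.5) -> float:
    """Toy model: the shared cluster logic (y) stays powered as long as
    any e-core in the cluster is online; it only goes away at 0 cores."""
    if active_ecores == 0:
        return 0.0
    return cluster_w + per_core_w * active_ecores

# Disabling one of 4 cores saves only x, but disabling the last core of
# a cluster saves x + y at once - which is why per-core disabling shows
# little effect until the whole cluster is off.
```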
It’s also possible to shut down all but the first core without a reboot with `echo 0 | tee /sys/devices/system/cpu/cpu*/online` (or do it more selectively if you like). I’m not sure whether it completely cuts off power or not. I don’t see any obvious improvement with a single core left online.
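For the selective version, here’s a small sketch over the same sysfs knob. The assumption that cpu12–cpu19 are the e-core threads (in the 20-thread layout discussed above) should be verified with `lscpu --extended` first, and writing requires root:

```python
from pathlib import Path

def online_path(cpu: int) -> Path:
    """sysfs file controlling whether a CPU is online (absent for cpu0)."""
    return Path(f"/sys/devices/system/cpu/cpu{cpu}/online")

def set_cpus(cpus, online: bool) -> None:
    """Write 0/1 to each CPU's 'online' file; requires root."""
    for c in cpus:
        online_path(c).write_text("1" if online else "0")
```

e.g. `set_cpus(range(12, 20), online=False)` would park the e-cores (again, assuming that’s where they sit in your topology), and `online=True` brings them back – no reboot needed.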
I’ve done some experimentation with this as well and also didn’t notice idle power use improve significantly by disabling P cores – basically within the margin of error.
Yes. I did a few better experiments right now:
3.5W – raw console, idle, 1 P core or all 20, with or without Wi-Fi
4.06W – Wayland, idle, 20 cores or 1P+4E cores
4.01W – Wayland, idle, 1 P core
5.1W – Wayland, Firefox (~20 tabs), idle, 1 or 20 cores
In all cases I did minimal interactions with the system and waited enough for measurements to become stable. Brightness is down to the minimum.
There is a 0.05W difference between the two Wayland idle cases, and I think it’s a real difference, but it’s really too little to be of any importance. As soon as I start doing anything, power drain goes to at least 7–8W.
It would be quite interesting to see what difference the number of online cores makes when doing real-world work, but it’s hard to benchmark. I was typing this post with all cores enabled – 7–8W. When I switched to 1 P core, it became 10W. Switched back to 20 cores – 8W, back to 1 P core – still 8.5W. 1P + 4E – 7.5W. So nothing definitive.
Intuitively, because you’re paying the power overhead for having all of the CPU cores anyway (only a small difference between all cores on and 1 core on at idle, as you demonstrated), it’s probably more efficient to utilise all of them: you finish your tasks faster and don’t need to raise the clock speed as much as you would if you had to massively ramp up one core – the “race to idle” idea. This would probably become more apparent on an intensive task. I should mention I’m saying all of this with 0 authority and proof xD. I hope someone who knows what they’re talking about finds this thread and gives their opinion.
I installed Debian testing with xfce today and noticed that the i915 errors are missing from the logs. Can anyone confirm that adding the firmware blobs is no longer required in Debian testing?