I have a pretty standardized test plan at this point, especially since I am testing something with enough external variables that it is important to keep as many variables as possible under control. Same YouTube videos, same local files, and so on.
Well good thing a large portion of my test list is video playback XD
@Adrian_Joachim Did you apply the epp-prefcore patch which didn't make the rc3 merge window? There is a v11 which applies cleanly now with the other pstate fixups.
BTW the powerstat tests above didn't include the EPP pref core patches, and weren't particularly optimized (it was actually based on the sentry-fsync kernel). It just happened to be the kernel I had installed at the time, to show that TLP vs PPD differences are negligible.
Gotta be honest I have no idea how to apply a kernel patch. It's just bone stock 6.7-rc3.
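(Not trying to turn this into a how-to thread, but for anyone curious, applying a patch to a kernel tree looks roughly like the sketch below. The branch and patch filename are just placeholders; use whatever version the patch series was actually posted against.)

```bash
# 1. Grab the kernel source you want to build against
git clone --depth 1 --branch v6.7-rc3 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux

# 2. Check the patch applies cleanly, then apply it
#    (epp-prefcore-v11.patch is a placeholder for whatever diff/mbox you downloaded)
patch -p1 --dry-run < ../epp-prefcore-v11.patch
patch -p1 < ../epp-prefcore-v11.patch

# 3. Reuse your current config and build/install as usual
cp /boot/config-"$(uname -r)" .config
make olddefconfig
make -j"$(nproc)" && sudo make modules_install install
```

On Fedora you'd more likely rebuild the distro kernel package instead of a vanilla tree, but the patch/build steps are the same idea.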
Yeah I think it's pretty well established now that the settings matter more than the thing that sets them.
Apart from the hardware decoder thing the power usage looks pretty good already at this point.
I just can't believe the hardware decoder in a 14nm Intel chip from 2017 is somehow almost twice as efficient as one on a 4nm TSMC chip from 2023, so there must be some software issue there.
For reference, the current rawhide kernel 6.7-rc3 WITH epp-prefcore patches + pstate fast notify fix + cros_ec_lpc applied, on the same FC39 userspace and tuning as the above tests, is significantly better for me:
Dell has 1165g7 and LPDDR4x RAM (16GB)
FW has 7840u and the Crucial 2x16 DDR5 kit
Brightness set to 30% on both (they both have the same peak brightness, so levels should be similar)
Both are on battery. The FW has a larger 61Wh battery, the Dell a 50Wh one.
The difference is quite striking, with the Dell predicted to outlast the AMD by a good margin, while being at 47% charge vs 74% on the AMD.
Notice the comparatively crazy high temps on the AMD (over 50°C vs below 40°C).
OS is the same, same level of updates. Dell is even running Vorta + Teams in the background (I forgot at the time of testing), which the FW isnāt.
Also, with Intel I used to be able to confirm it was using HW accelerated decoding using intel_gpu_top… what's the equivalent for AMD?
I havenāt used Fedora in a long time, and did so briefly out of curiosity, but it shouldnāt really matter. The KDE spin is basically Fedora with KDE Plasma as the default DE with the relevant packages installed.
Itās the same concept as Ubuntu vs Kubuntu - the latter is exactly the same as the former with a change in the default package set for the installer. You can turn one into the other by manually [un]installing the relevant packages yourself.
Other distributions, such as Debian, instead offer you the choice at the point of installation.
As with most things, it's a matter of preference. I don't think you can make a universal claim for either of these.
Plasma offers a lot more customisation options bundled into the DE itself instead of relying on "plugins". It has a more "traditional" UX in terms of look and feel and is arguably far more flexible for customising it to your needs.
Gnome, on the other hand, uses a slightly different user design philosophy and focuses on a predefined and, arguably, simplified and streamlined user experience. This means you may have to rely on additional plugins to achieve the customisation you might want if the functionality you require is not provided. Gnome has more of an Apple-esque "our way or the highway" approach, which is okay for the people who like the design philosophy, but perhaps not suitable for those who prefer to have more control over their UX setup.
There are very strong feelings on both sides as to which is "superior" (depending on who you ask), but in any case you can't go wrong with either - try both, maybe in a virtual machine if you prefer, and see which one you like more.
This is a very good point. There should be no such thing as a particular desktop environment being "not supported". Either a whole distro is "approved" or none of it is. Gnome should have no higher precedence over KDE Plasma and vice versa. Installing Plasma alongside Gnome (or vice versa) serves no purpose other than adding unnecessary bloat to your system and, perhaps even worse, potentially causing conflicts between the two DEs' background services, such as those in charge of power management policies.
You would be correct that the "core" settings of a distribution would be the same. However, different desktop environments have different background services (e.g. power management, as mentioned) where there may be differences in default policies. This means that under default configuration things like power consumption could be higher in one DE vs the other.
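One quick sanity check if you do end up with more than one DE installed is to make sure only one power-management service is actually in charge. A rough sketch; the service names assume a Fedora-style setup:

```bash
# Only one of these should report "active"
systemctl is-active power-profiles-daemon.service tlp.service tuned.service

# If power-profiles-daemon is the one running, check which profile it is applying
powerprofilesctl get
```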
Yeah I think something is fucky there; for 720p YouTube playback mine uses significantly less, with and without hw decoding (actually even less without hw decoding, because the hw decoder does seem to have excessive power use at this point). It might be Teams being Teams or something, though.
You can use amdgpu_top to see if it uses hw decoding.
Also boy I had to do a triple take; before I saw powertop I thought you were on Windows XD
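For the record, the minimal check with amdgpu_top (assuming it's packaged by your distro or installed via cargo) is just to run the TUI while the video plays:

```bash
# The media/VCN engine row should show non-zero activity while the video plays
# if hardware decoding is actually in use; with software decoding it stays at 0.
amdgpu_top
```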
Yep, 2% about matches 720p YouTube playback. Without hw decoding it's a flat 0.
But even if it wasn't, 720p YouTube playback with software decoding uses only a bit over 3W above idle.
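In case anyone wants to compare on the same basis, one simple way to read the draw is straight from the battery's sysfs node while the test video plays. A sketch; some batteries only expose current_now/voltage_now instead of power_now:

```bash
# power_now is reported in microwatts on most batteries
awk '{printf "%.1f W\n", $1 / 1e6}' /sys/class/power_supply/BAT*/power_now
```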
What is your power draw in similar conditions? I think I've tried pretty much everything I could easily try (short of changing kernels and stuff).
PPD vs TLP didn't have a significant impact.
I'm willing to install a different distro recommended by someone with low power draw, just to be able to make comparisons and understand whether I'm facing HW or SW issues.
I'm also really tempted to ask FW if they'd accept an exchange of my board + wifi adapter for the similarly priced Intel one + AX card (I think it'd be the i7-1360P).
But then again, what if both AMD's and Intel's current gen chips are not optimized for Linux yet, and I end up in the same situation again?
Would that be a mistake (assuming it's feasible at all)?
I'm stressing out a bit because I spent 1.6k euros on this and in crucial areas (battery life, performance) it's similar or inferior to the outgoing XPS. My return window closes in less than a week.
I was looking for this, but could only find the instructions for Fedora here. Checking the corresponding settings (Compositing: WebRender, HARDWARE_VIDEO_DECODING: default available) makes me think this should be enabled…?
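Two things that usually help confirm it on the Firefox side (assuming libva-utils is installed; the MOZ_LOG module below is the one commonly suggested for debugging VA-API, not something specific to this thread):

```bash
# 1. Check the VA-API driver itself loads and lists decode profiles
vainfo | grep -i -e driver -e vld

# 2. Launch a fresh Firefox instance with decoder logging and look for VA-API being picked up
MOZ_LOG="PlatformDecoderModule:5" firefox 2>&1 | grep -i vaapi
```

If nothing VA-API related shows up, toggling media.ffmpeg.vaapi.enabled in about:config is the usual next step.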
For reference, the current rawhide kernel 6.7-rc3 WITH epp-prefcore patches + pstate fast notify fix + cros_ec_lpc applied, on the same FC39 userspace and tuning as the above tests, is significantly better for me
@jwp, I'd like to give this a try. I'm fairly new to the Linux world but do I:
So what I got from skimming through this topic: kernel version is the thing that makes the biggest difference. I used KDE Neon as a first try-out on my newly arrived AMD 13"… apparently it runs with 6.2, which seems a tad outdated.
@japsy: one of the reasons I don't want to post any how-to's is that my own use patterns are NOT what I would ever consider a "normal" user's, and what I do to my systems personally for hobbyist/play is NOT what I would ever want to support or recommend in any sort of professional setting. I'm happy to try to re-create issues and chime in when I've noticed something with my play/experiments.
Having said that, this patch applies cleanly to the FC39 6.6 kernel, and includes the current amd-prefcores patches/pstate fixes and UIP timer fixups, as well as the cros_ec_lpc EC patch. (The rtc-cmos ID fix is in mainline 6.6, so when it graduates from updates-testing to updates you'll get it.)
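(If you want it before it graduates, the usual way on Fedora is to pull it from updates-testing explicitly; the repo name below assumes stock FC39 repos:)

```bash
# Pull the newer kernel from updates-testing ahead of it reaching stable updates
sudo dnf upgrade --enablerepo=updates-testing 'kernel*'

# After a reboot, confirm what you're actually running
uname -r
```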
11W firefox 4k 60fps youtube video (with hw accel)
16W firefox 4k 60fps youtube video (without hw accel)
So far I am pretty sure there is something wrong with the hw accelerator: it accelerates just fine, hell, it does 8k 60 without breaking a sweat, but something with the power management is probably borked rn.
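If it helps narrow it down, the clock/power-gating state the driver reports can be read from debugfs during and after playback (needs root; the card index 0 is an assumption and may differ on your machine):

```bash
# Shows current GFX/SOC clocks, power readings, and the VCN power state
sudo cat /sys/kernel/debug/dri/0/amdgpu_pm_info
```

If VCN stays powered up or the SOC clocks stay pinned after playback stops, that would fit a power-management bug rather than a problem with the decoder itself.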