Introducing the new Framework Laptop 13 with Intel Core Ultra Series 1 processors

Maybe a bit of a dumb question: how’s Linux thread scheduling on P-core/E-core systems? I got in with the 11th gen, so I’ve been on the last generation before the big move to that hybrid design since I got the Framework. That gives me reservations about upgrading too early and causing a regression in performance or battery life, especially given what I’ve heard about Alder Lake-onwards Linux scheduling on places like Reddit. But if my fear is unjustified, then I’ll probably plan on upgrading my board to the Core Ultra on release.

Oh, and can either of those concerns be mitigated by custom kernels? I use the cachyos-bore kernel which has an EEVDF scheduler right now. I’m not sure how that would translate over to the new Intel chip designs, or incorporate core pinning/parking.
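For what it’s worth, you can check what the kernel sees on a hybrid chip and pin things yourself. A minimal Python sketch, assuming the `cpu_core`/`cpu_atom` perf PMU sysfs nodes that recent kernels expose on hybrid Intel systems (they’re absent on non-hybrid machines, which the code handles):

```python
# Sketch: detect Intel hybrid-core topology via the perf PMU sysfs entries
# and pin the current process to P-cores only. The sysfs paths only exist
# on hybrid Intel CPUs with a reasonably recent kernel; parse_cpulist works
# on the kernel's standard cpulist format (e.g. "0-7,16") regardless.
import os

def parse_cpulist(s):
    """Expand a kernel cpulist string like '0-7,16' into a sorted list of ints."""
    cpus = []
    for part in s.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

def hybrid_cpus(kind):
    """kind is 'cpu_core' (P-cores) or 'cpu_atom' (E-cores); None if not hybrid."""
    path = f"/sys/devices/{kind}/cpus"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return parse_cpulist(f.read())

p_cores = hybrid_cpus("cpu_core")
if p_cores:
    os.sched_setaffinity(0, p_cores)  # pin this process to P-cores only
```

The same cpulists work with `taskset -c` from a shell, so you don’t need a custom kernel just to keep a latency-sensitive job off the E-cores.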

Or if it’s not a perfect alignment with the gnome panel, I’m going to spend days tweaking padding and border radius so that my OCD doesn’t get driven up a wall… :stuck_out_tongue:

Official support was added with Linux 5.18, I’m not sure if newer kernels improved on this even further. Also, these Ultra chips have 3 different types of cores, so I’m not sure how well this is supported yet.

Apparently there were at least some performance improvements with Linux 6.9 + Meteor Lake, see:

For the Ultra we get P-cores, E-cores, and now even more E-cores (for these the E may actually mean energy efficient, rather than the area efficient of the regular E-cores), just in case schedulers had the P/E split sorted out already XD.

The two separate E-cores (called LP E-cores) only consume slightly less power than the normal E-cores.

The big power savings come from the fact that the LP E-cores have separate power management from the rest of the cores. This means that when running a very light workload it can completely shut off power to all the other cores and run exclusively on the LP E-cores.

Unfortunately many common workloads are too intense and trigger power to be reactivated to all the other cores, crippling battery life. (Which is why Intel vs AMD battery life is currently highly situational: in loads light enough to run exclusively on the LP E-cores Intel usually wins, but as soon as the rest of the cores power up Intel falls behind.)

Lunar Lake (Intel’s new family of CPUs) gets rid of the distinction between E-cores and LP E-cores and puts all 4 of them on the separate power management (meaning that they are effectively all LP E-cores). They also have a new architecture that Intel says allows the new E-cores to be up to 70% faster than the old LP E-cores at the same power consumption. Overall dramatically increasing the load the CPU can handle while power is shut off to the P-cores.

Although Lunar Lake is targeted at lower-performance, lower-power laptops than Framework sells and doesn’t support replaceable memory, so we probably won’t see Lunar Lake in Framework Laptops. But Arrow Lake (which is supposed to be the successor to Meteor Lake) should feature many of the same improvements as Lunar Lake while being suitable for Framework laptops.

It doesn’t support DDR but it could support LPCAMM, so it’s not entirely out of the race, though at that point Zen 5-based mobile CPUs and probably even the second generation of the Qualcomm ones are also likely out.

Lunar Lake has on-package memory. That means the RAM is soldered onto the CPU package (right next to the die) and comes from Intel.

Edit: AMD’s Strix Point Zen 5 mobile CPUs were previously rumored to require LPDDR5X; perhaps you’re thinking of that? However, leaks closer to launch and AMD’s website both indicate regular DDR5 is supported by Strix Point. (Strix Halo might require LPDDR5X.)


Yeah, that is one step too far for me, as efficient as it may be. At least it doesn’t have on-package storage yet XD.

Not sure, a lot of stuff got announced recently with a lot of vague requirements. But anyway, with the availability of LPCAMM I am all for using it as soon as practical. LPDDR is just better outside of the having-to-be-soldered (and cost) bit, and with that solved, full steam ahead.


I doubt it even helps that much.

According to Micron, LPCAMM2 at 6400 MT/s consumes under 150 mW per 2 GB chip under full load and under 20 mW while active but not under load. Assuming 32 GB that would be 2.4 W under full load and 0.32 W when active but not under load.

With many computer components power draw increases mostly linearly with frequency as long as voltage is constant. Assuming that applies to ram then the 8500 MT/s that Intel is using would be around 3.2 W under full load in LPCAMM2 form factor. The 0.32 W number wouldn’t change as laptop CPUs reduce the ram frequency when not under heavy load.
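The estimate above can be sketched in a few lines. All numbers are the Micron figures and the linear-with-frequency assumption from the posts above, nothing more:

```python
# Back-of-envelope LPCAMM2 power math (Micron's per-chip figures, 32 GB module).
# Assumes power scales linearly with transfer rate at constant voltage.
chips = 32 // 2                        # 32 GB module built from 2 GB chips
full_load_w = chips * 0.150            # 150 mW/chip under full load -> 2.4 W
idle_w = chips * 0.020                 # 20 mW/chip active but idle -> 0.32 W
scaled_w = full_load_w * 8500 / 6400   # scale the 6400 MT/s figure to 8500 MT/s
print(round(scaled_w, 2))              # -> 3.19, i.e. roughly the 3.2 W above
```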

Intel claims a 40% savings by using on package ram, which would mean the reduction should be 1.28 W under full load and 0.128 W when active but not under load.

That means that when active but not under load the on package memory only reduces battery drain by ~0.2% per hour. When the ram is under full load the savings increase to ~2% per hour, however any load that runs the ram at 100% probably also runs the CPU and other components at high enough power that the ram power draw is still not meaningful by comparison.
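Those drain-per-hour percentages fall out directly, assuming the Framework Laptop 13’s 61 Wh battery (the wattage savings are the 40% figures computed above):

```python
# Back-of-envelope: what the claimed 40% on-package savings means per hour,
# assuming the Framework Laptop 13's 61 Wh battery pack.
battery_wh = 61.0
idle_saving_w = 0.32 * 0.40                           # ~0.128 W (active, no load)
full_saving_w = 3.2 * 0.40                            # ~1.28 W (full memory load)
idle_pct_per_hr = idle_saving_w / battery_wh * 100    # ~0.2 %/hr
full_pct_per_hr = full_saving_w / battery_wh * 100    # ~2.1 %/hr
```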

I hesitate to say that LPDDR is “just better”.

DDR5 has better latency (time to transfer a single piece of data), more flexibility (customers can purchase one module now and add another later to expand), and currently is much more common to find in replaceable form factors (although CAMM2 makes replaceable LPDDR5X a thing).

LPDDR5X has higher bandwidth (quantity of data transferred in a specific amount of time) and lower power draw.

In general bandwidth has a major impact on GPU performance while latency has a small to moderate impact on CPU performance.

In a thin and light laptop like the Framework Laptop 13 where battery life is a high priority and the only GPU options are iGPUs that share the main system memory I agree that LPCAMM2 LPDDR5X is the better choice.

However in a bigger and more performance focused laptop like the Framework Laptop 16 that offers a separate dGPU (which has its own much higher bandwidth GDDR6 built in) I can see a compelling argument to stick with DDR5 for its lower latency and greater flexibility. On the other hand more battery life is nice (even if it is a lower priority) and the dGPU is optional (and the iGPU benefits a lot from LPDDR5X) so it is hard to say which is the better choice.

You got a source for that claim?

If the latency thing is actually true, that would be an argument, but in most workloads more bandwidth trumps latency for actual performance, especially on chips loaded with cache.

The flexibility thing is only really relevant for the time period it takes for lpcamm to catch on (or not I guess).

And it has a worse iGPU than Lunar Lake, so it’s dead to me. AMD will be an option though, so I’ll potentially just go AMD.

Most benchmarks I’ve seen show DDR5 at around 90-100 ns latency and LPDDR5X-7500 at around 110-130 ns with the AMD 7040/8040 series memory controller.

Here are several examples that I picked randomly (well the Framework laptops were deliberate) from the NotebookCheck website (to the best of my knowledge all the devices listed here have the same AMD memory controller):

SODIMM also has the advantage of 2 slots being typical (vs 1 with LPCAMM). That opens up the flexibility to only fill one slot initially and fill the other later (not great for dual channel reasons, but DDR5-5600 is fast enough to not bottleneck a CPU too badly so it is reasonable to do in systems with a dGPU).


Interesting, looks like this may be the fabric having to run decoupled because of the higher frequency.

Valid point, but also a bit of an edge case. Getting replaceable memory is worth using slower, more power-hungry memory; being able to replace it in smaller increments or more cheaply is a lot less so, imo.

On my Zen2 desktop the latency penalty of the fabric being decoupled is nowhere near this large, more like 10 ns at worst. This is Zen4 mobile so I’d expect the penalty to be even less.

AMD has shared that with Zen4 they further reduced the penalty of being decoupled (because being decoupled is very common with how fast DDR5 is). With mobile AMD puts the memory controller in the same chip as their cores, whereas on desktop those are different chips (which results in the fabric mattering a lot more as it carries the data between the cores and memory controller, carrying between chips makes the system more impacted by fabric speeds).

I’ve also seen similar results on Intel CPUs, although their newer chips have aggressive power management that cripples latency when under low load which makes it harder to collect good data.

IIRC back with LPDDR4X vs DDR4 there was a similar latency difference and that was before ram had gotten so fast that fabric coupling and memory controller speeds couldn’t keep up.

Edit: Regardless of the reason why (I think it’s the LPDDR5X standard not having as good latency), LPDDR5X has significantly worse latency than DDR5.

TL;DR soldered RAM bad

You’ll get better battery life when suspended with LPDDR. When not suspended, you’ll mostly be worried about other power draws (screen, graphics, CPU).

A lot more bandwidth for a lot less power draw at apparently a bit worse latencies does still look like a pretty good deal. There is also even faster lpddr5x in the pipeline that may narrow that latency gap somewhat.