Unless you game or do some heavy computational task, you will notice no difference between those RAM kits.
iGPUs, at least, are definitely bandwidth starved, and the latency difference between 4800 and 5600 is very small, so the 5600 is kind of a no-brainer on the performance end.
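Rough numbers, if it helps (a quick Python sketch of the standard formulas; the CL values are just the ones that have come up in this thread, not anything kit-specific):

```python
# Back-of-envelope DDR5 numbers: absolute CAS latency and per-DIMM bandwidth.
# Memory clock is half the transfer rate, so tCAS(ns) = CL * 2000 / (MT/s).
# Each DDR5 DIMM is 64 bits wide in total (two 32-bit sub-channels).

def cas_latency_ns(mt_s: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds."""
    return cl * 2000 / mt_s

def dimm_bandwidth_gbs(mt_s: int) -> float:
    """Theoretical peak bandwidth of one 64-bit DIMM in GB/s."""
    return mt_s * 8 / 1000

for mt_s, cl in [(4800, 40), (5600, 46), (5600, 40)]:
    print(f"DDR5-{mt_s} CL{cl}: {cas_latency_ns(mt_s, cl):.1f} ns CAS, "
          f"{dimm_bandwidth_gbs(mt_s):.1f} GB/s per DIMM")
```

That works out to roughly 16.7 ns / 38.4 GB/s for 4800 CL40, 16.4 ns / 44.8 GB/s for 5600 CL46, and 14.3 ns / 44.8 GB/s for 5600 CL40, so the bandwidth gap is much bigger than the latency gap either way.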
I am, however, curious whether it is worth going with slower RAM for more battery life, but that is something we can only really judge once the laptops are out.
Side note: I’m looking at the specs of these modules and once again can’t understand what the (standard) JEDEC timings for them are as opposed to the (Intel-specific) XMP or (AMD-specific but not Framework-supported) EXPO timings listed front and center. The only non-XMP 5600 MT/s kits I’ve seen are CL46-45-45 or similar instead of CL40-40-40 for the XMP ones, yet Framework has generally suggested to avoid XMP:
Regarding the actual question, an additional $10 for 5600 MT/s seems like a no-brainer from the performance point of view to me. I don’t know how much influence the higher clock speed would have on power consumption, though, so if anyone could comment on that I’d appreciate it as well as the OP.
This is the 5600 MT/s kit I have in mind
For some bizarre reason, this kit’s XMP/EXPO timings are identical to its SPD timings, meaning that even if the laptop does not support XMP/EXPO, it should pull the identical SPD data and use that. I’m not sure what happens if the laptop cannot run the SPD data (such as pairing DDR5-5600 CL46 with the 6800U, which only supports DDR5-4800), but the 7040U CPUs do support DDR5-5600, and CL40-40-40-89 is a known JEDEC profile, if I recall correctly.
It’s actually somewhat common with DDR5 to have XMP/EXPO identical to SPD. I’ve seen it on plenty of kits (especially ones above 4800 MT/s).
If you pair DDR5-5600 CL46 with a CPU that only officially supports DDR5-4800, the RAM will automatically be run at DDR5-4800 CL40.
However, if the RAM also has an XMP/EXPO profile (even at the same speed as the SPD data) and the motherboard supports XMP/EXPO, the user can simply enable XMP/EXPO to make the RAM run at full speed.
So many DDR5 kits with fast SPD profiles also carry an XMP/EXPO profile at the same speed, to get around limits on which SPD speeds a given CPU will accept.
That’s what I remembered too.
There was a thread two years ago where someone actually tested battery consumption with different RAM frequencies.
It indicates that there could be a significant enough impact to consider lower-frequency RAM if you don’t need the performance.
But note that this was a small and limited test, and it was on DDR4:
https://community.frame.work/t/battery-life-impact-of-ram-memory-configuration-extra-data/6426
Taking a 50% hit on memory bandwidth for like 0.3W less idle consumption does not sound that reasonable.
I am very curious how that works out on the AMD one, though; I am sure someone will try eventually.
Also, how bad is it running single-channel memory?
I am thinking of getting a single 32 GB stick instead, so later on I can add another 32 GB to make it 64 GB total…
AMD’s CPUs/APUs are more sensitive to tight timings, and if you want to use the iGPU for gaming, higher frequency and the lowest possible timings can lead to better gaming performance
(note that XMP is not supported; you can only use JEDEC Class A timing products).
You are cutting your memory bandwidth in half, which will kneecap the iGPU quite a bit.
That may be fine for your application, or you could do 32+8 to get a bit of dual channel for the iGPU.
Gotta disagree with you there: the iGPU use case is a lot more bandwidth sensitive than it is latency sensitive. Also, since you are limited to JEDEC, the range of timings you can have is quite narrow anyway.
I thought it only ran in dual channel if you had the same capacity in both slots. Can you provide your source on this? I’d like to read up on how it works.
Intel calls it Flex Mode. I’m not sure what AMD calls it, but it works on my 3700X, so AMD supports it (it gets almost as much bandwidth as two equally sized modules).
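For anyone wondering how the asymmetric split works, here is a rough sketch of the usual behavior (as I understand it: the capacity matched across both slots gets interleaved as dual channel, and the remainder of the larger module runs single channel):

```python
# Sketch of asymmetric ("flex mode") dual channel, assuming the common behavior:
# the capacity matched across both slots is interleaved, the rest of the larger
# module is accessed single channel.

def flex_mode_split(slot_a_gb: int, slot_b_gb: int) -> tuple[int, int]:
    dual = 2 * min(slot_a_gb, slot_b_gb)   # interleaved (dual-channel) region
    single = abs(slot_a_gb - slot_b_gb)    # leftover on the larger module
    return dual, single

for a, b in [(32, 8), (32, 32), (32, 0)]:
    dual, single = flex_mode_split(a, b)
    print(f"{a}GB + {b}GB -> {dual}GB dual channel, {single}GB single channel")
```

So a 32+8 setup would give you 16GB that behaves like dual channel and 24GB that is effectively single channel; how much that helps depends on what actually lands in the interleaved region.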
Yeah, initially it was called Flex Mode, but it seems to have been a standard memory controller feature for quite a while now.
My 32+8 setup idea falls apart once you look at the prices for 8GB DIMMs; might as well just get 2x32 at that point.
I am personally torn between getting 2x16 now and then upgrading to >32GB DIMMs once those come out at JEDEC CL40 and become somewhat affordable, or just getting 2x32 right out of the gate.
Given it’s only about a $100 difference and 64GB is going to be enough for quite a long time, I am biased towards option 2 right now, even if those non-binary DIMMs do look fun.
Interesting, thanks for the clarification. How are you verifying the bandwidth? Is the report in MemTest86 good enough to use? Thinking maybe I should add an 8GB stick back into my wife’s laptop, which currently has a single 32GB stick in it.
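MemTest86’s number should be fine as a relative comparison. If you just want a quick in-OS sanity check before and after adding the second stick, something like this crude numpy copy test works too (it is nowhere near a proper benchmark, so only compare its output against itself):

```python
# Rough memory-copy bandwidth check. Not a real benchmark (use something like
# STREAM, Intel MLC, or AIDA64 for proper numbers), but the single- vs.
# dual-channel difference should still show up clearly.
import time
import numpy as np

N = 512 * 1024 * 1024          # 512 MiB source buffer, far larger than any CPU cache
src = np.ones(N, dtype=np.uint8)
dst = np.empty_like(src)

best = float("inf")
for _ in range(5):             # take the best of a few runs
    t0 = time.perf_counter()
    np.copyto(dst, src)
    best = min(best, time.perf_counter() - t0)

# The copy reads and writes the buffer once each, hence the factor of 2.
print(f"~{2 * N / best / 1e9:.1f} GB/s effective copy bandwidth")
```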
Looks like someone has confirmed the G.SKILL 64GB (2x32) – 5600 CL40 – F5-5600S4040A32GX2-RS kit working!
https://www.reddit.com/r/framework/comments/173wa6q/ryzen_7840u_arrived_gskill_is_working/
I’m thinking of picking that up for my upcoming 7840U (batch 3), and also rerunning the battery life tests for various RAM configurations.
Curious about the power draw difference, mainly between that, the equivalent Kingston kits, and Crucial’s 5600 CL46 kits (specifically that 96GB 2x48GB kit), especially since Crucial uses Micron chips.
I also found an inexpensive 2x8GB SK Hynix DDR5 4800MHz kit on Craigslist for $30 that would be interesting to compare.
We could possibly pool together some kits (or mail them to me for a week, if you’re comfortable with that) and I’d be glad to run the numbers (I’m also happy to mail mine to someone). I thought about just buying a bunch of kits off Amazon, testing them, and returning them, but I’m a bit iffy on the ethics of that.
For now, I’m leaning towards grabbing the G.SKILL 2x32GB 5600 CL40 set (which uses SK Hynix), which I plan to daily drive and keep, and comparing it with an SK Hynix 2x8GB 4800 (latency currently unknown) set. That should theoretically give some insight into power draw across 16GB to 64GB and 4800MT/s to 5600MT/s.
AFAIK these are all the important variables:
- Size
- Frequency
- Latency
- Chip manufacturer
  - Micron (Crucial)
  - Samsung
  - SK Hynix
- Rank
  - Single
  - Dual
Still deciding, so I’m putting out the info/option/feeler!
Uhh, ya think? You can, however, resell them to people here or on eBay. That would be the ethical thing to do.
Honestly, how wide a range do they affect battery life by, and how do you actually test for that without a massive margin of error? They all run at the same voltage, and I can’t imagine the current draw being that different between them.
Huh? If I’m understanding you correctly… that’s the same as returning them to Amazon or Micro Center and them selling them open-box. It’s well within a consumer’s rights to try out various products, and it’s covered under their return policies. Of course, there’s a wide range of “trying”, which is where the ethics come into play. But I’m talking about trying with care and knowledge, without abuse.
It’s basically equivalent to someone wanting to make smoothies. They buy 3 blenders, try 3, turns out 1 of them crushes ice way better than the others. So they return the other 2.
Same here. Buy 3 ram kits, turns out 1 of them is the most power efficient (or fastest or whatever). Return the other 2.
edit: I guess I answered my own question, lol
Anyways, to answer your other question:
See the prior post in this thread for a summary:
And see my thread for more info. Note that that was an early i7-1165G7 system with DDR4. In my (extremely limited) testing at that time, there was about 0.75W of variance.
As an example, with the new 61Wh battery:

If your system was drawing 8W on average, that equals a battery life of:
- 61 / 8 = 7.625 hours

If your system was drawing the extra 0.75W, so 8.75W total, that equals a battery life of:
- 61 / 8.75 ≈ 6.971 hours

That’s more than half an hour of battery life difference, and the gap will be bigger or smaller depending on your average power draw.
For example, if your system was in S0 sleep drawing an average of 1W, it’d last 61 hours. With the extra 0.75W, so 1.75W, that’s only ~34.86 hours, which is 26.14 hours less.
Conversely, if you were e.g. playing power-intensive games drawing an average of 18W, it’d last ~3.39 hours. The extra 0.75W, at 18.75W total, equates to ~3.25 hours, so only 0.14 hours less.
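Same arithmetic as a tiny helper, for anyone who wants to plug in their own draw numbers (61Wh capacity and the hypothetical 0.75W delta from my old DDR4 test):

```python
# Battery life estimate: hours = battery capacity (Wh) / average draw (W).
BATTERY_WH = 61        # new 61Wh Framework battery
RAM_DELTA_W = 0.75     # hypothetical worst-case RAM power delta (from my DDR4 test)

def runtime_hours(avg_draw_w: float, extra_w: float = 0.0) -> float:
    return BATTERY_WH / (avg_draw_w + extra_w)

for draw in (1, 8, 18):
    base = runtime_hours(draw)
    worse = runtime_hours(draw, RAM_DELTA_W)
    print(f"{draw:>2} W avg: {base:.2f} h -> {worse:.2f} h "
          f"(-{base - worse:.2f} h with +{RAM_DELTA_W} W)")
```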
With DDR5 on this new AMD platform we don’t really know yet, but we can find out using that same method: get an adequate sample size of deltas by measuring the power draw differences while controlling for everything except the RAM sticks, under a barebones Linux/Windows installation that has been proven to be consistent in idle power draw. It’s not perfect, but it’s pretty close. This is the same method I used to (accidentally!) discover the extra power draw issue some expansion cards had. It works (roughly, but adequately) for other hardware components, and for software too.
There are many other factors besides voltage that can affect the power draw, as outlined above.
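For the measurement itself, on Linux I’d just sample the battery’s reported discharge rate from sysfs while the machine idles. A rough sketch, assuming the battery exposes power_now (in µW) under /sys/class/power_supply, with a fallback to current_now/voltage_now for batteries that don’t; run it on battery power with the same minimal setup for each kit:

```python
# Rough idle power sampler for Linux. Run on battery, with as idle and as
# reproducible a setup as possible, once per RAM configuration.
import glob
import time

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def battery_draw_watts(bat_dir: str) -> float:
    try:
        return read_int(f"{bat_dir}/power_now") / 1e6          # µW -> W
    except FileNotFoundError:
        # Some batteries only expose current/voltage instead of power.
        amps = read_int(f"{bat_dir}/current_now") / 1e6        # µA -> A
        volts = read_int(f"{bat_dir}/voltage_now") / 1e6       # µV -> V
        return amps * volts

bat = glob.glob("/sys/class/power_supply/BAT*")[0]
samples = []
for _ in range(60):            # ~5 minutes of samples at 5 s intervals
    samples.append(battery_draw_watts(bat))
    time.sleep(5)

print(f"average draw over {len(samples)} samples: {sum(samples) / len(samples):.2f} W")
```

The absolute number will drift with screen brightness, background services, and so on, so what matters is the delta between runs with everything else held constant.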
I’m talking with a local person for their SK Hynix 2x8GB DDR5 4800MHz kit, agreed for $20 (good ol’ Craigslist!). If it goes well, I’ll have at least that to compare with
Edit: if anyone’s in Austin, TX and wants to meet up to do some in person RAM testing, lmk!
PSA (stealing @Todd_Freeman’s idea) — at least three people in Batch 1 seem to be having problems with lower-frequency (5200MT/s) RAM:
ETA: On the other hand, here’s a report of a working 4800MT/s module: