If it makes life easier and better for customers and support guys then I’m all for it.
But this is a corporation we are talking about here…
(I haven't watched the clip as I'm on hols)
We already have a standard. Why would we need another standard for the same thing?
Did you watch the video? It enables better performance at higher capacities because it reduces the length of the traces, improving stability. I'm actually cautiously optimistic about it. Dell submitted it to JEDEC, so it might actually become a real standard. It looks as simple to replace as SO-DIMMs are, so it doesn't reduce repairability.
I’m always on board for innovation, and if the industry decides that this is a good way forward I’m glad that Framework will be able to embrace the new tech easily and that my current laptop could take advantage of it with a new mainboard. Knowing that SODIMM has been around for 25 years, seeing a new RAM module that allows for thinner computers without requiring manufacturers to solder the RAM is great for repairability!
The new standard, having been worked on by JEDEC and Dell and opening up to laptop manufacturers everywhere, supports DIMM-to-CAMM adapters too and helps save materials, pin connections, and space.
What do you guys think?
Because I think it would help increase the options available to DIY Builders and people looking to find less-orthodox uses for their mainboards in this form factor
I dislike the fact that you have to replace the entire module to add more memory, as opposed to popping in an extra DIMM. Also, Dell appears to have royalties on it? Otherwise it seems like a good thing that Framework should move to!
@Shiroudan Dell does that for now, but that is changing.
As for changing the whole module, you could use the adapter version and then just add more memory to that, which could be a compromise? There were some potential performance improvements alongside the space saving, and cost savings.
Besides, it's not like people mix and match so much post-build, but I know some might add more memory down the road.
Well, it seems LTT is now covering the topic.
This seems like a cool change; I wonder if future desktop DDR can do the same too. But having to toss out the old module is sad.
Having watched the LTT video, it seems like a nice idea, but it's really yet another 'law of diminishing returns' as far as I'm concerned.
We have all this wonderful performance hardware, but until lazy and incompetent developers and coders actually make their software make proper, efficient use of it all, what's the point?
Fed up of buying software that just crawls along using 2 cores and sipping RAM like it's 2004.
@Jason_Dagless - that is a bit…aggressive. You’re no doubt correct that certain optimizations can be made, but I don’t believe that the general problem is as simple as you make it out to be.
I'm just speaking as a consumer and customer of high-end hardware and computational software, where the software cannot make use of said high-end hardware beyond a half-hearted 30% usage, etc.
You buy your 16-32 thread CPU and the latest RAM and GPU, and your image enhancement software can only rip through a batch using 20% of either CPU or GPU. I have a 2014-based PC, and modern software from the likes of, say, Topaz can only use a fraction of what it can do.
The developers go “oh just run several batches together!”
No! I just want to run one batch really fast, using all the power I paid for.
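For what it's worth, fanning a single batch out across every core is often straightforward on the developer side. A minimal sketch in Python, assuming a hypothetical `process_image` step (a stand-in for whatever per-image enhancement the real software does):

```python
import concurrent.futures
import os

def process_image(path):
    # Hypothetical per-image work; a real tool would do the
    # actual enhancement here instead of this placeholder.
    return path.upper()

def process_batch(paths):
    # Spread one batch across all available cores rather than
    # crawling through it on one or two.
    with concurrent.futures.ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(process_image, paths))

if __name__ == "__main__":
    print(process_batch(["a.png", "b.png"]))
```

In practice the bottleneck is often that the per-image kernel itself isn't parallel-friendly, which is presumably why vendors suggest running several batches at once instead.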
As a consumer I say try harder guys! Fewer ‘new features’ and just deliver better performance of what we already have!