Framework (2nd Gen) Event is live on February 25th

I’ll start by saying I’m absolutely loving the idea of the Framework 12; I’ll probably pick one up, as it will complement my Framework 16 perfectly as a proper portable. I couldn’t justify also owning the 13 because of how close they were in functionality, but the 12 seems like a great companion.

That desktop though… I just can’t wrap my head around it. It’s essentially an ITX case and a locked-down, unupgradeable ITX board using mobile-class hardware at full desktop pricing. What in any way is “Framework” about that? It doesn’t even have any proper expansion! Framework developed a whole new interface for the 16’s expansion bay, why not at least throw that on? Especially as this has presumably led to work on any Framework 16 upgrades being pushed back. Surely the better option would have been to build the desktop around a 16 mainboard and focus on bringing the new SoCs to that. It would be no less expandable than the current iteration while also providing benefit to existing 16 owners.
Side note: I am also disappointed by the constant AI discussion at this event. I appreciate that was probably an AMD talking point, but still, considering Framework’s previously documented anti-AI stance, this just feels a bit disingenuous.

Overall a very mixed-bag event. The regular 13 upgrade seems to be a nice iteration, though I have to say I am pretty disappointed there was no mention of a Framework 16 upgrade. The 7000-series 13 has already been replaced, while the 7000-series 16 is untouched, without even a hint that work is progressing on it.

4 Likes

The Desktop thing was weird. Doubt it will be a success.

The Framework Laptop 12 is cool though.

2 Likes

Whoops, I missed that. I thought I read all the comments well enough but yours must have slipped past.

Edit - It seems this is the line you are talking about:

I’m not sure that you and I agree with that first sentiment. Otherwise we do seem to mostly align on what some of the main driving factors behind the desktop were. I’m not sure we agree on the valence, though. It could be me just misreading your tone.

1 Like

My thoughts exactly!

I found the following video about the event!

The Framework Desktop Is An Interesting Take On SFF - PCWorld

1 Like

Sorry, when I first heard about it I was just venting. It doesn’t fit Framework’s philosophy, so I didn’t like it.

1 Like

CAMM2 memory is faster than SODIMM memory but slower than soldered. Crucial sells a CAMM2 DDR5-7500 module, which is really fast.

That being said, this APU, being quad-channel (256-bit), would need two 128-bit CAMM2 modules. It’s a first to have a quad-channel mobile APU. It’s plausible to me that this makes signal integrity more of an issue compared to having a single CAMM2 module.

I do not know what the performance penalty would be on quad-channel dual-CAMM2. If the choice was CAMM2 DDR5-6400 vs soldered DDR5-8000, I would choose CAMM2 every day of the week.
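To put rough numbers on that trade-off, here is a back-of-the-envelope peak-bandwidth calculation (illustrative only; the soldered 8000 MT/s and CAMM2 6400 MT/s figures are the hypothetical options from above, not measured specs):

```python
def peak_bandwidth_gbps(bus_width_bits: int, transfers_mts: int) -> float:
    """Theoretical peak memory bandwidth in GB/s:
    (bus width in bytes) * (transfers per second)."""
    return bus_width_bits / 8 * transfers_mts / 1000

# 256-bit quad-channel bus: soldered 8000 MT/s vs hypothetical CAMM2 6400 MT/s
soldered = peak_bandwidth_gbps(256, 8000)  # 256.0 GB/s
camm2 = peak_bandwidth_gbps(256, 6400)     # 204.8 GB/s
print(f"soldered {soldered} GB/s, CAMM2 {camm2} GB/s, "
      f"penalty {1 - camm2 / soldered:.0%}")  # ~20% lower peak bandwidth
```

So the choice above is roughly a 20% peak-bandwidth haircut in exchange for replaceable memory.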

I got my FW13 originally with 16GB, then upgraded to 32GB. I find it disappointing that the Framework Desktop does not have expandable memory. E.g. buying the 64GB version, then later upgrading to 128GB or even 256GB when higher-density CAMM modules become available. Or being able to replace faulty RAM modules. Or transplanting the CAMM modules to another device.

I suspect a lot of those 32GB Framework Desktops are going to become e-waste :frowning:

Another small detail is that the PCI Express slot is not open-ended. You can’t fit an x16 PCIe card without manually cutting the slot. I have seen builds online where a single GPU accelerator speeds up inference on huge models a lot by offloading just the critical inference stages to it.

That being said, the Framework Desktop, with its quad-channel AMD APU, is an LLM box, and LLM inference performance scales basically linearly with RAM bandwidth.
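The "scales with bandwidth" point can be made concrete with a crude estimate: for memory-bound token generation, every token has to stream all active weights from RAM once, so bandwidth divided by model size gives a rough ceiling on tokens per second (a sketch with illustrative numbers, not benchmarks):

```python
def max_tokens_per_second(bandwidth_gbps: float, model_size_gb: float) -> float:
    """Crude upper bound for memory-bound LLM decoding:
    each generated token streams all active weights from RAM once."""
    return bandwidth_gbps / model_size_gb

# Hypothetical: ~256 GB/s of bandwidth, a 70B-parameter model at Q4 (~40 GB of weights)
print(max_tokens_per_second(256, 40))   # ~6.4 tokens/s ceiling
# Same bandwidth, a 14B model at Q4 (~8 GB of weights)
print(max_tokens_per_second(256, 8))    # ~32 tokens/s ceiling
```

Real throughput lands below these ceilings (compute, KV cache, and overhead all eat into it), but the linear dependence on bandwidth is why these quad-channel boxes are interesting for LLMs at all.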

I really hope AMD and Framework can figure out CAMM modules going forward.

This product competes with the Nvidia Digits and the Apple Mac mini, and at 2400€ the comparison is favourable. I think Framework will sell lots of these units.

There are two AI bubbles. One is led by the likes of Sam Altman and Elon Musk, who promise artificial gods in exchange for trillions of dollars; in my opinion it is just your typical dot-com investor fraud.

The second is the actual value-adding products: diffusion and transformer models that really improve productivity, tools good at a narrow set of tasks. Microsoft Phi, Alibaba Qwen, DeepSeek, Mistral, Facebook Llama, Black Forest Labs Flux, Stable Diffusion, and more. All incredible tools that run locally on your computer and help you in some aspects of your workflows.

The Framework Desktop is part of a class of cheap LLM inference machines that can run large models at a low price. I’ll give you an obvious application: using one as an LLM server running Qwen 2.5 72B Q4 to serve two instances of a VS Code assistant, helping you and your colleagues write code entirely locally, without sending your codebase outside.
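For a sense of what that looks like in practice, here is a minimal sketch of a client talking to a local OpenAI-compatible server (llama.cpp and LM Studio both expose one); the URL, port, and model name are placeholder assumptions for whatever the server actually serves:

```python
import json
from urllib import request

# Hypothetical local endpoint; adjust host/port/model to your server's config.
API_URL = "http://localhost:8080/v1/chat/completions"

def build_completion_request(prompt: str,
                             model: str = "qwen2.5-72b-instruct-q4") -> dict:
    """Build an OpenAI-style chat completion payload for a local LLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for code-assistant use
    }

payload = build_completion_request("Explain what this function does: ...")
# To actually send it (requires a running local server):
# req = request.Request(API_URL, data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# reply = json.loads(request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```

The point is that nothing in the payload ever leaves your LAN; the editor plugin just needs to be pointed at the local URL instead of a cloud API.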

5 Likes

When AMD accelerators work, they are amazing. My 7900 XTX finally does 100 tokens per second on Qwen 2.5 14B. But my glob, it is hard to make ROCm acceleration work… For reference, Vulkan acceleration works without ROCm but gives me 20 tokens per second! It took me three weeks to make ROCm work in LM Studio and with Flux. And for Flux there is an added abstraction layer, ZLUDA, to make PyTorch acceleration mostly work. Mostly, because some things just don’t…

AMD has made great strides with the Adrenalin drivers. I really hope AMD can make ROCm work flawlessly eventually.

1 Like

Bad usage and not understanding what it does are why a lot of that happens. Poorly written policy? I bet the person responsible just left the “AI” to it and probably didn’t even read it afterwards. Healthcare? Oh man, I have worked in the sector enough to know there are worse problems than AI, and only the worst practitioners would leave their notes to AI.

Transformers are less AI than people think and more of autocorrect on steroids. Do you ever trust that co-worker who is always telling tall tales?

A lot of boilerplate code, fluffing up emails, and automating menial tasks is where it is at. Using it instead of exercising proper judgement, or instead of doing your job, is just plainly wrong. There are plenty of examples of this.

Replacing quick Google searches is a pretty good use. Are the models 100% accurate? Hell no, but web searches aren’t either. I would dare say that with the amount of disinformation these days they are at about the same level. People should only ever use web searches and AI prompts as a quick and dirty reference. These systems are not replacing experts anytime soon.

I recently helped a friend improve his resume that way. Resumes are parsed by AI, so use AI to write them, at least to get past the screening. I find this more acceptable than what a lot of people do: lie.

Training-data ownership is a really concerning area. So many companies stole billions of dollars’ worth of information, and nobody is holding them accountable. The future of AI should be open, unlike, say, “OPEN-AI”.

Please don’t get me wrong, I think we are on the same page: don’t use it in sensitive areas, and don’t take what models spit out as fact.

Current AI is not even reasoning. Transformers are more like a computer trying out random things until it finds something that kind of works, and that becomes the model. It mimics a pattern, but not the function behind it (unlike some other machine learning). It seems to write text like us, the output at least, but how it gets there is completely different. This last part is what is worrisome. If you think of it as a function, most models right now are a function that predicts the next set of words without any “thought” to it, even the “reasoning models”. Clever tricks make them go down some steps of “thinking”, but it is certainly not thinking.
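A toy way to see the “function that predicts the next word” framing: a bigram frequency model (vastly simpler than a transformer, but the same input/output shape, i.e. context in, most likely next word out):

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count which word follows which: the crudest possible next-word model."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Return the most frequent follower; no 'thought' involved, just counts."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" (seen twice after "the")
```

A transformer replaces the frequency table with billions of learned parameters and a much longer context, but it is still, at bottom, picking a likely continuation rather than reasoning about one.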

Yeah, if AMD could make it so that getting started was much easier, instant win.

1 Like

You guys confuse me

1 Like

If it properly implements the DDR5 spec then it will be perfectly fine to put 5600 MT/s modules in, they’ll just automatically run at 5200 MT/s instead.

I expect that the pre-built Framework Laptop 12 will probably include 5600 MT/s modules (as stocking 5200 MT/s would require additional logistics for Framework). That’s what my prior ThinkPad did (a 2666 MT/s DDR4 module that ran at 2400 MT/s because the CPU didn’t support higher).

Modern laptop CPUs don’t use chipsets. The IO capabilities built into the CPU are what you get.

Framework could add PCIe switches to share lanes (which is how chipsets work internally), however those add cost, complexity, and power draw (which is why laptop CPUs previously moved away from chipsets).

4 Likes

Let’s say in a year or a year and a half there comes a new desktop mainboard. This old one would still be a great homelab/NAS motherboard, and you could stick it in a generic mini-ITX NAS case. Even the lower-end mainboard would serve as a Jellyfin/Plex server for a long time. The CPU power would probably be pretty overkill, but the iGPU would handle transcoding etc. really, really well.

But, as with everything, for a general user it might just be e-waste. Then again, I guess the people buying Frameworks buy them with at least somewhat this kind of scenario in mind.

1 Like

I suspect that the FL12 MB will get subsumed into such a thing by some enterprising person. It has been done with an FL13 MB, and I suspect the FL12 MB will be different enough to make the project worth revisiting.

I could see businesses who do CAD work buying these up like mad. All those GPU cores will make 3D CAD work run smoothly.

2 Likes

While it has largely been presented as a gaming/AI device, I see it as a CAD workstation: a significant amount of memory, a heap of GPU cores, and a cranking processor. The business world isn’t that concerned with upgradability; they will pension the machine off after 3-4 years and replace it with a new one. So soldered RAM is not a problem, but straight-out performance, especially for CAD use, is the big deal. And at that price it will hit the spot financially, I think you will find.

Been there, done that - doing CAD work was part of my employment.

5 Likes

Let me elaborate:

The FW13 is a laptop. My use case is something with around 4h of battery that can run CAD tools, IDEs, and LLMs.

The FD is a mini-ITX desktop. But you can’t easily add a GPU (eGPUs are more expensive and slower), nor can you repair or change the memory configuration. It has a very specific and narrow use case: workloads that need lots of unified memory and bandwidth between the CPU and the GPU. It’s a one-trick pony.

Now, I believe the FD-128GB is a killer product for the narrow use case of running 70B coding/writing LLMs. But the lower-memory variants, in my opinion, are going to become obsolete far more easily, because they compete with cheaper, repairable, expandable mini-ITX systems. E.g. I can get a dual-channel Ryzen and add a GPU later when I have the money.

Agreed. In my ideal world, the FD mainboard would have been the new FW16 mainboard, and the FD itself just a case to stick that mainboard in.

But I’m nitpicking here. Framework is a smaller company; they lack the manpower Dell and the big bois have to make products. I deeply appreciate that they make the FW13 and keep making its parts. It would be a really tall order to make a quad-channel FW16 mobo with CAMM on their first try, and the FD-128GB as it stands is an Nvidia Digits killer. Perhaps it doesn’t need to be anything more than that.

I would be curious to see the productivity benchmarks! I know SolidWorks is really single-thread bound. I also wonder how it plays with Unity, Unreal, and other game-dev tools.

I see everyone moaning about this, but I keep coming back to the point that this is really aimed at commercial use, where upgrading is not important but outright performance is. And all the moaners about soldered memory clearly don’t have an appreciation of the difficulty of designing the PCB to squeeze that extra bit of performance out of the hardware, and of how having to route signals up through connectors kills that performance.

This comment isn’t aimed specifically at you, but at all those who have a problem with soldered memory generally.

2 Likes

The Framework 12 has a 13th-gen Intel i3 as the top spec: anemic GPU and anemic cores, so it wouldn’t really be able to compete with any gaming handheld.

This is a misunderstanding. Hundreds of Chinese and other companies will create (or have already created) cheaper comparable devices “where upgrading is not important, but outright performance” is. Framework will not be the winner in this competition; they won’t do well in it. If they carry on with products such as this desktop, they will end up as just another throwaway electronics company and finally go broke.

Anyone wanting just raw performance and ideal price/performance will clearly not buy Framework. You know that yourself.

Framework’s basic business idea, which is also their most important selling proposition, is “fix consumer electronics” etc. This is basically why they are attractive, and why we are here.

5 Likes