Is the Framework Desktop suitable for this workload?

Hey all,

Just ordered the 64GB Framework Desktop to use as a family PC and gaming machine, and to run a local LLM for processing Home Assistant voice interactions, most likely all on Linux with the LLM in Docker. My question is: will this machine be suitable for these tasks?

I read the blog post about running a local LLM and it seems like the 64GB version will be suitable for my needs, but will it be able to process Home Assistant stuff while also running a game, say?

I’m struggling to justify the additional cost of the 128GB model, especially when non-Framework alternatives are so much cheaper, but I’ll be incredibly annoyed with myself if I underspec!

Thoughts greatly appreciated, cheers.

It all depends on which LLM you're going to use. I find Gemma 3 12B very capable for the use cases you describe, and it needs around 12GB of memory. If you split the 64GB 32/32 between system RAM and GPU memory, you'd still have around 20GB left on the GPU side for gaming, which shouldn't be an issue memory-wise.
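To make the arithmetic explicit, here's a tiny Python sketch of that budget (the 32/32 split and the ~12GB figure are the assumptions above; KV cache and context length will add a bit on top):

```python
# Back-of-the-envelope memory budget for the 64GB machine, using the
# numbers from this post (illustrative only).
total_unified_gb = 64
gpu_share_gb = 32                 # assumed 32/32 split of the unified memory
llm_gb = 12                       # roughly what a quantized Gemma 3 12B needs

left_on_gpu_gb = gpu_share_gb - llm_gb
left_on_cpu_gb = total_unified_gb - gpu_share_gb

print(f"GPU side: {left_on_gpu_gb} GB free for the game after loading the LLM")
print(f"CPU side: {left_on_cpu_gb} GB for the OS, Docker, Home Assistant, etc.")
```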

But I don’t know what in-game performance will look like when a prompt comes in, processor-wise. Answering a prompt will max out your processor to get the maximum tokens/s, regardless of which CPU/GPU you have. And the 64GB and 128GB versions have the same processor performance, so it doesn’t matter there either.
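If you want to see how hard a prompt actually hits the machine, you can measure tokens/s directly against the runtime's API. The sketch below assumes Ollama in that Docker container (the runtime and the gemma3:12b tag are my assumptions, the OP only said "Docker for the LLM"); Ollama's /api/generate response includes eval_count and eval_duration, which give you tokens/s. Run it while a game is open and watch how the frame rate reacts.

```python
# Rough tokens/s measurement against a local Ollama instance
# (assumption: Ollama on its default port 11434 with gemma3:12b pulled).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:12b",                      # hypothetical model tag
        "prompt": "Turn off the living room lights.",
        "stream": False,
    },
    timeout=300,
)
stats = resp.json()

tokens = stats["eval_count"]                        # generated tokens
seconds = stats["eval_duration"] / 1e9              # reported in nanoseconds
print(f"{tokens} tokens in {seconds:.1f} s -> {tokens / seconds:.1f} tokens/s")
```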

Thanks, that’s really useful. Sounds like it should be fit for purpose as we don’t use voice assistants all that much. Hopefully it’ll arrive in time for Christmas and then I can tinker.

Just beware that once you start using local LLMs, you will want to try larger models, and you may regret not getting the 128GB version.

With the current trend of shipping sparse MoE models, which run very fast thanks to a small number of active parameters (so they suit RAM/unified-memory systems) but are larger than equivalent dense models, the more memory you have, the better.
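A rough illustration of why that pushes you toward more memory (all numbers below are made up for illustration, not specs of any real model): the weights you have to keep resident scale with total parameters, while per-token compute scales with active parameters.

```python
# Sparse MoE vs dense, with illustrative numbers only: memory footprint
# follows *total* parameters, generation speed follows *active* parameters.
bytes_per_weight = 0.6   # assumed ~4-5 bit quantization on average

models = {
    "dense 12B":        {"total_b": 12,  "active_b": 12},
    "sparse MoE ~110B": {"total_b": 110, "active_b": 5},   # hypothetical
}

for name, m in models.items():
    weights_gb = m["total_b"] * bytes_per_weight
    print(f"{name}: ~{weights_gb:.0f} GB of weights in memory, "
          f"~{m['active_b']}B params touched per token")
```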

With 128GB you will be able to run gpt-oss-120b, glm4.5-air, etc. Not to mention that you could keep a few models pre-loaded at the same time, like a TTS model, an embedding model, etc.
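Keeping models resident is mostly a runtime setting. As a sketch, assuming Ollama again (the runtime and the model tags below are my assumptions): sending a request with an empty prompt loads a model, and keep_alive=-1 keeps it in memory indefinitely, so several models can sit pre-loaded side by side.

```python
# Pre-load a couple of models and pin them in unified memory
# (assumes Ollama on localhost:11434; model tags are examples).
import requests

OLLAMA = "http://localhost:11434"

for model in ["gpt-oss:120b", "gemma3:12b"]:
    requests.post(
        f"{OLLAMA}/api/generate",
        json={"model": model, "prompt": "", "keep_alive": -1},
        timeout=600,
    ).raise_for_status()
    print(f"{model} loaded and kept resident")
```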
