Wow! FWLP16 compilation performance is outstanding!

This is the first full week I have had my Framework Laptop 16 (Batch 18) up and running for real work.

Currently I have a few projects that I transferred over from my 13" Framework, and compilation times have dropped noticeably. I have not done proper time studies yet, but will do so over the course of the next week.
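For the time studies, I'll probably use a quick harness along these lines. This is just a sketch: the project path is hypothetical, and the cargo commands stand in for whatever build command each project actually uses.

```python
# Rough timing harness for the compile-time comparisons.
# PROJECT_DIR and BUILD_CMD are placeholders for a real project.
import os
import subprocess
import time

PROJECT_DIR = os.path.expanduser("~/projects/project-a")  # hypothetical path
BUILD_CMD = ["cargo", "build", "--release"]               # swap in the real build command

def timed_clean_build(runs: int = 3) -> list[float]:
    """Clean-build the project several times and return wall-clock seconds per run."""
    results = []
    for _ in range(runs):
        subprocess.run(["cargo", "clean"], cwd=PROJECT_DIR, check=True)
        start = time.perf_counter()
        subprocess.run(BUILD_CMD, cwd=PROJECT_DIR, check=True)
        results.append(time.perf_counter() - start)
    return results

if __name__ == "__main__":
    times = timed_clean_build()
    print("runs:", [f"{t:.1f}s" for t in times])
    print(f"mean: {sum(times) / len(times):.1f}s")
```

Running a few clean builds and averaging should smooth out caching and thermal noise between the two machines.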

First off, my tooling: VS Code for Node, plus JetBrains PyCharm, RustRover, CLion, and WebStorm for HTML (not Node).

On one project I am using Svelte and Tauri, LLaMA for the AI model [private], and storage across Postgres, Mongo, and several graph database implementations.

Compilation is extremely fast: initially, my project A would compile in 5.3 minutes on the 13; on the 16 it takes less than 2.

The LLaMA responses (very small test model) took about 40 seconds for text and 9+ minutes for graphics on the 13; on the 16, about 5 seconds and 6 minutes respectively.
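For anyone who wants to reproduce the round-trip timing, here is a minimal sketch, assuming a local Ollama server on its default port 11434; the model name and prompt are just placeholders for whatever you have pulled locally.

```python
# Time a single prompt round trip against a local Ollama server.
# "llama3" is an example model name, not necessarily what I ran.
import json
import time
import urllib.request

def time_ollama(prompt: str, model: str = "llama3") -> float:
    """Send a non-streaming generate request and return wall-clock seconds."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    elapsed = time.perf_counter() - start
    print(body["response"][:120])  # first bit of the model's reply
    return elapsed

print(f"round trip: {time_ollama('Say hello in five words.'):.1f}s")
```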

This was just a quick local test, so it's not supposed to be production work, just round trips for continuity and setup validation.

Very pleased with the 16's performance!


Have you tried the LLaMA-3 model with 70 billion parameters with Ollama? If so, how are your response times? I've tried it on the GPU and it was very slow. The regular LLaMA-3 performs very well. (64 GB RAM, dGPU, 7940)
