[Poll] CPU and GPU combinations

Unfortunately this doesn’t ring true for my field, in my experience (though I accept YMMV). The alternatives you mentioned, and also Intel’s oneAPI, have potential, but adoption is slow. I still regularly use new applications that don’t support any API other than CUDA. And a lot of longer-running projects built on Nvidia’s compiler toolchain have zero incentive to switch to a different solution, because 99% of their users just make do with Nvidia. That’s in large part due to the lack of competition in the pro workstation card space. It might be different in your field, but everywhere I’ve worked I’ve not once seen an AMD Radeon Pro card, whereas I see tonnes of Quadros…

Anyway, didn’t mean to derail the thread talking about compute APIs :laughing:. It would just be a bummer if there were no Nvidia option, because it would make the Framework 16 less suitable for a good deal of professional work (and it sucks that this is the case).

6 Likes

Creators seem best served by CUDA cores. Developers working in machine learning and AI seem best served by an Nvidia GPU and its Tensor cores. Gamers would probably want the upper-tier performance and DLSS capabilities of Nvidia.

Linux users could perhaps have a better experience with AMD, but that doesn’t seem to outweigh the needs of every other potential user.

CPU-wise, I’m not aware of an argument for Intel CPUs on either the high end or power efficiency. That said, ever since experiencing an M1 Max in a MacBook Pro, both Intel and AMD laptops seem behind the curve for machines that aren’t plugged in all the time.

5 Likes

Any names? Or are they internal?

That’s because most places just buy what they’ve always bought; as the saying goes, “if it isn’t broken, don’t fix it”. But that doesn’t mean everyone buys that. There is a big audience on Macs who are locked out of CUDA. And since Nvidia stopped making consumer-facing ARM chips, you are out of luck on any mobile ARM device.

At the end of the day though, a lot of GPU compute is moving to the cloud, since you get far more processing power than on a laptop. That means it matters less which laptop GPU you use.

It isn’t derailing. It is important to know what people need the hardware for, so this discussion is in line with the intent of the thread.

Not better performance, but better image quality. Performance between FSR and DLSS is much the same, according to HUB.

At higher power limits, Intel is actually more power-efficient than AMD.

It really really depends tbqh.

Can we get enough in the way of a GPU to sort of pretend to run Stable Diffusion and LLMs (16 GB of VRAM) at, say… 8 s/gen? Then I want AMD + Nvidia for dual-booting into CUDA.

If we can’t? Then AMD + AMD.
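For anyone curious, here’s a rough sketch of how I’d measure that s/gen figure, assuming PyTorch and the diffusers library (the model ID, prompt, and step count here are just illustrative):

```python
# Rough seconds-per-generation benchmark sketch for Stable Diffusion.
# Model ID, prompt, and step count are illustrative; actual VRAM use
# and timing depend heavily on the card, resolution, and build.
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD checkpoint should work
    torch_dtype=torch.float16,         # roughly halves VRAM vs float32
).to("cuda")  # PyTorch ROCm builds expose AMD cards as "cuda" too

start = time.perf_counter()
image = pipe("a laptop on a desk", num_inference_steps=25).images[0]
print(f"{time.perf_counter() - start:.1f} s/gen")
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
```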

Do you have any source for that? Except for some rare workloads, AMD has both better idle power and better performance per watt, from everything I have seen.

2 Likes

One example is the Body Tracking SDK for Azure Kinect cameras from Microsoft. This camera is one of the standards for many applications (robotics, interactive installations). I am working on a project right now that uses this SDK, and it depends on CUDA and cuDNN for neural network inference. And once you start getting into the weeds with smaller projects and libraries you might need to integrate, AMD compatibility is definitely not a given.

Also, in 3D rendering, some pretty popular GPU-accelerated renderers like Octane and Arnold don’t support OpenCL, only CUDA. I’m not sure about the CAD & BIM space, but I wouldn’t be surprised if there are similar restrictions there.

That’s true; a lot of this stuff can be done online. But there is still some utility in local development and prototyping on a physical GPU.

2 Likes

Looking at the GitHub repo, it says it is supported through ONNX DirectML as of 1.1.0? Windows only, though.
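For context (hedging a bit, since I haven’t dug into that SDK’s internals): in ONNX Runtime generally, choosing DirectML over CUDA comes down to the execution provider list, something like this sketch with a placeholder model path:

```python
# Sketch: selecting the DirectML execution provider in ONNX Runtime.
# "model.onnx" is a placeholder; DirectML requires the
# onnxruntime-directml package and a DX12-capable GPU on Windows.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "DmlExecutionProvider",  # DirectML: vendor-agnostic, Windows-only
        "CPUExecutionProvider",  # fallback if no GPU provider loads
    ],
)
print(session.get_providers())  # shows which providers actually loaded
```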

Arnold does not support anything but CUDA. Octane supports AMD and Intel cards through Metal in OctaneX, which already means they support something other than CUDA. I’d imagine that rather than maintaining two code bases, they’ll eventually unify them and support AMD everywhere.

But I guess at this point it is true they don’t support it.

There is more than some utility in local development. The trend towards the cloud is already being lamented in a number of research circles, as the cost savings simply are not there. Cloud is expensive, in many cases more expensive than on-prem. Add having to rely on a third-party vendor to be responsive and fix things, and costs quickly spiral out of control. I know for a fact that a lot of early adopters are now looking to return to on-prem solutions. That being said, I would never run CUDA on a laptop. Offload to a DGX and be done with it.

AMD CPU and Intel GPU. Stop giving nvidia money.

2 Likes

Intel CPU for me because of compatibility issues with certain USB stuff.

I would prefer an Intel Arc or AMD GPU because I don’t like how Nvidia is acting right now, but whatever I get must work on Linux for me (hardware encoding), I guess.

1 Like

I am very much looking forward to the 16" with an AMD Ryzen 7 7840U APU. Beyond that, I’d prefer an AMD discrete GPU, as it works best with Linux.

I very much hope that Framework integrates a way to easily choose the amount of VRAM allocated to the APU. I hear some laptop manufacturers allow up to 8 GB to be selected, and I sincerely hope we will be able to allocate that much.

Second to that, I hope that system memory can be overclocked past 5600 MHz, as this would significantly boost APU performance.

1 Like

I really hope they don’t put a U-class chip into a 16" machine.

1 Like

An HS part would be nice, but I wouldn’t be unhappy with a U-class one. It would make more sense for the U-class parts to go in the 13" while the 16" gets HS.

2 Likes

CPU doesn’t matter too much, but generally I’d vote AMD GPU for the Linux support. CUDA is nice, but ROCm is catching up, and I’d rather have a GPU that just works, and on Linux that’s not Nvidia. What I do know is that I’m only a buyer if whatever combination you all choose has first-class Linux support.
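To ROCm’s credit, PyTorch’s ROCm builds expose HIP through the familiar torch.cuda namespace, so a lot of CUDA-targeting code runs unmodified on a supported AMD card. A quick sketch for checking which backend you actually got:

```python
# Sketch: detecting whether a PyTorch build is CUDA- or ROCm-backed.
# On ROCm builds torch.version.hip is a version string (None on CUDA
# builds) and the torch.cuda.* APIs still work, which is why much
# CUDA-targeting code runs unmodified on AMD.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"{backend}: {torch.cuda.get_device_name(0)}")
else:
    print("no GPU backend available")
```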

6 Likes

Courtesy of HUB/Techspot

Granted, this is just one data point and it isn’t based on the newest parts, but it does appear that at higher loads Intel is more efficient. So Intel might be better for a workstation like the 16", and AMD might be better for the 13".

Also, this quote, in reference to the 6900HS:

“However it’s not a very efficient part for single-thread; pushing this CPU up to 4.9 GHz is clearly operating it well outside the efficiency window. It not only used more power than the 12700H in this test, it was also slower, which isn’t ideal.”

1 Like

Anything but Nvidia sounds great to me. AMD has great Linux drivers, and Intel is starting to build up their Linux drivers for dedicated GPUs. I don’t want to suffer Nvidia’s third-class treatment.

5 Likes

That is Intel 12th gen vs AMD’s 6000 series, though. What about 13th gen vs the 7000 series? Intel stays on the same node while AMD moves down to a 4–5 nm node.

Yeah, that’s just an effect of having more cores. I see your point there, though AMD did also announce the 7045 series with higher core counts (at the cost of integrated USB4 and a much weaker iGPU), which should help a lot in that regard. I’d personally rather have something monolithic with a beefy iGPU, but that’s personal preference I guess.

8 Zen 4 cores are enough for me for the foreseeable future (hell, I’d have taken the 6-core if it didn’t come with a weaker iGPU), but for hardcore productivity stuff the 16-core may be a better option. Or go full mad and put a socketed X3D chip in there; the performance per watt on those is crazy.

1 Like

Well, if I could find a review that did a similar comparison, I would have shown it.

Edit: Alright, I found this, but I’d like to see more data. Intel at idle is better than AMD, but at load AMD is better, according to Notebookcheck. They tested i9 vs R9 as well; I’d rather see i7 vs R7.

1 Like