Auxiliary Image Computation/Processing

My Lenovo Core i7 is starting to top out for big data (residual brain imaging data), and my access to the supercomputer is drying up (job is changing). My Linux box with a Core i5 is OK for code but not much else. Sooo…I would like to use Framework to build something kind of like a big programmable GPU. You know, like a supercomputer, but smaller and cheaper, a lot cheaper. Say 50-100 x 10^3 CUDA cores, something like that. It's gotta be really, really cheap. It does not have to be fast, but it can't crash, so it needs a LOT of memory. Fast would be nice, but slow is OK; it's not a video game, so the frame rate could be really low. It just has to get all the cores to talk to each other, not drop any of the data, and ultimately output it again after processing. Single precision is livable, but double precision would be better.
It's not for AI, but kind of; it's for large image processing. Well, that's not quite right: it's for processing very large arrays of relatively low-resolution images with high overall dimensionality (4D, 5D, 6D, that kind of thing). It's a lot like machine vision or computer vision work when all is said and done. It can certainly run on CPUs, well, not 1, usually 32, 64 or 128. That's why the CUDA cores are interesting: they are abundant and the problem is relatively parallel. The code would need some major tweaking, but it would be worth it in the long run. With tons of money I could just use a couple of really fantastic graphics cards and/or the new really big chips, but that's not really affordable right now. Something more like 10 or 20 old cards maybe, coupled with a few quad-core or hex-core CPUs. It's pie-in-the-sky stuff right now, but it seems like the kind of thing Framework could possibly enable. I'm looking for the framework, literally, to bring it together. Maybe it's just a GPU server.
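To give a feel for the workload: it's embarrassingly parallel, per-voxel work over a big multidimensional volume. Here's a minimal sketch of the pattern using Python's stdlib `multiprocessing` as a stand-in for the eventual CUDA version; the chunking, the placeholder `process_chunk` op, and the tiny 4D volume are all illustrative, not my actual pipeline.

```python
# Sketch: split a 4D image stack into chunks, process each chunk on a
# separate core, and stitch the results back together without dropping
# any data. process_chunk is a stand-in for the real per-voxel work.
from multiprocessing import Pool

def process_chunk(chunk):
    # Placeholder per-voxel operation (the real thing would be
    # smoothing, statistics, etc.); here every voxel is just doubled.
    return [v * 2.0 for v in chunk]

def run(volume, n_workers=4):
    # Flatten the 4D volume (time x slice x row x voxel) into one list...
    flat = [v for t in volume for z in t for row in z for v in row]
    # ...split it into roughly equal per-worker chunks...
    size = (len(flat) + n_workers - 1) // n_workers
    chunks = [flat[i:i + size] for i in range(0, len(flat), size)]
    with Pool(n_workers) as pool:
        out = pool.map(process_chunk, chunks)
    # ...and reassemble in order, so nothing is lost or shuffled.
    return [v for chunk in out for v in chunk]

if __name__ == "__main__":
    # Tiny 4D "volume": 2 timepoints x 2 slices x 2 rows x 2 voxels.
    vol = [[[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]] * 2
    result = run(vol)
    print(len(result), result[0], result[-1])  # 16 voxels, all doubled
```

Swapping `Pool.map` for a CUDA kernel launch over the flattened array is the "major tweaking" I mean; the shape of the problem stays the same.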

This is quite close to what I am talking about.