Any plans on implementing Unifying SoC in the future?

Unifying memory and storage into the motherboard chipset improves the interconnection between components, helps the CPU communicate faster and increases overall benchmark scores.
I don’t know whether it will be possible in the future, because it obviously goes against upgradability. But the computational gains are drastic, so I thought it was a good idea.

1 Like

That’s the first time I have ever seen anyone state that a single-board PC performs better…

Other than shorter memory traces theoretically allowing for lower latency… I can’t see how it would perform any differently.

2 Likes

I think the problem here is that everyone will be comparing regular computers to Apple ARM SoCs, conveniently forgetting that they are totally and completely different things!

Of course, an ARM SoC can be made to be very performant on paper, as Apple has proven. But that comes at the expense of literally everything else! It’s expensive, impossible to repair, impossible to upgrade, too tightly integrated to sensibly provide enough options for the whole market, and dangerously reliant on a single supplier.

How many people actually need the performance that even the most basic Apple M1 offers? Very, very, very few. Even among Apple users, very few will actually use that machine to its full potential. But Apple have complete control and people will buy them anyway irrespective of the price, so why would Apple care to make a more sensibly priced option that’s better suited to what most people need?

As a bit of a reality check - if I go to the biggest high street computer retailer in my country, and put all laptops in order of price - the first Apple product in the list is number 246 of the 362 models they have for sale. So 68% of their laptop SKUs are cheaper than the cheapest Apple laptop - and I strongly suspect that each one of those SKUs sells a lot more quantity than each of the higher priced SKUs. Not forgetting that a fair amount of those cheaper options are quite powerful machines.

I think a lot of the time, the few people who actually need or want super-powerful machines forget that the vast majority of computer users are just browsing the internet, doing office work, participating in video calls, watching videos and sometimes playing Crossy Road. Those people would be far, far better off with a lower-end machine that can be repaired and upgraded over time than with a super-high-powered machine that costs a fortune and can never be repaired when they accidentally spill a drink on the keyboard or drop it!

Of course there are a few people who do actually need a seriously powerful machine - and maybe for them a SoC might be the answer - but how good is that as a real solution? It still cannot be fixed if it’s accidentally damaged, it still cannot be upgraded without replacing effectively the whole thing. So it’s a very big trade off to make, especially considering that at the moment we don’t have any particularly impartial comparisons of how the two methods really compare. You could use ‘old macs against new’, but for how long had Apple been deliberately crippling Intel macs knowing that they were going to need to show massive performance improvements on M1?

4 Likes

Even then, I’m not sure I believe that an Apple M1 SoC would perform measurably differently with socketed RAM and SSD.

Socketed RAM doesn’t hold desktop systems back, and those run far heavier workloads than most laptops will ever be capable of…

4 Likes

@andyk2 @Anubis My bad. I guess this topic is more about an ARM SoC improvement than a unification improvement. It seems the advantage comes from the ARM architecture rather than from unifying the components, which is really a different topic.

1 Like

I would be really interested to see some genuine, scientifically valid tests of just how much of a difference it makes to have socketed RAM and storage on both ARM and x86-64, so that a proper comparison can be made. I’m sure there is some difference in there; the question is whether it’s a big enough difference, for enough users, to be worth actually doing.

The problem is I don’t think it’s possible, because there isn’t a single OS that is optimised for both architectures and can also be installed on all types of system :frowning: perhaps Linux will get to that point at some point in the indeterminate future…

1 Like

The entire point of Framework is to reduce e-waste. If you need to replace the entire mainboard each time you want to switch out a single component… that defeats the purpose. If you want to maximize performance, another brand would be a better idea, I believe!

1 Like

I’m not sure why the OS matters there…

If you’re comparing ARM modular vs ARM single-board, then the variable you’re testing is the sockets.

Likewise if you were testing x86 modular vs x86 single-board.

It would be pointless comparing x86 modular vs ARM single-board; that introduces two variables, three if you count the binary differences in the OS.

Linux is more than optimised for either ARM or x86, and it wouldn’t matter for benchmarking even if it weren’t, so long as it was consistent enough to allow comparing socketed vs non-socketed.

An educated guess at the amount of potential performance we’d be talking about here would likely be in the realm of low single-digit percentages, at best: the kind of gains seen from tighter RAM timings.
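For anyone curious how you’d actually isolate that latency component, the standard technique is a pointer-chasing micro-benchmark: every load depends on the previous one, so prefetchers can’t hide memory latency. A minimal Python sketch of the idea (interpreter overhead dominates here, so the absolute numbers aren’t meaningful; a real test would do the same thing in C):

```python
import random
import time

N = 1 << 20  # ~1M slots; far larger than a typical CPU cache

# Build a random pointer-chasing chain: each slot stores the index of
# the next slot to visit, so every load depends on the previous one,
# which defeats hardware prefetching and exposes memory latency.
perm = list(range(N))
random.shuffle(perm)
chain = [0] * N
for i in range(N):
    chain[perm[i - 1]] = perm[i]  # link the shuffled slots into one cycle

def chase(chain, steps):
    """Walk the chain for `steps` dependent loads; return ns per access."""
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]
    return (time.perf_counter() - start) / steps * 1e9

print(f"~{chase(chain, N):.1f} ns per dependent access")
```

Running this with the working set sized inside vs outside cache (and on soldered vs socketed machines) is how you’d see whether shorter traces buy anything measurable.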

I honestly don’t think even Craple think that single board designs give them any performance advantage.

1 Like

We are all forgetting one important piece of the puzzle: the software. Apple M1 works well only because the OS (MacOS) is designed and optimised for it; anything else will show its limitations, i.e. the ARM limitations.
No, I’d rather trade off some hardware inefficiency (that can be abundantly compensated and overcome by a finely tuned OS running it) in exchange for flexibility, upgradeability, repairability, (relative) openness and that bit of freedom that a SoC can’t possibly give you.

4 Likes

Instead, I think it’s a marketing tactic to hinder upgradability and charge more for their products. That’s one of the reasons the headphone jack was removed from iPhones: to sell their AirPods.

3 Likes

There’s a project called Asahi Linux that’s trying to port Linux to M1 chips. They’re already about halfway there: everything runs on software rendering without any hiccups, and the graphics drivers are what’s being worked on now.
So I don’t think it’s just the Mac; since both the software and the hardware are provided by Apple, they simply have the edge to tweak the kernel and default configurations for an optimised workflow.

2 Likes

Yeah, I’m aware of that. Unfortunately, and sadly, I don’t think it’s going to be much more than an interesting research effort: there are so many hidden hardware specs that Apple will never release, so many “obscure” parts of the chip; and my guess is that Apple is going to do as much as they can to prevent any other OS from running on their chip, for reasons that I don’t share but can at least understand.
Besides, I praise the effort but I think it’s pointless: why run Linux on Apple hardware when you can get comparable specs on an x86-based computer, and for a far cheaper price too? Why not run MacOS on the only hardware where it runs natively? Furthermore, why not run an OS which you paid for? (I know, MacOS is technically free of charge, but you pay an extra price on the exclusive hardware to run it.)
Again, I praise the challenge of getting Linux up and running on that hardware, I really do, but I’d rather see more effort put into the very promising RISC-V architecture: it’s either going to be another “niche” hardware, or the next “big thing”, possibly larger than ARM. I don’t know yet, but RISC-V seems a better fit for the open-source philosophy that drives GNU and Linux, much more than Apple’s M1 or other upcoming ARM-based chips from Microsoft or HP (if rumors can be trusted).

They have reportedly made it easier for the Asahi developers by doing something that offers no benefit for macOS.

2 Likes

Good to know, but I’d like to point out a few things:

  • First, when something starts with “it looks like”, it means there is no certainty about it; from my perspective, Apple made such changes only to their own advantage, and if it makes things easier for Asahi Linux, that’s only a side effect.
  • As far as I know, they haven’t released any specs, and nothing about their chip is OpenSource; again, they’re not proactively helping anybody and they never will.
  • The whole exercise is still pointless to me: if I ever bought an M1-based Apple device (which will never happen, BTW), I would simply stick to MacOS and run Linux in a VM; apparently the M1 is pretty good when it comes to virtualisation, so why not? Besides, it would offer a layer of separation between me and the not-so-privacy-focused crap running on MacOS.
  • If one is buying an overpriced Apple M1 and plans to ditch MacOS in favour of Linux, what’s the point? Why not buy some cheaper hardware, or spend the same money on something better and much more open?
  • Just like running MacOS on a Hackintosh or in a VM, I strongly maintain that no other OS will ever run to its full potential on an Apple-designed chip: it will “basically” run, it will “mostly work”, but always with some trade-off in performance or functionality, or with some issues.
  • Apple devices are still as closed as they can be, as unrepairable as they can be, and ultimately they are not truly yours: I’m always under the impression that an Apple device belongs to Apple, not to you.
  • Since the M1 chip was announced, I’ve been under the impression that Apple was giving the final push toward clamping down and locking the hardware and software together on their desktops and laptops. I won’t go into too much detail on this because we have already strayed from the topic, but it makes sense for a company like Apple to lock everything in and cut everything else off.

I still think this effort is admirable because of the energy and skill people are putting into it, but it’s ultimately pointless. I’d be glad to be proven wrong about some of the things I said above, and I’d also be glad to hear from people that they have succeeded in running Linux to its full potential on an M1 chip. But I seriously doubt that will ever happen, just as I doubt that Apple will ever make the M1 specs open to the public.

1 Like

It’s at least worth mentioning that Apple basically wrote a blank check for exclusive access to TSMC’s at-the-time bleeding-edge 5nm node.

This is important because, in laptops, performance-per-watt is everything, and a smaller node means less heat output, which means you can either use fatter cores with greater performance-per-clock (the route Apple went) or simply run the cores at higher clock speeds (the route Ryzen 6000 is taking vs Ryzen 5000).
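The clocks-vs-fat-cores trade-off above follows from the classic CMOS dynamic power relation, P ≈ C·V²·f: pushing frequency higher usually also requires raising voltage, so power grows much faster than performance. A toy illustration (the capacitance, voltage and frequency numbers below are made up purely for the arithmetic, not real chip figures):

```python
def dynamic_power(c_eff, volts, freq_ghz):
    """Classic CMOS dynamic power model: P = C * V^2 * f."""
    return c_eff * volts**2 * freq_ghz

# Illustrative numbers only: raising frequency typically also requires
# raising voltage, so power grows faster than linearly with clock speed.
low = dynamic_power(1.0, 0.8, 2.0)   # wide core at a modest clock
high = dynamic_power(1.0, 1.1, 3.5)  # the same core pushed harder

print(f"{high / low:.2f}x the power for {3.5 / 2.0:.2f}x the frequency")
# → 3.31x the power for 1.75x the frequency
```

That quadratic voltage term is why wide, slow-clocked cores (M1) can beat narrow, fast-clocked ones on performance-per-watt.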

It’s also worth noting that their use of big.LITTLE, much like on Intel 12th gen, means that they don’t really need as many performance cores, and Apple’s use of only 4 performance cores by definition is going to further reduce heat output and therefore further allow the thermal headroom for the M1’s “fat” cores.

…which becomes ever more evident on the likes of the M1 Max where, with all of those performance cores, the battery life really takes a hit.

1 Like

That makes a lot of sense, and I agree with pretty much everything you said. But, at the same time, I’d like to add that the CPU doesn’t work in isolation: the rest of the hardware needs to be picked and optimised to work harmoniously with it, particularly the RAM. That’s why LPDDR, or any “soldered” RAM, is much faster (in terms of latency, at least) and more efficient than a regular DDR module, although this hinders upgradeability and repairability (a “pleasant side effect” for companies like Apple or Dell). And if you can write and optimise an OS for a single hardware spec, you are clearly at an advantage when you “compare” the whole package: MacOS is written only for Apple hardware and can talk to it directly through a much simplified HAL, whereas Windows and Linux have to deal with an incredible variety of configurations, with layers of abstraction between the software and the hardware that hinder performance and efficiency. A power user can, if they want, compile the Linux kernel for their specific hardware with only selected drivers and firmware (and it makes a big difference!), but with Windows that’s simply impossible. That’s why I think any comparison between Apple and the rest of the x86 world doesn’t make much sense: it would be like comparing a Formula 1 car with a WEC Hypercar.
Anyway, my final thought is that an integrated architecture like Apple’s has to be much more efficient and performant than the heterogeneous x86 world, otherwise it isn’t worth the effort (or the price); however, I’m not willing to pay the price for such “efficiency”, because in Apple’s world the user is not in control of anything. And, sticking to the car analogy, I’d rather drive a GT3 car which I can customise to my needs, even though it’s bulkier, slower and less fuel-efficient than a Hypercar… but it’s far cheaper!