Weird dGPU request

@nrp, pretty simple request: can you say definitively that any dGPU model will include a MUX switch?

For those unaware, a MUX switch allows a dGPU to route its video signal directly to the display instead of first routing it through the iGPU, which improves GPU performance by a significant amount.
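A rough back-of-the-envelope sketch of why that is (the panel resolution, refresh rate, and pixel format below are my own illustrative assumptions, not any specific laptop's specs): without a MUX, every finished frame has to be copied from the dGPU into the iGPU's framebuffer over PCIe.

```python
# Toy calculation (illustrative assumptions only): PCIe traffic spent
# copying finished frames from the dGPU to the iGPU when there is no MUX.

WIDTH, HEIGHT = 2560, 1600  # assumed panel resolution
BYTES_PER_PIXEL = 4         # assumed 8-bit RGBA framebuffer
FPS = 165                   # assumed refresh/frame rate

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
copy_gbit_per_s = frame_bytes * FPS * 8 / 1e9

print(f"per-frame copy: {frame_bytes / 1e6:.1f} MB")
print(f"copy traffic:   {copy_gbit_per_s:.1f} Gbit/s over PCIe")
# With a MUX, the dGPU drives the panel directly and this copy traffic
# (plus the latency of the extra hop) disappears.
```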

A good explanation by JarrodsTech

4 Likes

I would also like to see this happen. It’s just that if they were to make this a thing, it would likely make the footprint of the device bigger, which in turn would force them to design and manufacture an all-new chassis. There might be other ways to do such a thing, but expect it to be a while before we actually see anything like that.

Till then, though, we are stuck with eGPUs via the ‘Thunderbolt’ ports

1 Like

No.
Because they can also just pass the dGPU data through the iGPU (which of course results in speed penalties, but is supported), since a mux is about $100 to implement (excuse me, wtf)
yeah.

2 Likes

I understand the interest, but there isn’t even a definitive statement that a dGPU will be a thing. Are we doing wishes for wishes now?

I’m sure they’ll be relatively open about this stuff. And if they say yes to the dGPU and no to the MUX switch, I guarantee that they will still sell like hotcakes.

1 Like

Hardly that. If they don’t do a dGPU model, that would ultimately be fine with me; that’s a feature I’m not married to

I just want to know that if they do indeed make a dGPU model it will include a MUX switch

And stating that they can put out any model and it will sell like hotcakes is just, well… reprehensible is a bit strong tbh, but it’s the right general sentiment

I expect better of Framework than I do of Dell or HP, and I would hope you do too

1 Like

I actually didn’t know that; that’s pretty expensive for a chip with a relatively simple task

Although it’s arguable that the customers who care about such things (gamers or professionals) tend to be willing to shell out for top-tier performance

3 Likes

I do, and the discussion around the value of MUX switches is relatively new and rather exciting. I’ve been seeing more reviewers discuss it, including Linus@LTT, who’s personally invested in the company now.

2 Likes

We’ve seen Framework be relatively open with the community in terms of both feature requests and updates. I’m sure they will be looking at the costs/benefits of a mux solution as they develop a dGPU-enabled mainboard (if they do).

1 Like

I assume you want discrete graphics.
The only problems, really, will be size (and potentially added cost) and heat, which in turn lead to more problems with cost and cooling.

The only problem with a mux on graphics will be, well, board complexity, which leads to (potential) size problems and cost.
You can’t make the mux optional, unless users don’t want discrete graphics at all.

2 Likes

More relevant now, with the Laptop 16 having a custom expansion bay module

To be fair, the only thing I expect from them is to make things repairable. They’re a startup, not a company with a deep well of engineering talent.

That works for me if it is clearly disclaimed at purchase that there won’t be any kind of support, as Pine64 does. And I expect a purchase price to match those expectations. When you sell a $1K+ laptop, my expectations change. I’ve stated my grievances plenty of times elsewhere and don’t wish to derail this thread, so I’ll leave it at that.

I am still not 100% sure if there is a mux, but (judging by the connector pinout) if there were, it would be on the mainboard, so any expansion bay module could use it. The expansion bay definitely does have access to the internal display, though, and I am not sure how you would do that without a mux; but my knowledge isn’t exhaustive, so there may be other ways.

Where the hell are you pulling that number from?
DP mux chips are expensive, but nowhere near that expensive (like sub-$10 in bulk, which is a lot for a component but still an order of magnitude off). Not even PCIe switches are that expensive, and those are some of the most expensive mainboard components I know of.

Unless you mean some “Nvidia slapped an FPGA on a thing and charges everyone $100+ for it” type BS, like they did with G-Sync.

Ok, yeah, no, it’s not a $100 chip.
However, you do need the peripherals (power delivery, signal lanes, board space, etc.) and the increased complexity they bring to the board (or boards).

It’s like how the most expensive Thunderbolt to PCIe bridge chip is like $20, but the cheapest “product” is at least $120.
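To illustrate (every number below is a made-up assumption for the sake of argument, not a real BOM), the chip price is only one line item once you add the board, enclosure, assembly, and margins:

```python
# Illustrative-only sketch of why a ~$20 bridge chip becomes a ~$120
# product. All figures here are assumptions, not real cost data.

bom = {
    "bridge chip":         20.0,
    "PCB + passives":      15.0,
    "enclosure + cables":  15.0,
    "assembly + test":     10.0,
}
bom_total = sum(bom.values())  # $60 in this made-up example

margin = 0.5  # assumed combined vendor + retail margin
retail = bom_total / (1 - margin)

print(f"BOM total: ${bom_total:.0f}")
print(f"Retail at {margin:.0%} margin: ${retail:.0f}")  # -> $120
```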

It might be an estimate from somewhere, calculated by comparing similarly specced laptops with and without a mux. Don’t take it seriously.

You can wire the expansion bay GPU to the internal iGPU and let the iGPU pass the display data through to the main display, rather than wiring both to the display panel via a mux.
Because of this, you will lose some bandwidth/performance.

I would imagine that a mux for redirecting the dGPU signal to the USB4/Thunderbolt ports is unlikely, since those are (most likely) wired through PCIe. But what you can do is have dedicated GPU-attached ports (much like on Alienware and other machines) for the display outputs.

Those $120 options mostly use reference boards that cost a lot more than just the $20 chip, but all in all I don’t disagree with your point there.

Sure, you can just route the image through the PCIe link, which in fairness works pretty well these days (even on Linux, as long as you don’t use Nvidia). But what I mean is that there is an actual eDP channel marked for the internal display on the expansion bay connector, and I am still not entirely sure how you’d route multiple eDP signals to a single display without a mux.
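For anyone who wants to poke at the PCIe-routed path on their own machine, here’s a minimal sketch (assumes Linux with Mesa drivers and `glxinfo` from mesa-utils installed; the renderer strings will differ per system) comparing the default renderer with PRIME render offload:

```python
# Minimal sketch: compare the default OpenGL renderer with the one
# selected by Mesa's DRI_PRIME render-offload switch.
import os
import subprocess

def renderer(extra_env=None):
    env = os.environ.copy()
    env.update(extra_env or {})
    out = subprocess.run(["glxinfo", "-B"], env=env, capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

print("default:    ", renderer())                    # usually the iGPU
print("DRI_PRIME=1:", renderer({"DRI_PRIME": "1"}))  # offloaded to the dGPU
```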

Yeah, that’s probably not a thing; the USB4 controller and the DP link for it are all inside the SoC. This isn’t like back in the day, when you had a dedicated TB chip into which you had to pipe PCIe and DisplayPort and all that.

Doesn’t the official Framework 16 GPU do just that?

1 Like

It does do just that.

huh. Interesting.

This is probably so that the GPU doesn’t have to “back-feed” the display output through the PCIe link, which adds complexity. It just goes out via eDP, like a traditional desktop card does.

Perhaps… hm.
Then it probably has a mux?
I don’t think you can “inject” an eDP signal into an Intel iGPU. Unless, of course, you actually can; we probably won’t know that. But it might be that Framework incorporated a mux into the design, which would be nice.

eDP and DP are not that different; eDP just has somewhat relaxed requirements, such as allowing fewer than the normal 4 lanes, and a few protocol differences (like power-saving panel self-refresh and things like that). If you can handle powering the screen (and some of the enable signals and such), you can straight up wire a normal DisplayPort cable to an eDP screen (no active adapter needed).

The pinout does call it “DisplayPort interface for internal display”, so it might just be straight DP. On the other hand, there is a backlight-enable pin right next to it, which makes it look kinda eDP-like to me; but as I said, the line is quite blurred and at this point probably mostly semantic. You can convert between the two with mostly passive adapters (for eDP to DP you basically just need to add a connector and ignore the power bits, and for DP to eDP you just need the circuitry to handle powering and controlling the display, and to wire up the data in the right place), and kinda by definition an embedded DP link would be eDP XD.

There being a mux is my current theory, but so far there has been no confirmation. I don’t know of a practical way to pipe a DP signal into a modern CPU (apart from capture cards, but that’s not what we are talking about here), and it would not make a lot of sense in the first place; if you want to do that, PRIME works just fine, just give it more PCIe lanes if you are bandwidth-limited.

1 Like