ReBAR Support

So I found this post when searching the forums, but I wanted to bring more attention to ReBAR. I intend to use an Intel Arc eGPU setup and would like to see an option in the BIOS to toggle ReBAR. Given that there isn’t a way to toggle ReBAR on or off, I’m still unclear whether the laptop even supports the feature, regardless of what that post says.

3 Likes

I’m in the same boat as you right now. I also intend to use an Intel Arc eGPU on my Framework 11th gen, and I’ve already bought a Mantiz Saturn Pro; I’m just waiting for the Arc A770 to drop. I didn’t know that ReBAR was so important for Arc GPUs, but I guess that since it’s enabled by default on Linux, it’s at least possible on the Framework Laptop.

So I’ve been looking into getting an eGPU for my 11th gen Framework laptop. (And yes, I know I’ll run into CPU bottlenecks and Thunderbolt bottlenecks. I’m not here for you to tell me to just build a whole system; that’s not viable for me, this is what I’m going with.)

The important bit is that I was looking at which GPU to buy. My budget is pretty limited and I don’t need a ton of graphical horsepower, so I started looking at Intel Arc GPUs, because they’re supposedly going to be a very good budget option given how Intel is pricing them.

The hiccup I’ve seen in every review (and the performance embargo lifts tomorrow, so that may change things) is the need for Resizable BAR. It’s an option available on desktop motherboards that changes how the CPU and GPU communicate (it lets the CPU address more than a small fixed window of the GPU’s memory at once, in my understanding). I was curious whether the Framework would allow for that, as it really isn’t intended for a discrete GPU, and I couldn’t find it in the BIOS when I looked.

I had the thought that it might be handled by the Thunderbolt part of the eGPU enclosure, as that’s what does the communicating. If that’s the case, should I be looking for an eGPU enclosure that specifies that?

Thanks for any help or info; I don’t know much about compatibility for this kind of thing.

It looks like there’s no problem piping ReBAR over Thunderbolt, and it has been done on desktops, but it needs to be enabled in the BIOS/UEFI firmware, and most laptops don’t expose that setting. Whether it works on a Framework depends on the defaults Framework chooses and whether they add an option to support it. I don’t think it would be enabled by default, and only a Framework dev could tell us whether it’s even there.

If you are not after the ultimate in GPU horsepower, then why not pick up a two- or three-year-old Quadro or FirePro off eBay? At least it will ‘work’.

@Jason_Dagless because I want an AV1 encoder

Tough call then. Can’t the iGPU do that for you?

No GPU on the market has an AV1 encoder except Intel’s cards. The new generations coming from AMD and Nvidia will include one, but I can’t pay over $900 just for this feature. Intel has a price that works for me. I’m willing to beta-test the drivers because, ultimately, the games I play run well enough on an iGPU that performance metrics are meaningless to me.
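To give a concrete idea of what I’m after, this is roughly the command I’d want to run for hardware AV1 encoding on Linux (just a sketch, assuming a recent ffmpeg built with VAAPI and a media driver that exposes AV1 encode on Arc; the render node path, filenames, and bitrate are only examples):

```bash
# Hardware AV1 encode on the Arc card via VAAPI (render node path may differ on your system)
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
       -vf 'format=nv12,hwupload' -c:v av1_vaapi -b:v 6M output.mkv
```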

Arc reviews are out today, and I don’t care how negative they are; I’ll still buy it.

2 Likes

Just wondering, since I have an A380: it uses PCIe Gen 4 x8, but the current eGPU enclosures are all Gen 3, right? Would the Thunderbolt connection bottleneck first instead?

@Jieren_Zheng Doubtful. The reason the 6500 XT and 6400 are so bad is that they use an x4 link, so limiting them to Gen 3 is a serious kneecap. x8 Gen 3 is still plenty of bandwidth for these weaker GPUs. Bandwidth only becomes a concern once you go up to 3070/6800 tiers of performance.

I was thinking ReBAR allows more data to be transferred, but I’m not sure whether we’d hit the limit of the Thunderbolt link first, especially for modelling and rendering use cases.

There’s less scheduling overhead involved with ReBAR. PCI Express takes significant time to set up a transfer from system RAM to the device. I think the rule of thumb from the early days of CUDA was that if your calculation takes less than a second, it’s faster to do it on the CPU than to send the data to the GPU, set up the CUDA processing, and receive the result. The standard BAR for a PCIe graphics card is 256 MB - that’s the most of the card’s memory the CPU can address at once - so you need a lot of transfers to fill 8 GB of video memory. Resize the BAR, and that’s fewer transfers.
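If you want to see this for yourself on Linux, lspci will show the stock window and whether the card advertises the resizing capability at all (a sketch; the 03:00.0 address is just an example, use whatever address your card shows up at):

```bash
# Find the GPU's PCI address first
lspci | grep -Ei 'vga|display|3d'

# Dump its memory regions and capabilities; a card running with the default BAR
# typically shows a [size=256M] region, plus a "Physical Resizable BAR" capability
# if it supports resizing at all.
sudo lspci -vv -s 03:00.0 | grep -E 'Region|Resizable'
```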

Intel is used to dealing with drivers where system RAM and VRAM are one and the same: other than a small framebuffer, the graphics driver just accesses shared system RAM directly. I wouldn’t be surprised at all if there are tricks for efficient graphics data transfer over PCIe that the Arc team hasn’t figured out yet but that are old hat to AMD and Nvidia.

Right, yes, the point: if more efficient transfers help the GPU, they can probably help the resulting performance even if Thunderbolt drags down the maximum. But of course, real-world benchmarks are needed to confirm.

1 Like

I mean, an eGPU is inherently limited to Gen 3 x4, no? That 40 Gbps is the full bandwidth of Thunderbolt 3/4, so it’s the same bottleneck.

Frankly, I’m still curious about using Arc as an eGPU, but I’d absolutely expect the Thunderbolt link to be the bottleneck. The question to me is whether Arc’s heavy reliance on ReBAR is a detriment or a benefit in bandwidth-limited (eGPU) setups. And, of course, whether ReBAR can be enabled on the laptop at all.

1 Like

Theoretically that’s true. Let me share some very useful graphs, courtesy of TechPowerUp and TechSpot/Hardware Unboxed.

[Image: TechPowerUp PCIe scaling chart for the RTX 3080]

[Image: TechSpot/Hardware Unboxed PCIe 4.0 vs. 3.0 performance chart for the RX 6500 XT]

The first graph shows PCIe scaling behavior for a 3080. Note that x16 Gen 1 (the same bandwidth as x4 Gen 3) delivers around 87% of expected performance at 1080p and 90% at 4K; the performance penalty shrinks as resolution increases.


The second shows average performance for the 6500 XT: the penalty for going from Gen 4 to Gen 3 is roughly 25-30% on average. The drop in bandwidth hurts the 6500 XT far more than it hurts the 3080, and for higher-tier cards the penalty can be mitigated by increasing resolution. Don’t ask me why.

Granted, neither of these graphs accounts for Thunderbolt overhead, but I don’t expect that to change the overall conclusion.
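For anyone who wants the rough numbers behind the “x16 Gen 1 is about the same as x4 Gen 3” comparison, here’s the back-of-envelope math (per-lane figures are theoretical maxima after encoding overhead; real throughput is a bit lower, and the TB3 PCIe tunnel is reportedly capped around 22 Gbps on top of that):

```bash
# Approximate per-lane throughput in GB/s:
#   Gen 1: 2.5 GT/s with 8b/10b encoding   -> ~0.25  GB/s per lane
#   Gen 3: 8 GT/s with 128b/130b encoding  -> ~0.985 GB/s per lane
#   Gen 4: 16 GT/s with 128b/130b encoding -> ~1.97  GB/s per lane
echo "Gen 1 x16: $(echo '0.25 * 16' | bc) GB/s"   # ~4.0 GB/s
echo "Gen 3 x4:  $(echo '0.985 * 4' | bc) GB/s"   # ~3.9 GB/s
echo "Gen 4 x16: $(echo '1.97 * 16' | bc) GB/s"   # ~31.5 GB/s
```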

See link in first post.

Hi, jumping in with my very first post here since I just played around with an Arc & eGPU enclosure on my new 12th gen Framework :smiley:

So I’ve tried a Sonnet Breakaway Box 750ex on my Framework. Works like a charm!
With an old Nvidia 1060 I didn’t even notice any difference compared to the desktop.
Then I also tested an Intel Arc A770, and the performance difference between using it in a desktop (with ReBAR enabled) and using it as an eGPU is massive in a few games. (As a side note, for some games the Arc isn’t quite ready yet, unfortunately.)
I’m really curious how much of that difference comes from ReBAR and how much is simply the limited TB3 bandwidth, but I do believe ReBAR can be a significant factor here. For some games there was barely any difference between eGPU and desktop. For others the Intel card easily outperformed the Nvidia card on the desktop but failed miserably as an eGPU, and in those cases I wouldn’t be surprised if a ReBAR BIOS update could solve it.

So yeah, I’m joining in here: I’d really love to see a ReBAR-enabling BIOS update for the Framework.

3 Likes

@WolfgangBlub How did you get the A770 to work? I own one, but it doesn’t currently work. Granted, I haven’t tried very hard to make it work; I’m content to wait for updates to enable support, but I’d like to know what your current config is. OS? Kernel? KDE or GNOME? Mesa revision? That sort of thing, so I can compare it to my own setup.

@GhostLegion I’m using Arch with current packages and kernel and had to pass the i915.force_probe=<id> kernel command-line parameter (the ArchWiki has a section about it at the bottom of the “Intel Graphics” article, with a shell command to get the ID and also an xorg.conf snippet, which I did not need). (I’m using i3-wm with X11. I also have a sway setup but haven’t tested that yet.)
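For reference, the rough steps were roughly this (assuming GRUB; adapt for your bootloader, and use whatever device ID lspci reports for your card):

```bash
# Get the PCI device ID of the Arc card (the value after 8086: in the brackets)
lspci -nn | grep -Ei 'vga|display'

# Add the parameter to the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="... i915.force_probe=<id>"
# then regenerate the GRUB config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg
```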

Other than that, when running it as an eGPU I had to pass DRI_PRIME=1. For some reason Intel + Intel is a funny combination ;-). With the Nvidia card as the eGPU, both directions worked: programs could render on the Intel iGPU and show up on a screen connected to the dedicated Nvidia card. With Intel + Intel this apparently isn’t implemented yet, so some things either fall back to software rendering or crash if you don’t pass environment variables telling them to use the dedicated card explicitly. E.g. Firefox was unusably slow on the 4K screen without DRI_PRIME=1; with that set, everything ran smoothly.
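In case it helps anyone reproduce this, the quick way to check which GPU actually ends up rendering (glxinfo comes from the Mesa demos/utils package):

```bash
# Default (iGPU) vs. offloaded to the eGPU
glxinfo -B | grep -i 'renderer'
DRI_PRIME=1 glxinfo -B | grep -i 'renderer'

# Launching an application on the dedicated card explicitly
DRI_PRIME=1 firefox
```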

Next on my to-do list, when I have the time, is to put the Arc back in the desktop and disable ReBAR; that should tell me a bit more about whether a ReBAR update (if there’s ever such a thing) on the FW would help.

2 Likes

On my 12th gen running Windows 11, with an A770 in a Sonnet 750ex, GPU-Z says Resizable BAR is enabled, but the performance is so bad I’m pretty sure that isn’t true. Do we know for sure one way or the other whether the Framework laptop supports ReBAR, or whether something else is going on? Any tips would be appreciated.

1 Like

The Thunderbolt link gives you only four PCIe Gen 3 lanes, right? I’m not sure whether that is what’s causing the bottleneck, but considering the A770 natively uses 16 PCIe Gen 4 lanes, and Gen 4 has twice the bandwidth of Gen 3 per lane, that’s a big cut in bandwidth.
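If you can boot a Linux live USB, lspci will show both what the card is capable of and what it actually negotiated through the Thunderbolt tunnel (just a sketch; substitute your card’s address, which you can get from lspci | grep -i vga):

```bash
# LnkCap = what the card supports (e.g. 16 GT/s, Width x16)
# LnkSta = what was actually negotiated over the TB tunnel (typically 8 GT/s, Width x4)
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
```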