Gamework (community benchmarks)

Decided to run some 3DMark benchmarks.
Fire Strike: Intel Iris Xe Graphics G7 96EU video card benchmark result - Intel Core i7-1165G7 Processor, Framework FRANBMCP0B
Time Spy: Intel Iris Xe Graphics G7 96EU video card benchmark result - Intel Core i7-1165G7 Processor, Framework FRANBMCP0B
Haven’t really encountered anything else that makes sense for the 11th gen hardware.

Edit: Realized I was in balanced mode, changed to full performance for the benchmark.

Post 1
Edited: 3-6-23 [Condensed into one Post]

Game: Uncharted: Legacy of Thieves Collection
Publisher: PlayStation
FPS: 30
Settings: Low, DSS on
Resolution: 1280 x 720p
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Game is just crashing entirely on me now, I can’t believe how buggy this game is…

Game: Marvel’s Spider-Man Remastered
FPS: 30 (1280 x 720p), 60(800 x 600p)
Settings: Low
Resolution: 1280 x 720p, 800 x 600p
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Night and day compared to the Uncharted Collection, no crashing at all! It will make the fan work, but it's actually playable!

Game: Humankind
Publisher: Sega
FPS: 30(Early Game)
Settings: Low
Resolution: 1280 x 720p
Framework Specs: i5-1135g7, 16GB Crucial 3200

Game: Hollow Knight
Publisher: Team Cherry
FPS: 60
Settings: Low
Resolution: 1280 x 720p
Framework Specs: i5-1135g7, 16GB Crucial 3200

Game: Ori and the Blind Forest
Publisher: Xbox
FPS: 60
Settings: Low
Resolution: 1280 x 720p
Framework Specs: i5-1135g7, 16GB Crucial 3200

Game: Ori and the Will of the Wisps
Publisher: Xbox
FPS: 60
Settings: Low
Resolution: 1280 x 720p
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Sharpness low

Game: Project Highrise
Publisher: Kalypso Media Group
FPS: 60
Settings: Best
Resolution: 2256 x 1504
Framework Specs: i5-1135g7, 16GB Crucial 3200

Game: Need for Speed Heat
Publisher: EA
FPS: 20
Settings: Low
Resolution: 1024 x 768
Framework Specs: i5-1135g7, 16GB Crucial 3200

Game: Star Wars Battlefront 2
Publisher: EA
FPS: 35
Settings: Low
Resolution: 1024 x 768
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Stuttering FPS

Game: Star Wars Jedi: Fallen Order
Publisher: EA
FPS: 30
Settings: Medium(Lowest it can go)
Resolution: 1024 x 768
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Stuttering FPS

Intrigued that nobody has listed benchmarks for World of Tanks or War Thunder yet.
@SlashFuture BBCode Intro for you.

@Xavier_Jiang Funny thing is all I did was copy from my notepad, and I didn’t realize that it made most of the text bold…

I know this thread is basically dead, but I’m still having fun.

Running Win 11 now with the latest Xe driver as of April 2023.

Snow Runner

  • 1440x900 (Closest to 3:2 that made sense) at everything low/off
    • Min: 25
    • Max: 47
    • Avg: 42

Suggest running this with the full performance mode profile and with the framerate capped at 30FPS for the most consistent experience. More assets on screen cut hard into performance.


i7 - 1165G7 + 16GB dual-channel, Win 11

DOOM (2016) - Vulkan

  • 2256x1504 @ 80% resolution scale, settings off/low
    – 35-45 FPS
    – Very choppy experience, frame times are HIGH
  • 1920x1080 @ 100% resolution scale, settings off/low
    – 55-62 FPS (Vsync locked)
    – Still choppy, but frame times were a bit more consistent.

Additional Notes: The CPU load was never an issue based on the in-game metrics, but the GPU load was almost always off the charts.
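For context on numbers like these, it can help to translate FPS into a frame-time budget and a resolution-scale slider into actual rendered pixels. A minimal sketch (the resolutions are the ones from the runs above; the rounding is an assumption, since each game rounds its render target its own way):

```python
# Rough helpers for reading benchmark numbers: FPS <-> frame time,
# and what a resolution-scale slider actually renders.

def frame_time_ms(fps: float) -> float:
    """Average frame time in milliseconds for a given FPS."""
    return 1000.0 / fps

def scaled_resolution(width: int, height: int, scale: float) -> tuple[int, int]:
    """Resolution scale is usually applied per axis, so an 80% scale
    renders only 0.8 * 0.8 = 64% of the native pixels."""
    return round(width * scale), round(height * scale)

# Applied to the DOOM runs above:
print(frame_time_ms(35))                    # ~28.6 ms per frame at 35 FPS
print(frame_time_ms(62))                    # ~16.1 ms per frame at 62 FPS
print(scaled_resolution(2256, 1504, 0.8))   # ~1805x1203 actually rendered
```

This is why "35-45 FPS" can still feel choppy: average FPS hides the spread of individual frame times, and a few 40ms+ frames in a 28ms-average stream read as stutter.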

Game: Star Wars Jedi: Survivor
Publisher: EA
Settings: Low
Resolution: 1280x720p
Framework Specs: i5-1135g7, 16GB Crucial 3200
Notes: Crashes upon Startup

Also, I have to eat some crow: Uncharted’s issues WERE in fact with the Intel GPU, as trying it out with a different GPU let me get past Chapter 2 so…
Yeah, the Intel iGPU is good for some things, but I’m REALLY looking forward to the AMD mainboard when it releases…

wow, didn't expect people to still be here lol. i haven't really done much gaming on my framework, never really had the performance i wanted (but that's ok).

i'm thinking of getting the R7 + 780m mainboard eventually. would be very nice to test its gaming performance, as the 780m is supposedly able to outperform a gtx1650, and the 7840u looks like it's going to be MORE powerful than my desktop cpu (3700x).

obviously this isn't a gaming laptop, and even gaming laptops can be hard to compare to desktops. even if this new ryzen IS faster on paper, i'm not sure if it can actually beat the real world performance of an overclocked water cooled desktop cpu.

regardless, should be very fun to see. with all the new boards and the 16 inch on the horizon, maybe this thread will be more interesting!


i7 - 1165G7 + 16GB dual-channel, Win 11

The Ascent

  • 1400x1050 @ 100% Resolution scale off/Low settings, CPU performance mode active, DX12
    – 30-70 FPS
    – Very dependent on what is going on on-screen.

Additional Notes: Appears to be an issue with the iGPU/driver; you may need to cycle through a few resolutions on game startup to remove a very intense bloom effect that washes everything out. 1400x1050 leaves everything visible on the screen and stretches well to fill most of it. Overall a pretty playable experience.

Non-related thoughts: with a recent increase in the use of this platform for light gaming on the go, I will probably be picking up one of the AMD boards to replace my 11th gen.

Any AMD Benchmarks?

I was wondering the same. I've seen some videos crop up putting the R5 against the R7, but in one of those handheld form factors (GPD Win). The FPS difference between the two seemed very close, and with an assumed jump for the hardware over my 11th gen, I'm now just looking around to see if the R7 makes sense over the R5. Hoping that some of the AMD board owners in the wild could get some gaming benchmarks up for comparison between the two.

The price difference is pretty significant if all it buys is an average of, say, 5FPS, especially if like myself you're gonna get a large uplift anyway over the 11th gen. Given the form factors in the reviews I have seen, it could also be that the R7 is just hitting thermal limits when it is in these smaller configs, and how does that translate to a larger form factor like the 13…

The comparison benchmarks I've seen floating around:

TLDR; AMD 13 owners, please break the silence!

Here’s some anecdata!

Game: Halo Infinite
FPS: 60,
Settings: all low, with dynamic resolution
Resolution: 3440x1440
Framework Specs: AMD 7840U, 2x32GB @ 5600MT/s
Notes: Fedora and Windows

Game: Ghostrunner 1 full play through (tried a little bit of Ghostrunner 2, seemed similar)
FPS: ~40 low, average around 60FPS,
Settings: IIRC medium?, with AMD FSR
Resolution: 3440x1440
Framework Specs: AMD 7840U, 2x32GB @ 5600MT/s
Notes: I played through the entire game on Fedora/Sway. If I changed the graphics to low, I’d get ~70-90 FPS.

Game: Switch emulation on Yuzu and Ryujinx (Zelda BoTW and Super Smash Bros Ultimate)
FPS: 30 (I haven’t tried 60FPS mods)
Settings: -
Resolution: -
Framework Specs: AMD 7840U, 2x32GB @ 5600MT/s
Notes: Fedora. No frame dips on Yuzu (some on Ryujinx).

Most if not all games are playable imo, whereas before on my i7-1165G7 iGPU they were not.

More details below:

Performance was similar on both Fedora and Windows. It looks like it may be possible to squeeze out more performance, but I’m over the moon with how an “ultraportable” like this performs (reminds me of my old Alienware m11x, but vastly better in every way except no RGB :upside_down_face:), especially since these games wouldn’t really be playable on my Framework 11th gen i7-1165G7 Intel Xe. I think I could play basically any game with the AMD 7840U decently on the go. Though now it also seems possible with the Intel 14th gen/Meteor Lake iGPU!

I don’t really game much nowadays and am comparing with my dying decade old overclocked Intel i5-2500K/Nvidia GTX 970.

Heck, Halo Infinite was running better on my 7840U than on that. Granted, it wasn’t optimized and had issues on the 970 around the November 2021 release date, and I think the i5-2500K was the bottleneck. The 7840U CPU is way faster.

From what I’ve researched, performance of the 780M is about at the level of a GTX 1050 Ti to GTX 1060. Compared to my desktop GTX 970 (with the possibly bottlenecking i5-2500K), IIRC Hitman 2016 at 3440x1440 would get ~60FPS, whereas on the AMD 7840U it’d get around 30FPS. However, turning on FSR or the equivalent makes it (and other games) perfectly playable at 60FPS+, imo.
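On the FSR point: FSR's quality presets render internally at a fixed fraction of the output resolution and upscale. A rough sketch using the commonly cited FSR 2 per-axis scale factors applied to the 3440x1440 display mentioned above (exact factors and rounding are up to each game, so treat these as estimates, not measurements):

```python
# Estimate FSR 2 internal render resolutions for a given output resolution.
# Scale factors are the commonly cited per-axis divisors; actual games may differ.

FSR2_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def fsr_render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    factor = FSR2_MODES[mode]
    return int(out_w / factor), int(out_h / factor)

for mode in FSR2_MODES:
    print(mode, fsr_render_resolution(3440, 1440, mode))
# Quality mode renders roughly 2293x960 and upscales to 3440x1440,
# i.e. less than half the native pixel count (1 / 1.5^2 ≈ 44%),
# which is why it can take a ~30FPS native result to 60FPS+ territory.
```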

Here’s an excellent thread full of gaming/GPU benchmarks (I was getting similar performance):

Regarding temperatures, per that same thread post:


I took the plunge, and snagged one of the AMD 13 R5 boards, here’s a collection of benchmarks from it.

Framework Specs: AMD 7640U, 2x16GB @ 5600MT/s CL40, Gen4 NVME, v3.03 UEFI.

Notes: Windows 11 Pro x64, mainline AMD driver (24.1.1, Hyper-RX enabled globally), plugged in (official 60W), Best Performance power mode, iGPU Game mode set in UEFI (using this mode, some games reported greater than 16GB of VRAM; most reported ~4GB). I expect that, like the i7 11th gen, some of these scores would be moderately improved by swapping the paste and pads; due to an issue with one of my ports, I am holding off on that for now to see about getting it replaced. At a high level, Hyper-RX renders the game at a set resolution and then scales it up to the monitor resolution, making use of some frame generation technologies. Based on my experience with it thus far, I'd suggest using it for most scenarios.
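The Hyper-RX summary above can be sketched as simple arithmetic. This is a back-of-envelope model, not a measurement: the upscaling piece renders fewer pixels, and the frame generation piece (AFMF) inserts one interpolated frame per rendered frame, so presented FPS is roughly double rendered FPS while input latency still tracks the rendered rate. The example resolutions are the 1920x1080-to-2256x1504 case used in several runs below:

```python
# Back-of-envelope model of Hyper-RX's two main effects.
# All figures here are illustrative assumptions, not measured behavior.

def pixel_fraction(render_w: int, render_h: int, native_w: int, native_h: int) -> float:
    """Fraction of the native panel's pixels actually rendered before upscaling."""
    return (render_w * render_h) / (native_w * native_h)

def presented_fps(rendered_fps: float, frame_gen: bool = True) -> float:
    """AFMF-style frame generation roughly doubles the presented frame rate."""
    return rendered_fps * 2 if frame_gen else rendered_fps

# Rendering 1920x1080 and upscaling to the 2256x1504 panel:
print(pixel_fraction(1920, 1080, 2256, 1504))  # ~0.61 of native pixels rendered
print(presented_fps(30))                        # ~60 presented FPS with frame gen
```

Note that frame generation smooths motion but does not reduce input latency, which is one reason a generated "60FPS" can still feel different from a native 60FPS.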

TLDR; The R5 is a very solid offering for the price point and provides a good increase in the gaming experience over say, the 11th Gen i7. I do not regret my choice. Hopefully we can get some cross talk on some games to determine the spread for the R7 v the R5.

  • Game: Armored Core VI

    • FPS: 1680x1050 (High)
      • In garage : 38-40FPS
      • First Mission: 35-45FPS, pretty happily locks to 40FPS.
    • FPS: 1680x1050 (Low)
      • In Garage: 58-60
      • First Mission: 48 - 80+FPS
    • FPS: 1680x1050; Hyper-RX (Low)
      • In Garage: 58-60
      • First Mission : 51 - 90+ FPS, pretty happily locks to 50FPS
    • Notes: Defaulted to “High” quality settings at 1680x1050 (16:10). Set the frame rate limit to 120 to try to remove the cap. The game has a pretty vague “auto” detection system to maintain framerates; I suspect it drops the resolution or settings as needed to raise the framerate, and this was kept on for these benchmarks. When the game is set to a 60FPS limit, the Hyper-RX run jumped to a 55-60 FPS range and stayed there.
  • Game: Metro Exodus Enhanced

    • FPS: Native (2256x1504):

      • Min: 9.36
      • Max: 20.32
      • Avg: 14.33
    • FPS: 1920x1080 HIGH

      • Min: 11.95
      • Max: 28.31
      • Avg: 19.86
    • FPS: 1920x1080 LOW; Hyper-RX

      • Min: 18.57
      • Max: 36.62
      • Avg: 27.13
    • Settings: “High”; ray tracing/DLSS turned off.

    • Resolution: Native (2256x1504), 1920x1080.

    • Notes: Built-in benchmark, “HIGH”. The benchmark takes a long time to load the first time as it builds a cache. Lots of texture pop-in at native resolution. As the benchmark runs outside of the game, Hyper-RX doesn’t recognize it on most runs (it magically saw it on the low run). Suggest running this title with Hyper-RX set to a lower res and cranking the settings down to low; with those changes I suspect a 30-45 FPS average is doable.

  • Game: Forza Horizon 4

    • FPS:
      • Min: 72.7
      • Max: 88.4
      • Avg: 80.2
    • Settings: Default “HIGH”, MSAA x2,
    • Resolution: 1080p; Hyper-RX to Native (2256x1504).
    • Notes: Hyper-RX enabled, built-in benchmark. The game failed to identify the hardware and defaulted to “HIGH”. The benchmark reported 70-80% GPU utilization and that the 60FPS target was reached.
  • Game: The Elder Scrolls V: Skyrim SE

    • FPS:
      • Whiterun : 30-40
      • Winterhold: 35-45
    • Settings: Default “ultra”
    • Resolution: Native (2256x1504)
    • Notes: No mods; defaulted to “Ultra Quality”. I am sure that some tweaking, say medium/low settings, would make this a solid 60FPS title at native resolution.
  • Game: Cyberpunk 2077 V 2.11 (Phantom liberty)

    • FPS: Native “High”
      • Min: 11.33
      • Avg: 16.00
      • Max: 23.09
    • FPS: Native “Sane”
      • Min: 19.93
      • Avg: 24.64
      • Max: 30.49
    • FPS: 1920x1080; Hyper-RX “sane”
      • Min: 28.43
      • Avg: 34.49
      • Max: 42.96
    • FPS: 1920x1080; Hyper-RX low
      • Min: 32.93
      • Avg: 41.33
      • Max: 52.15
    • Settings: “Sane” for this system is preset low, RTX disabled. For the final Hyper-RX run, I dropped everything manually to its lowest setting.
    • Resolution: Native (2256x1504), and 1920x1080 Hyper-RX
    • Notes: No mods. The game defaulted to native res, “high” settings with low ray tracing and FSR 2.1 “auto” (not sure why it did that). Built-in benchmark utilized. The text/screen jumped in and out of focus, I assume as FSR/Hyper-RX was trying to do its thing; I did not enable FSR for this title after I changed from the defaults. I expect that using a lower initial resolution like 720p might bump this up into the 45+ range, making for a playable albeit blurry experience.
  • Game: Red Dead Redemption 2

    • FPS: 38 - 55 (witnessed) scene dependent.
    • Settings: (Recommended by the game) Ultra textures, FSR off, low lighting, no anisotropic filtering, shadows low, ambient occlusion medium, reflections low, TAA medium, Vulkan API.
    • Resolution: 1920x1080 Fullscreen; Hyper-RX to Native (2256x1504)
    • Notes: Built in Benchmark run, some heavy stutter on the valentine scene. I expect that with some tuning, 60FPS is in reach.
    • Final Report (benchmark)
      • Min: 4
      • Max: 64
      • Average: 41
  • Game: SnowRunner

    • FPS:
      • Native (2256x1504): 20-40
      • 1920x1080: 45-60
      • 1920x1080; Hyper-RX : 51-61
    • Settings: Collection of medium/low based on what the game recommended. TAA Off.
    • Resolution: Native (2256x1504)
    • Notes: I expect with tuning a locked 60FPS is possible in most scenarios. The 1440x900 resolution was not available as an option for this game as it was with the Intel Xe iGPU so 1080p is stretched.
  • Game: Grid Autosport

    • FPS:
      • Min: 89
      • Max: 131
      • Avg: 105
    • Settings: default settings (medium mix)
    • Resolution: Native (2256x1504)
    • Notes: Built in Benchmark, some texture/asset pop-in observed during the test, mainly set pieces, not cars or boundaries.
  • Game: Frostpunk

    • FPS: 30-45, seems to be a very stable 30 with spikes upwards.
    • Settings: Everything off/low
    • Resolution: Native (2256x1504)
    • Notes: Same endgame village as the 11th gen runs (most objects/highest demand). The game defaulted to “HIGH” settings; high settings provided a similar experience to the 11th gen, but very stutter-filled.
  • Game: GhostRunner

    • FPS: Native (2256x1504) “Default” : 30-40 Very stutter filled
    • FPS: Native (2256x1504) “Low” : 25-30 Very stutter filled (Not sure why/how it went down)
    • FPS: 1920x1080; Hyper-RX “Low” : 25-35, much less stutter.
    • Settings: Resolution scale was set to 100%.
    • Notes: DX12 used for benchmarks. Defaulted to Native (2256x1504), no FSR, mix of High/Epic settings; tuned down to low settings for subsequent runs. This game might just really not like the R5 hardware, as the R7 benchmark seems much more playable. Lots of texture flickering/pop-in.
  • Game: Age of Empires II : Definitive Edition

    • FPS: 35-50
    • Score: 1223.9 (pass)
    • Settings: All/Ultra
    • Resolution: Native (2256x1504)
    • Notes: HD Textures pack (not used for Gen 11 benchmark), Built in Benchmark.

I re-ran this with a 100W power supply I have on hand, as apparently it boosts performance on the R7 model.

  • FPS: 1920x1080; Hyper-RX low
    • Min: 34.74
    • Avg: 42.74
    • Max: 54.76

It appears there is only the smallest gain; I ran this twice and had almost identical results. So I suspect that ideally all of these scores go up with a beefier supply. I think that the “final form” here will include thermal interface changes to round things up another few points.

Edit: I replaced the stock TIM with one of those Honeywell pads and it decreased the scores in Cyberpunk. Replacing it again with Thermal Grizzly brought me in line with the scores from the non-100W supply while on the 100W supply. So it appears that the stock thermal paste is no pushover on the AMD units.

PTM and liquid metal still perform a lot better, but the stock setup is already pretty alright. Not sure if you got fake PTM or the install failed somehow, but those results make no sense.

Yeah, I’m assuming that the PTM I picked up was not genuine. I didn’t have a problem testing it out even temporarily though as I had some paste on hand otherwise, no harm, no foul.

As for the rest of it, My best guess is that there is some variable in Win11 that I’m not accounting for somewhere. Additional processes in the background, the phase of the moon, something… I’m not used to getting worse results after repasting anything.

Maybe your paste is slightly worse than what was on there already or you got an air bubble or the mounting pressure is slightly off, there are a ton of variables.

Slightly lower benchmark scores with PTM were reported in the past, in the PTM thread. Here’s some discussion of that: [Honeywell PTM7950 Phase Change Thermal Pads/Sheets] Application, Tips, and Results - #104 by Michael_Wu

Not saying that’s what’s happening here, just for reference.

With the ridiculously good results I got, I wonder if there is something weird going on. Is there someone selling regular thermal pads as PTM? It is also somewhat easy to break the stuff while applying, and if you got a hole/bubble with no contact, that’s gonna hurt.

Edit: Or it could be the STAPM weirdness. I did notice during my PTM testing that it can cool more than 30W, but after some time STAPM clamps down to 30, which is still more than what stock paste can cool but less than what PTM can cool, and usually less than it can cool with a temp limit of 80 (STAPM only comes down over 80, it seems). I do wonder how high PTM can go, but I only figured out how to deal with the power limits after I switched to LM and I’d rather not change back just to test that now.

Framework Specs: AMD 7640U, 2x16GB @ 5600MT/s CL40, Gen4 NVME, v3.05 UEFI.
Notes: Windows 11 Pro X64, Mainline AMD Driver (24.3.1, Hyper-RX Enabled globally) Plugged in (Official 60Watt), Best Performance power mode, iGPU Game mode set in UEFI

  • Game: Helldivers 2
    • FPS:
      • Min: 32
      • Max: 50
      • Avg: ~35
    • Settings: Lowest, TAA
    • Resolution: 1280x1024 “Native”; Hyper-RX to Native (2256x1504).
    • Notes: Hyper-RX enabled, tutorial level. Seems workable, honestly, with an external mouse. I expect that a bug planet with dozens of on-screen enemies will probably send it sub-30 FPS, however. The built-in AA is TAA and is kinda needed IMO; otherwise the edges are so sharp you’ll cut glass.

UPDATE: AMD dropped the 24.4.1 driver yesterday, and it specifically called out Helldivers 2 as a performance target. With the same settings I am now bouncing between 42-47 FPS in the tutorial level, bouncing off a max of 58 on the ship and in menus, and the lowest I saw it go was 35. So they’ve done some tweaking in our favor.