7700S won't go past 35W

Hi all, I built my Framework 16 last night and popped in my drive from my prior gaming laptop.
The laptop runs fine, I installed all the drivers, and everything is good, EXCEPT that the GPU won't go past 35W.
So performance in games is not very good. It doesn't matter what power profile I set it to, and there are no more settings to adjust in the Radeon control panel.

I'm on Windows 10, the GPU is detected, and I am selecting the dGPU when launching games.

Are you plugged into AC, or are you on battery?

It is physically impossible to draw more than a certain wattage from the battery at any moment, so basically every gaming laptop has to throttle the hardware on battery to account for this.

If you are on battery, plug in the AC adapter and try again.

I am using a charger. Maybe it is a bug because I limited my battery charge to 70%?
Nothing I do seems to make it want to draw more than 35W.
The charger I am using is not the Framework charger; it is a 140W Plugable charger. However, according to HWiNFO, the system isn't drawing anywhere near 140W. The GPU sits at 35W in any game or application, even when the CPU is idle.

Do you have the 180W Framework charger available to test with?

In theory I see no reason why it shouldn't ramp up the dGPU at least a bit, but I don't know the implementation specifics.

No, I don't have the charger; I missed ordering one when I originally ordered the laptop.
I am going to order the charger along with an empty shell so I can play with the OCuLink board when it comes out. I will probably end up using the laptop mostly over OCuLink since I don't actually go places with such a large laptop. This is more for my bedroom desk downstairs.


Found the thread. I knew it was around here somewhere :smiley:

What are the temperatures? Are the fans spinning up? Are the drivers from the Framework bundle or directly from the AMD website?

Your power limits are affected by both the power profile AND the power source.

In AMD Adrenalin, if you go to Performance/Tuning and hover over the ? next to “Power Distribution”, it will show you the current max TDP. This is affected by your Windows Power Mode setting and the power supply you have connected (or lack thereof).

Changing your Windows Power Mode (note “mode”, not “plan”) will cause the TDP to shift pretty drastically. (My examples below are the total power limits, but you can see the two individually.)

With 180W Power Supply:

Best Performance: 120W
Balanced: 95W
Best Power Efficiency: 85W

On Battery:

Best Performance: 60W
Balanced: 50W
Best Power Efficiency: 50W

This is obviously going to have a drastic effect on temperatures and fan loudness. I’m likely going to run on Best Power Efficiency, as I don’t think the extra performance of 120W TDP outweighs the extra temps/loudness.

Additionally, the AMD SmartShift Eco option on the same page will cause your computer to only use the iGPU when you’re on battery, even when in games/etc.

I also tested with a 65W and a 100W power supply. I don't have those numbers in front of me, but I can definitely say that the different power supplies affected the power limits.

Thanks for the information! It is showing my current maximum as 52W.
I have a 140W power supply and my laptop is plugged in, so basically my laptop isn't exceeding the on-battery power limits even when it is plugged into the wall…

I have to say the laptop is really stupid fast even with this 50W limit; the power efficiency of this platform is incredible. On the GPU side of things, though, it needs more than a 42W cap to really survive.

Also, are you on Windows 10 or Windows 11? I personally am on Windows 10, as I refuse to use Windows 11. Eventually, without a shift from Microsoft, I will just stop using Windows altogether.

So I have had quite the roller coaster of support with Framework, and now they have not replied to me for a week.

After contacting them about this issue with my graphics card not passing 35 watts, they told me that this should not happen and asked me to provide proof in the form of video and pictures, which I did.

After going back and forth several times (and them blaming me, telling me I can't use Windows 10), I installed Windows 11 and purchased new charging cables, and I still had the same problem.
The best performance I've seen out of the laptop is on a 65W USB-C charger. The 100W charger does worse, and the 140W charger is unstable (large wattage fluctuations on the GPU and CPU randomly cause stuttering and slowdowns).

The latest message from support & engineering is that I have to use the Framework 180W brick or the laptop won't work properly. So I have a $2000+ brick that I have no use for… I really hope they decide to work on the issue and fix it, but the lack of any response in 8 days doesn't inspire confidence…

I was playing around with this very thing under Linux earlier this week. Unfortunately, what I don’t have is a direct Windows equivalent, but hopefully you’ll still find some of the following illuminating.

I do happen to have a FW charger, but I also have several lower-rated USB-PD sources, as well as a brand new 250W cable with a built-in realtime power draw readout (for diagnosing a different problem I’m having with my FW16).

My benchmark for the moment is “Hardspace: Shipbreaker” via Steam/Proton. It has a bug that turns out to be useful for stress testing – if the v-sync option is disabled, it just cranks out frames as fast as the system will let it*. I’m connected to an external 1080p monitor, over the dGPU’s dedicated port on the back, and with the internal screen disabled.

I have my fingers on several performance knobs, but the two that are likely to have Windows analogs are:

  • The /sys/firmware/acpi/platform_profile knob – this is essentially the equivalent of your “power mode”, and lets me pick from a firmware-defined list (low-power, balanced, performance).
  • power_dpm_force_performance_level for the dGPU and the iGPU – this should in theory let me change the power behavior of the two graphics devices independently of the main platform profile. In practice, it only seems to come into play if I'm using the 180W charger. (A quick sketch of both knobs follows below.)
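
In case anyone wants to poke at the same two knobs, this is roughly what I'm doing from a shell. The card0/card1 indices are just how my system happens to enumerate the iGPU and dGPU, so treat them as placeholders and check ls /sys/class/drm/ on your own machine first:

```
# List the firmware-defined profiles, then pick one (the "power mode" analog):
cat /sys/firmware/acpi/platform_profile_choices   # e.g. low-power balanced performance
echo performance | sudo tee /sys/firmware/acpi/platform_profile

# Read and force the per-GPU performance level (values include auto, low, high, manual):
cat /sys/class/drm/card0/device/power_dpm_force_performance_level
echo low | sudo tee /sys/class/drm/card1/device/power_dpm_force_performance_level
```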

dGPU and iGPU wattages are read from amdgpu_top's averages for each device; the battery rate comes from upower --dump | grep energy-rate.
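
If it helps anyone reproduce these numbers, here's a rough polling loop that approximates the same readings without a TUI. The hwmon attribute name is an assumption on my part – amdgpu reports the draw as power1_average (in microwatts) on my kernel, while some kernels expose power1_input instead – and again the card indices depend on how your GPUs enumerate:

```
#!/bin/sh
# Print approximate per-GPU draw (from the amdgpu hwmon node) and the
# battery charge/discharge rate every couple of seconds.
while sleep 2; do
    for card in card0 card1; do
        for f in /sys/class/drm/$card/device/hwmon/hwmon*/power1_average; do
            [ -r "$f" ] && echo "$card: $(( $(cat "$f") / 1000000 )) W"
        done
    done
    upower --dump | grep -m1 energy-rate
done
```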

| Supply | Platform | iGPU mode | dGPU mode | iGPU (W) | dGPU (W) | USB-C (W) | Battery (W) |
|---|---|---|---|---|---|---|---|
| 180W FW | Performance | auto | auto | 37 | 82 | 166 | 3-5 (discharging) |
| 180W FW | Performance | low | auto | 0-2 | 50 | 165 | 50 (charging) |
| 180W FW | Performance | low | auto | 16 | 67 | 122 | 0 (charged, stable, no remote desktop) |
| 180W FW | Performance | low | auto | 4 | 52 | 107 | 0 (charged, stable) |
| 100W PD | Performance | auto | auto | 60 | 48 | 84 | 6 (discharging) |
| 100W PD | Performance | low | auto | 60 | 46 | 84 | 7 (discharging) |
| 60W PD | Performance | auto | auto | 59 | 46 | 46 | 36 (discharging) |
| 60W PD | Performance | low | auto | 60 | 46 | 46 | 36 (discharging) |
| Unplugged | Performance | auto | auto | 54 | 43 | N/A | 84 (discharging) |

*Fortunately/unfortunately, all the entries above – except the one labeled “no remote desktop” – had another process running that was apparently limiting how many frames the game could push out (given that it does seem to be CPU bound). That suggests the data above is not representative of the maximum wattage I could be getting out of those components, compared with a benchmark that could properly slam the GPU.

An additional note – the iGPU and dGPU draws are lower here in the absolute max-power scenario than they were earlier this week for me - I was reaching a continuous 98-99W (and very occasionally, breaking 100W). I’m at a loss as to why I was unable to replicate those results today. I do find it interesting that the SUM of the dGPU and iGPU in the highest-performance case seems to be 120W. I also find it interesting that the iGPU seems to be given preference - to the tune of the entire 60W power budget? - in all the scenarios where the power supply is NOT the 180W, even when the iGPU is supposedly force-set to low power. That seems like it might be a bug, and I am somewhat disappointed that I can’t seem to let the dGPU run at a nominal full-tilt, without the iGPU tagging along for the ride (and causing so much total draw that the battery gets dipped into).

That said… I have not yet found anything that actually takes as much grunt to play, as this game does in its buggy state. With the full performance settings on, and running the game with v-sync enabled at “ultra” settings, the dGPU hovers around 30W. I don’t know why the iGPU is being such a hog, when it should nominally not be involved in processing the output to the dGPU’s (nominally dedicated) port. But… that’s likely a puzzle for another day.

I think the three main takeaways here are:

  1. Yes, the power supply matters,
  2. Yes, the “power mode” matters, and
  3. Even with the 180W supply (and its raised wattages), running totally full-tilt can still outstrip the supply and cause battery usage

Isn’t that literally what disabling vsync means?


Many games have maximum FPS limits, some of them internal.

I personally was doing most of my testing using The Riftbreaker, which has a GPU test mode that specifically draws as much wattage as possible to push out the most FPS. It doesn't use the CPU that much.
I tried a whole range of games to see if the GPU wattage was a bug, but it's definitely not.
I confirmed that the 180W charger is necessary for full wattage.

FPS limits are a thing, but I would not call their absence a bug.

It being relatively light on the CPU is probably the bigger factor here, yeah.

You could probably also just use FurMark for that; the whole point of it is its huge GPU-to-CPU load ratio.

I did also use FurMark, but the thing is that FurMark usually causes heat soak, and I didn't want clocks or wattage to drop based on temps. I wanted to see how much wattage would go into the GPU in a scenario where heat isn't a factor.

When on, v-sync limits when the visible buffer can be altered: during the vertical blanking period only. To be as literal as I can be, turning it off removes that restriction, and nothing else. The lack of a restriction does not define what (if anything) the application will actually do with that newfound flexibility – and that accounts for our different expectations.

I come at v-sync from the direction of “the graphics card is too slow.” It should be disabled when the card can almost, but not quite, keep up: where the preference is for 90% of a frame now over 100% of a frame an entire extra frame late. From this perspective, I would not expect turning it off to cause my GPU – which can comfortably render the next frame with plenty of time left over – to suddenly start stressing out, as though I had turned a frame limiter off. :slight_smile:

Note also that even with v-sync on, if the program is triple-buffered, the GPU could spin its little heart out on extra frames, with nary a care in the world. As such, it's better for there to be a separate FPS limit, so it's clear what the game will be trying to hit regardless of the v-sync or buffering setup. Those three things are interrelated, but distinct.

@Jimster480 - to your point, one or two entries in my table from yesterday were running the Framework on an ice pack. While I saw all the reported temperatures creep down, I did not see a corresponding increase in wattage, nor did I see the TEMP_HOTSPOT throttle indicator go away. I did previously have a hypothesis that my higher wattages from earlier in the week were due to the test room being colder (61F vs. 67F), but the ice pack test seemed to refute that idea enough for me to not bother mentioning it.

I also realized that I totally forgot to show the overall system in the non-performance modes, with both cards on auto. If you're keen on seeing such a table, I'll make one for “balanced” (the nominal default) and each of my three supplies. If you're not interested, I won't bother. Spoilers: the wattage limits go down for everything. :slight_smile:

Also-also – the wattages don’t add up. By a lot. I noticed that but forgot to mention it – something seems to be either severely over-reporting draw, or under-reporting supply. If I had to take a guess, I’d guess it’s the iGPU that’s overreporting.


I would like to see a table for everything. If I am going to keep this laptop at all, I might have to try to make it work myself. Right now it is unusable overall.
As far as wattages versus temps are concerned, I tried the same thing. Even in my coldest room, with the temps as low as I could get them (elevated and with a cooling fan), there was no difference in the wattages.
I can also tell you that the system will draw 90W from a 100W charger a lot of the time; the draw only goes down once the battery is just about fully charged.

I've thrown GNOME and its compositor out the window, in preference to X11 and Fluxbox.

Cards set to auto/auto. Seems like the platform setting only makes a real difference when unplugged, or on the 180W charger.
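
The rows below were collected by hand, but the profile sweep itself is easy to script with something like this (the 30-second settle time is an arbitrary guess on my part; the GPU draws still come from amdgpu_top or the hwmon loop earlier):

```
#!/bin/sh
# Step through each firmware profile for the currently attached supply,
# let things settle, then grab a battery-rate sample for the table.
for profile in performance balanced low-power; do
    echo "$profile" | sudo tee /sys/firmware/acpi/platform_profile > /dev/null
    sleep 30
    echo "== $profile =="
    upower --dump | grep -m1 energy-rate
done
```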

| Supply | Platform | iGPU (W) | dGPU (W) | USB-C (W) | Battery (W) |
|---|---|---|---|---|---|
| 180W FW | Performance | 52 | 100 | 166 | 8 |
| 180W FW | Balanced | 29 | 78 | 133 | N/A |
| 180W FW | Low-power | 19 | 69 | 125 | N/A |
| 100W | Performance | 60 | 49 | 83 | 6 |
| 100W | Balanced | 60 | 49 | 83 | 6 |
| 100W | Low-power | 60 | 49 | 83 | 9 |
| 60W | Performance | 60 | 48 | 46 | 37 |
| 60W | Balanced | 60 | 48 | 46 | 35 |
| 60W | Low-power | 60 | 48 | 46 | 35 |
| Unplugged | Performance | 54 | 44 | N/A | 84 |
| Unplugged | Balanced | 50 | 39 | N/A | 79 |
| Unplugged | Low-power | 25 | 20 | N/A | 50 |

According to amdgpu_top, all of these – except unplugged + low power – seem to be exceeding 35W, so I’m not sure where your issue with a 35W barrier is creeping in (unless it’s a problem very specifically with 140W?)

I am also (somehow, mysteriously) back to the behavior I was seeing earlier this week, where I’m able to hit 100W on the dGPU, and if I turn the iGPU to low, I can still maintain 90+W on the dGPU (and allowing the system to even charge under this load). Room temperature is 65F. I was on Gnome earlier this week, so I don’t think it was the change in window manager. It could be that there was some other process gumming up the works that restarting the graphical services cleared out.


Well, as I mentioned later, I was able to get the dGPU to 49W, but on the 140W charger it actually does have specific issues where the GPU wattage is unstable and often goes lower. I can get a flat 49W on a 100W charger.
Unfortunately, this makes the performance of the laptop rather dull compared to my Alienware m15 R5 from 2021, as it has a 3070 that can hit 125W. I've always hoped for software to turn the wattage down on that card, but now the Framework is much worse at power management.

The team has still not responded at all after I pointed out the bug in their firmware that treats basically every charger that isn't the 180W one as a lower-wattage charger and forces the components to use less power.

Well, I am just returning my laptop. Framework doesn't want to fix the firmware for the laptop, and despite me writing in email that I would have no problem waiting and testing firmware, they are simply not interested. They told me that either I use the original Framework power brick or I can't expect it to work right.
They offered me a refund, and I told them I would rather not send it back and would rather hold onto it if they plan to fix it, especially if there is any kind of timeline. They just told me to return it instead and have already opened an RMA for me.
I credit them for extending the return window for me, but it is a shame that something I was excited for and waited over a year for turned out to be unfinished, and the company doesn't want to finish it either.
I guess they don't plan to support the 16" long term, or they are going to redesign it at some point.

Can you clarify what the issue is? I am not doubting that there is one, just trying to make sure that I understand. It sounds like the power to the GPU is less than what it could be if the power supply is less than 180W. Is that correct? I am waiting for the market to deliver 240W power supplies so that the machine can run at max performance when plugged in, but the 180W is sufficient for my use case for the time being.

I'm not sure that I follow the assertion that they do not plan to support the machine, or that they do plan to (significantly, my words there) redesign it.