Some more analysis based on the ADP Issue.
Question: What are the predicted symptoms that could be a result of the ADP current sensing issue?
Answer:
1. Detecting the presence or absence of a PSU might not work.
(If the EC sees a 20V PSU, it sets a threshold voltage so the charger can detect if the PSU is disconnected.)
2. The decision of when to buck or boost might not be made at the right time.
3. Unpredictable output voltage from the buck/boost.
4. Difficulty in knowing when to charge/discharge, due to (3).
5. Input current loop functions might not work, or the current-limiting threshold might be acted on unnecessarily.
From the video, we can see:
a) The ADP input current measurement is mostly random, but the voltage measurement looks OK (CSIN/CSIP pins).
b) The buck/boost decision keeps flipping.
c) The battery shows charging/discharging even with BGATE off.
I think I will modify my ectool so that it collects a dataset to a file on disk as an option.
I will also check whether the charger chip has any measurement averaging we could switch on; that might help.
If anyone else would like to test the “ectool chargegetregs 0” command, they would also need my EC firmware installed. It would help to get sample datasets from more than one machine.
Discharging with BGATE off should be possible through the body diode, though the voltage needs to droop a bit more for that to happen and it’s less efficient. The datasheet does mention that at some point. For short blips of a few W, keeping it off is probably a good tradeoff.
This may be an issue with taking point-in-time measurements of something that is happening quite fast. It may well be pulling 300W out of the input capacitor while charging the inductor, so if the measurements are quick and short-term enough you might catch stuff like that.
Edit: According to the datasheet, the ADC takes 80µs samples every 400µs. 80µs should contain a bunch of switching cycles even at the lowest settable switching frequency, so I’m not sure that is the issue.
Edit2: Looking at that video, the power readings are pretty much constantly way too high for a FW13. Is it possible you are using the wrong shunt value for the calculations? Once again it would be nice to have a schematic to know what value was actually used there, cause I am fairly sure it isn’t the same as in the 16.
Re: body diode.
Assuming the MOSFET is an AON7566 (I have not looked):
Vsd is 0.68V typical, 1V max.
The body diode can handle 34A.
So, if Vsys < Vbat - 0.68V, the battery would discharge with BGATE off.
The battery cannot charge with BGATE off.
I highly doubt the body diode would be the weakest link there.
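A minimal sketch of that condition, assuming the AON7566 numbers above (the 0.68V Vsd and the helper name are just for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed from the AON7566 datasheet quoted above: typical body diode drop. */
#define BODY_DIODE_VSD_MV 680

/* With BGATE off, the body diode only conducts from battery to Vsys,
 * so the pack can discharge (inefficiently) but never charge. */
static bool body_diode_conducts(int32_t vsys_mv, int32_t vbat_mv)
{
	return vsys_mv < vbat_mv - BODY_DIODE_VSD_MV;
}
```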
Also, I am pretty sure I found at least part of the reason for the sky-high current readings: it looks like your ectool just uses a hardcoded value to convert the current register to mA, and that value is likely not the same for the 13. According to the configs in the EC repo, the FW13 uses a 20mOhm shunt and the 16 uses a 5mOhm one, so the current readings are 4x too high. Maybe use the “AC_REG_TO_CURRENT” macro that is already there and send that to ectool. The battery shunts are also different.
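A minimal sketch of the shunt-dependent conversion, using the 20mOhm/5mOhm values from the EC repo configs; the per-LSB scale here is made up purely for illustration, the real code should keep using its AC_REG_TO_CURRENT-style macros:

```c
#include <stdint.h>

/* Hypothetical per-LSB scale of the charger's current ADC, in uV of sense
 * voltage per register count. NOT the real value, purely illustrative. */
#define SENSE_UV_PER_LSB 100u

/* Current through a sense resistor: I[mA] = V[uV] / R[mOhm]. */
static uint32_t adp_reg_to_ma(uint16_t reg, uint32_t shunt_mohm)
{
	return ((uint32_t)reg * SENSE_UV_PER_LSB) / shunt_mohm;
}

/* FW13 (20 mOhm) therefore reads 1/4 the current of a FW16 (5 mOhm) for the
 * same raw register value, which matches the ~4x-too-high readings. */
```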
Yes, however it may still be on sometimes when you aren’t reading the register.
Leaving BGATE off when on AC and not wanting to charge seems like a pretty good idea in this case. We only get small blips of battery current, if that, so the slight efficiency loss on those blips is not really a problem, and it should do fewer of them since the voltage needs to droop more.
Yes, the current calculations in my ectool were just finger-in-the-air values.
I would like the EC firmware to pass me raw values, and then let ectool do all the calculations on them before display.
I can add the shunt values to the returned data from the EC.
That also works. You could also send both the raw and the calculated values, which would leave the whole calculation logic in the EC and allow the ectool part to stay generic even if the calculation turns out to be more complicated (for some other device or something).
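Something like this hypothetical response layout (names invented, not the real ec_response_* structs) would carry both, plus the shunt so ectool can re-check the math:

```c
#include <stdint.h>

/* Hypothetical host-command response: raw charger registers plus the values
 * the EC already converted with the board's own shunt. */
struct charger_debug_response {
	uint16_t raw_adp_current;   /* charger register, untouched */
	uint16_t raw_adp_voltage;
	int32_t  adp_current_ma;    /* converted in the EC */
	int32_t  adp_voltage_mv;
	uint16_t adp_shunt_mohm;    /* lets ectool redo or sanity-check the math */
} __attribute__((packed));
```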
Actually, how did you come up with the 96 value you use for input current? Did you just copy the one from voltage? I guess that wasn’t obviously wrong, cause it is a lot closer to the 88.8 for the 16 than the 22.2 for the 13 XD.
Dividing the readings from @mgcarlson by 4.3 definitely makes them seem a lot more realistic, also makes the swings a whole lot smaller.
I had an external USB-C power meter and just picked a value that approximately matched that. The 88.4 came out a little low for me on a FW16. It is not an exact science by any means, as the value changes on each measurement, and my external USB-C power meter is uncalibrated and probably has different averaging parameters.
It also appears to have some non-linearity. Larger values appeared more accurate than small values.
But just looking at the raw register values: the FW13 values jump around a lot more than the FW16 values.
Well, the external meter probably has some averaging. If I read the EC/ectool code right, you get one single point-in-time measurement there, so I would expect that to be a lot more jumpy. Modern boosting algorithms have very variable power consumption, which is most extreme at lower loads, when the CPU switches from doing nothing to doing quite a lot and back.
Personally I would go with the datasheet values, since the “calibrated” differences are likely different for each machine anyway (also the decision-making/control loops happen based on those values).
Accounting for the scale? Cause the 13 would have the raw values change 4x as much as the 16 for the same current swing.
I think someone may have misunderstood something here (could very well be me). Using input-current PROCHOT would make some sense in standalone mode, as it may throttle before crashing; with a battery you would probably not want that. Here, however, it probably does nothing: it sets the PROCHOT threshold to the advertised charger current rounded up to the next 128mA, but sets the current limit to 90% of the advertised current, so the charge controller will throttle the input current before PROCHOT is ever hit, and the system would likely crash before that triggers. It would make more sense to set the limit to 100% of advertised and set PROCHOT to 90-95% (in standalone mode only), and disable it when a battery is present.
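A rough sketch of that suggestion (the helper names and the 128mA register step are assumptions, not the shipped EC code):

```c
#include <stdbool.h>

/* Assumed helpers; the real EC has its own charger API. */
void charger_set_input_current_limit_ma(int ma);
void charger_set_prochot_ma(int ma);
void charger_disable_input_prochot(void);

static void configure_input_limits(int advertised_ma, bool battery_present)
{
	/* Let the charger regulate at the full advertised current. */
	charger_set_input_current_limit_ma(advertised_ma);

	if (!battery_present) {
		/* Standalone: trip PROCHOT a bit before the charger clamps,
		 * rounded down to an assumed 128 mA register step. */
		int prochot_ma = (advertised_ma * 95 / 100) & ~127;
		charger_set_prochot_ma(prochot_ma);
	} else {
		/* With a battery the brown-out risk is gone, so leave it off. */
		charger_disable_input_prochot();
	}
}
```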
Also kinda funny that there is a nice charger driver and the framework stuff just does naked i2c writes sometimes.
I have updated my ectool with some better calculations that should now give more sensible values on Azalea and Lotus EC boards.
It also now outputs raw register data to a CSV file if you give it a filename.
I have yet to write a small program that takes that RAW CSV and converts it into something readable.
The RAW CSV is more something other people could send me, to help me understand their laptop’s behavior.
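For reference, a minimal sketch of the kind of dump I have in mind (the column layout and names are placeholders, not the actual ectool output format):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Append one timestamped row of raw charger registers to the CSV. */
static void csv_append(FILE *f, const uint16_t *regs, size_t count)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	fprintf(f, "%lld.%09ld", (long long)ts.tv_sec, ts.tv_nsec);
	for (size_t i = 0; i < count; i++)
		fprintf(f, ",0x%04x", regs[i]);
	fputc('\n', f);
}
```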
How fast are you polling that stuff? There should theoretically be enough I2C bandwidth to grab one or two registers every 400µs measurement cycle and still do the rest of the stuff it’s doing; how you’d get that collected data off the EC afterwards is a different question XD.
But ultimately, if the charger goes into “input adapter current limit” mode, there isn’t much we can do from the charger side. We could fix the current limit being set to just 90%, and maybe even use the two-stage limit feature to get a bit above 100% for a few ms (probably gonna try that once I have a development environment set up), but in the end the power limits need to match the hardware. If the 7840U really manages to draw >90W for long enough to dip into the battery, we may need to play with the power limits.
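A back-of-the-envelope check of that claim, assuming a 400kHz bus and an SMBus read-word transaction of roughly 48 clock periods (and ignoring clock stretching and other traffic):

```c
#include <stdio.h>

int main(void)
{
	const double scl_hz = 400e3;          /* assumed fast-mode I2C */
	const double bits_per_read_word = 48; /* addr + cmd + restart + 2 data bytes + ACKs */
	const double us_per_read = bits_per_read_word / scl_hz * 1e6;

	printf("~%.0f us per register read, ~%.1f reads per 400 us window\n",
	       us_per_read, 400.0 / us_per_read);
	return 0;
}
```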
I don’t quite understand the higher CPU temps. Are they associated with higher clock speeds in amdgpu-top?
The higher vsys of 17800mV (was about 15-16V) should cause slightly less heat dissipation from the downstream buck converters.
The flow of power from the charger to the cpu is:
Charger at 17800mV → buck converter down to 5V, 3V, 1V etc. The CPU itself never sees 17800mV on its pins.
The buck converters can generally handle 23V, so 17800mV is well within that.
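As a rough illustration of why the higher Vsys helps a little (assumed load and efficiency, not measured values): for a fixed downstream load, the converters’ input current scales as P/(Vsys·η), so the input-side conduction losses drop slightly.

```c
#include <stdio.h>

int main(void)
{
	const double p_out = 60.0; /* W, assumed downstream load */
	const double eff = 0.90;   /* assumed buck converter efficiency */

	printf("Input current at 15.0 V: %.2f A\n", p_out / (eff * 15.0));
	printf("Input current at 17.8 V: %.2f A\n", p_out / (eff * 17.8));
	return 0;
}
```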
Also, you can probably use “stress-ng --cpu 16” instead of browser refresh, in order to put load on the system.
Probably slightly different throttling behavior; it may not even be related. Vsys should not have much of an influence. I also don’t really see how a change in Vsys would meaningfully change CPU temperature outside of changes in boost behavior. Then again, I am not even sure the CPU itself is aware of Vsys, so that is quite spooky.
Stress is good for creating a steady-state-ish load; the browser-refresh thing is actually pretty brilliant for creating a spike, which is pretty good for provoking a dip into the battery.
Once you are at thermal limits, dipping into the battery should not be that big an issue anymore XD
Man, once the parts for my CCD come in I gotta do some testing.
I have seen the ADP charge limit set at about 85% of the total the charger can supply.
It seems to be consistent across 25W (sets 20W), 100W (sets 85W), and 180W (sets 152W) chargers.
I don’t know why it limits it to below what the charger can supply.
In the code I mention here, the logic sets the limit to 90% of the advertised value. The code for the 16 is a lot messier there (where the hell does the number 855 come from? Would you not have to round the value to the nearest 512mA? That’s 4x 128mA, cause of the 4x lower shunt).
The USB-C breakout board + USB-serial board DIY one XD
It might depend on the architecture of the circuit. If the battery is directly connected to Vsys, Vsys has to stay close to the battery voltage, otherwise overcurrent will occur.
If there’s a separate buck-boost, the EC can simply put the buck-boost between the battery and the bus into passthrough on light load to reduce conversion losses, as most of the power draw is for charging the battery. When charging is complete, the upper buck-boost (the one between PD and Vsys) can be put into passthrough and the battery BYPASSed to reduce power consumption and battery cycling. If there is a high load while charging, both buck-boosts are active.
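A minimal sketch of that mode selection (names invented, assuming the two-stage topology described above):

```c
#include <stdbool.h>

/* Two-stage topology: PD input -> upper buck-boost -> Vsys -> lower stage -> battery. */
enum power_path_mode {
	BOTH_STAGES_ACTIVE,        /* high load while charging */
	BATTERY_STAGE_PASSTHROUGH, /* light load: most power goes to the battery */
	INPUT_STAGE_PASSTHROUGH,   /* charge complete: bypass battery, PD -> Vsys direct */
};

static enum power_path_mode pick_mode(bool charging, bool light_load)
{
	if (charging && !light_load)
		return BOTH_STAGES_ACTIVE;
	if (charging)
		return BATTERY_STAGE_PASSTHROUGH;
	return INPUT_STAGE_PASSTHROUGH;
}
```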