The correction formula for humidity isn't quite right

Hello,

I’m writing to start a discussion, and to keep a record, of my observations on the accuracy of the humidity correction formula in outdoor AG units.

Now that temperatures drop below freezing at night, it is becoming increasingly obvious that the humidity correction formula isn’t quite spot-on, especially as the real conditions approach 100% RH. Moreover, the error is not in a consistent direction: through most of the range the corrected RH overestimates the real humidity above freezing, but it begins underestimating once the real conditions reach 95+% RH below zero. Since the corrected humidity is in turn used to compensate the SGP41 readings, those get slightly skewed as well.

A couple of indicators lead me to think that the correction is off, rather than that there are real humidity differences between the two sensors. First and foremost, I’m pretty close to a reference weather station (within 10 km), and its readings rarely agree with the AG’s (although they do agree with my other sensors quite often). Since it is not co-located, though, I can’t derive any proof from that. What I do have is a co-located SHT45 that I use to calculate specific humidity. For instance, now that it is freezing outdoors and all other sources are reporting RH=100%, I’m seeing the AG under-report RH by a full 4%:

| Temperature °C | RH % | Dew Point °C | AG RH Raw % | AG RH % | AG Dew Point °C | SHT45 RH % | SHT45 Dew Point °C |
|---|---|---|---|---|---|---|---|
| -1.4 | 100 | -1.4 | 70.5 | 96.09 | -1.88 | 100 | -1.4 |
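
As a quick sanity check on those dew points: the Magnus approximation reproduces them well. A minimal sketch (the constants are my choice, over water; the AG firmware may well use a different formula):

function dewPoint(t, rh) {
  const a = 17.62, b = 243.12;  // Magnus constants over water, t in °C
  const gamma = Math.log(rh / 100) + (a * t) / (b + t);
  return (b * gamma) / (a - gamma);
}

dewPoint(-1.4, 100);    // ≈ -1.40 °C, matches the SHT45 row
dewPoint(-1.4, 96.09);  // ≈ -1.94 °C, close to the -1.88 °C the AG reports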

From here on I’m going to use absolute (specific) humidity in g/kg for comparisons, as that’s the representation I have the easiest time wrapping my head around, and because it is a largely temperature-agnostic measurement (so it should come out to roughly the same value even if two sensors report a different RH% due to slight temperature differences; though, again, I’m finding the AG’s corrected temperature to be extremely accurate).

To compute specific humidity from the temperature and RH% the AG measures, I use the following formulae:

// t is the measured (corrected) temperature in °C, p is atmospheric
// pressure in hPa, rh is relative humidity in %

const MH2O = 18.01534;        // molar mass of water in g/mol
const Mdry = 28.9644;         // molar mass of dry air in g/mol

// H2O saturation pressure (hPa) over water and over ice,
// polynomial fits from Lowe & Ficke, 1974
const pwat = 6.107799961 + t * (4.436518521e-1 + t * (1.428945805e-2 + t * (2.650648471e-4 + t * (3.031240396e-6 + t * (2.034080948e-8 + t * 6.136820929e-11)))));
const pice = 6.109177956 + t * (5.034698970e-1 + t * (1.886013408e-2 + t * (4.176223716e-4 + t * (5.824720280e-6 + t * (4.838803174e-8 + t * 1.838826904e-10)))));
// below freezing, the ice fit is the lower (physically correct) of the two
const psat = Math.min(pwat, pice);
const p_h2o = psat * rh / 100;   // water vapor partial pressure, hPa
const vmr = p_h2o / p;           // volume mixing ratio of water vapor
// mass fraction of water vapor (kg/kg); multiply by 1000 for g/kg
const specific_humidity = vmr * MH2O / (vmr * MH2O + (1.0 - vmr) * Mdry);
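
For the table row above, this works out as follows (p = 1013.25 hPa is an assumption on my part, as I don’t measure pressure):

// t = -1.4 °C, rh = 100 %, p = 1013.25 hPa:
//   pice ≈ 5.44 hPa < pwat ≈ 5.51 hPa  →  psat ≈ 5.44 hPa
//   p_h2o ≈ 5.44 hPa, vmr ≈ 0.00537
//   specific_humidity ≈ 0.00335 kg/kg, i.e. ≈ 3.35 g/kg
// The AG's corrected 96.09 % at the same temperature comes out to
// ≈ 3.21 g/kg, about 0.13 g/kg below the SHT45.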

Without further ado, here’s a graph of humidity measurements from the AG (in blue) and a co-located SHT45 (in yellow):

Or, as a delta between the two lines:

In the delta graph, values above zero indicate the AG reporting higher specific humidity (and thus, in turn, RH%) than my co-located “reference”, and values below zero indicate lower.


One insight: the variations in the absolute humidity delta (gray) do appear to correlate well (to my eyeball) with temperature changes (red), and the two sensors start agreeing on the humidity at around 3°C, but there are also plenty of exceptions:

Frankly, I have no idea what conclusions to draw at this point, and I have no concrete suggestions for how the humidity correction might be improved. But it seems pretty clear that it may need to incorporate the temperature somehow.
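
One thing that would firm up the eyeball correlation is a Pearson coefficient between the delta and temperature series. A minimal sketch (the array names are hypothetical; both series are assumed equal-length and time-aligned):

function pearson(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx, dy = ys[i] - my;
    sxy += dx * dy;
    sxx += dx * dx;
    syy += dy * dy;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// pearson(deltaGPerKg, temperatureC) near ±1 would confirm the visual
// impression; near 0 would refute it.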

(Background: an O1-PST built from a kit, running an ESPHome-based firmware that uses the published correction formulae.)

All that said, the correction is pretty good at bringing the measurements closer to reality. The raw temperature/humidity readings from the PMS5003T are so far off that even the absolute humidity numbers derived from them come out wrong, despite absolute humidity being a quantity that should remain roughly the same regardless of the measurement environment, so long as no water condenses as part of the measurement process.

If anybody’s curious, this is what the graph of absolute humidity computed from the raw temperature/humidity looks like (in green) vs. the SHT45 (in yellow):

Thanks for sharing this. I assume you saw the presentation that Anika made about the development of this algorithm.

In the meantime we have found that there appears to be deviation between the temperature sensors across Plantower batches, and that the installation of the monitor, primarily its exposure to sunlight and wind, also has an effect on the accuracy of the compensation formula.

So I believe we will revisit the formula at some point and see if we can make it more accurate.

I assume you saw the presentation that Anika made about the development of this algorithm.

I’ve seen this deck in the past, but had forgotten about it by the time I started this thread :slight_smile:

primarily its exposure to sunlight and wind has an effect on the accuracy of the compensation formula.

I’d be really curious to hear how you are thinking of compensating for direct sunlight exposure, especially given the design of the enclosure, which, honestly, is a heat trap: all of the cavities are at the bottom of the device. It helps somewhat that the PMS5003T has a fan to force some airflow past its sensors, but that airflow stays entirely within the confines of the PMS (the fresh air enters through the bottom and exits back through the bottom, rather than e.g. into the enclosure). The increased “ambient” temperature within the enclosure is sure to affect the PMS5003T’s temperature measurements.

All that said, again, absolute humidity has the nice property of being largely temperature-invariant. Yes, a sensor in direct sunlight will skew the temperature measurements upwards and the relative humidity measurements downwards, but in principle the absolute humidity of the environment remains roughly the same.
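
To put illustrative numbers on that (assumed values, not measurements): take outside air at 0°C and 100% RH and let the enclosure warm it by 10°C:

// Magnus saturation vapor pressure over water, hPa (my choice of
// approximation, for illustration only)
const es = (t) => 6.112 * Math.exp((17.62 * t) / (243.12 + t));

// Outside: 0 °C at 100 % RH → vapor pressure ≈ 6.11 hPa, ~3.8 g/kg.
// The same parcel warmed to 10 °C inside the enclosure:
const rhInside = 100 * es(0) / es(10);   // ≈ 49.8 % RH
// The RH reading nearly halves, yet the g/kg hasn't changed at all.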

If the raw temperature and humidity measurements from the AG weren’t the nonsense they are, it would in principle be straightforward to get a correct RH: first obtain the absolute (specific) humidity from the raw readings, then convert it back to relative humidity at whatever the corrected temperature is. What really surprises me is that this doesn’t work. It makes me wonder what exactly causes the raw measurements to be so far off in the outdoor units when they appear to do fine in the indoor ones. Is it air re-circulation?
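
For completeness, this is what that round trip would look like, reusing the Lowe & Ficke fits from my snippet above (the function names and the default pressure are mine):

function satPressure(t) {
  // saturation pressure in hPa; same polynomial fits as in my earlier post
  const pwat = 6.107799961 + t * (4.436518521e-1 + t * (1.428945805e-2 + t * (2.650648471e-4 + t * (3.031240396e-6 + t * (2.034080948e-8 + t * 6.136820929e-11)))));
  const pice = 6.109177956 + t * (5.034698970e-1 + t * (1.886013408e-2 + t * (4.176223716e-4 + t * (5.824720280e-6 + t * (4.838803174e-8 + t * 1.838826904e-10)))));
  return Math.min(pwat, pice);
}

// specific humidity q (kg/kg) → RH% at temperature t (°C), pressure p (hPa)
function rhFromSpecificHumidity(q, t, p = 1013.25) {
  const MH2O = 18.01534, Mdry = 28.9644;
  // invert the mass fraction back to a volume mixing ratio
  const vmr = (q / MH2O) / (q / MH2O + (1 - q) / Mdry);
  return 100 * (vmr * p) / satPressure(t);
}

// e.g. compute q from the raw PMS5003T temperature/RH, then re-express
// it at the corrected temperature: rhFromSpecificHumidity(qRaw, tCorrected)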
