Temperature accuracy

http://www.geo.uni.lodz.pl/~icuc5/text/P_6_5.pdf
http://www.paroscientific.com/met3temptest.htm
http://www.austehc.unimelb.edu.au/fam/1484.html
http://www.msc-smc.ec.gc.ca/education/imres/42_instruments/4_2_1_1_temperature_e.cfm

I have been disturbed by the inaccuracies of temperature measurements. The above documents mention some of the problems that Stevenson and other screens have with solar radiation. Less frequently mentioned are ground radiation and convection. My own experience demonstrates that wind speed is very important, as is the thermal inertia of the screen itself.

Some references mention errors of 10 deg C as having been observed, with 2-3 deg C being frequent for screens without forced ventilation and about half that with forced ventilation. In my own observations, I find a very calm, sunny day will give an error of up to 4 deg C in the early afternoon and -1 deg C at night. With a 4 m/s average wind, these errors more or less disappear.

I’m interested in developing a housing where these errors become negligible.

I would welcome comments.

There’s this one too…

http://www.davisnet.com/product_documents/weather/app_notes/apnote_24.pdf

After adding a fan to my WS-2310 and now having my Davis, I feel a fan is critical to good temp readings regardless of where the sensor is located.

Might I ask how you determined that your temps are off the mark? How is the reference thermometer being protected from the same radiation and convection problems you’re having?

Have you read the CWOP Guide? It’s a good reference that addresses most issues found with amateur weather stations.

Good point Dan.

The CWOP guide is another excellent link.

In my case, I use the local METAR, which is ~1.5 miles away. I also love to view Weather Underground’s Google maps…

http://www.wunderground.com/stationmaps/gmap.asp?zip=68101&magic=1&wmo=99999

(You just have to keep in mind that METARs report in whole Celsius values, which are then converted to Fahrenheit - so some temps never appear. As an example, you will never see a METAR with 80F. It’s either 79F or 81F.)
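
Just to illustrate the gap, here’s a quick sketch (it assumes the display software simply converts the whole-degree Celsius value and rounds to the nearest whole degree F, which is my reading of how it behaves):

```python
# Integer-Celsius METAR temps converted to Fahrenheit and rounded to whole
# degrees: note that 80 F never appears in the output.
for c in range(24, 30):
    f = c * 9 / 5 + 32
    print(f"{c} C -> {f:.1f} F -> shown as {round(f)} F")
# 26 C -> 78.8 F -> shown as 79 F
# 27 C -> 80.6 F -> shown as 81 F
```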

I would also say CWOP’s data quality tools can be another way to evaluate measurements, although I’ve found that you have to interpret their analysis.

OK, thanks, guys, for your thoughts. To answer your questions: my standard is a laboratory mercury-in-glass thermometer with a recent calibration certificate. On a heavily overcast day, the difference between it and the sensor is generally better than 0.3 deg C, which ain’t bad. As its bulb is reflective, I assume it is relatively free of radiation errors. On a bright, sunny, windless day, with the thermometer placed in the shadow of my radiation shield, there is a 4 deg C difference. If I poke the bulb through the louvres of the shield, I get essentially the same readings as the sensor, proving the shield is not 100% effective.

The shield is a stacked plantpot-saucer type in semi-glossy white-pigmented polyester co-polymer (I suspect from recycled polyethylene terephthalate). I have tried it with the bottom open and closed: the difference is difficult to quantify, but it is not enormous, probably less than 0.5 deg C. Unfortunately, I cannot place it anywhere but in a shadeless south-facing garden. As yet, I have not tried forced ventilation. I am considering placing an aluminium foil-covered sheet round half of it so that the whole shield is never in direct sunlight, even though it will restrict wind access; this could be overcome by forced ventilation. Even more tempting is the aluminium foil-covered horizontal tube described in one of the above references. This would have the advantage of being relatively immune to albedo errors, which are not negligible.

To get an idea of albedo errors, before I “permanently” installed the sensor, I did a number of unscientific, qualitative tests. I fixed it to a 1 m pole, with no radiation shield. Over several calm sunny days, after the sun went behind a nearby hill in the afternoon (~15:30-16:00), I stuck the pole in different substrates and allowed the readings to settle. My “standard” position was over pale sandstone crazy paving; I took a reading there about 30 minutes after the sun disappeared, moved the pole over a different substrate, took a reading of temp and humidity, and then moved it back again. I averaged the start and finish temps and RHs and assumed that, as the drop in temp was reasonably linear with time, this average represented the conditions at the standard position at the time of the measurement over the test substrate. I obtained the following results:

Over Bermuda grass, cut short: delta-T = -2 deg C, delta-RH = +8%
Over Kikuyu grass, cut to ~75 mm: delta-T = -3 deg C, delta-RH = +10%
Over cultivated soil: delta-T = 0 deg C, delta-RH = +9%
Over packed soil: delta-T = 0 deg C, delta-RH = +5%
Over red-dyed concrete: delta-T = -0.5 deg C, delta-RH = -5%

There was no irrigation or rainfall during this period. I’d guesstimate the margin of error to be T +/- 1 deg C, RH +/- 5%. Note these figures are always the delta-T and delta-RH, so the absolute values are probably not material. I concluded from this that it is an error to place a hygrometer over grass or soil and a thermometer over a hard surface! As the sensor combines temperature and humidity in one unit, I had to compromise, so I chose the red concrete, which made for a more convenient installation.
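
In case anyone wants to repeat the exercise, here is a minimal sketch of the baseline correction I described; averaging the start and finish readings is just the mid-point case of this linear interpolation. The helper function and the numbers in it are illustrative placeholders, not my actual data.

```python
# Sketch of the baseline correction used for the albedo tests.
# Assumes the temperature/RH at the "standard" position drifts linearly
# between the readings taken before and after the test substrate.
# All numeric values below are illustrative placeholders.

def interpolate_baseline(t0, v0, t1, v1, t_test):
    """Linearly interpolate the standard-position reading at the test time."""
    return v0 + (v1 - v0) * (t_test - t0) / (t1 - t0)

# Standard position read at t = 0 and t = 20 minutes; test substrate at t = 10.
baseline_T  = interpolate_baseline(0, 24.0, 20, 23.0, 10)   # deg C
baseline_RH = interpolate_baseline(0, 55.0, 20, 58.0, 10)   # %

delta_T  = 22.5 - baseline_T     # test-substrate temp minus baseline
delta_RH = 63.0 - baseline_RH    # test-substrate RH minus baseline
print(f"delta T = {delta_T:+.1f} deg C, delta RH = {delta_RH:+.1f} %")
```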

This is my current thinking.

Hi Devil,
I have a Davis 1. When I first got it there was no fan, and I got temp readings that were too high. I purchased a fan from Davis and it does a very good job of keeping the readings normal. I think that with a fan you will get much better readings.
Chuck

I agree with Chuck; the aspirated shield is the best solution for most applications. We should all strive to have our data as accurate as possible, but at some point we should consider the limitations and tolerances of an inexpensive weather instrument.

I thought that was a bad assumption, so I tested it. I used an ASTM 0.1 deg F resolution mercury thermometer (see pic below), which I imagine is similar to the one you used. I placed it in front of a fan circulating room air and, after sufficient time for the reading to stabilize, turned on a 75 watt incandescent lamp approximately 1 foot away; the thermometer reading then stabilized 0.5 deg F higher. After the lamp was turned off, the reading returned to the original value.


I was thinking the same thing too… reflected heating will give you a high reading with a thermometer placed in the shade of the Stevenson screen… which is what a Stevenson screen does, stops that reflected (infra-red, short wave, you name it) heat :wink:

I didn’t say the mercury-in-glass thermometer was free of radiation errors; I said it was relatively free. In your experiment, an error of <0.3 deg C is reasonable for a powerful long-wave radiation source 30 cm away. Remember the colour temperature of an ordinary light bulb is about 3000 K, as opposed to 5500 K (shorter wavelengths) for direct sunlight, rising to about 8000 K for north light on a sunny day.

My experiment of placing the thermometer in the shadow of the screen would give even worse differences if radiation from the screen itself (very long wave) is significant. The fact that it was the non-illuminated side of the screen should mean that the difference would not be important.
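
To put rough numbers on those colour temperatures, here’s a quick Wien’s-law check (the source temperatures are just the nominal values quoted above, plus a ~300 K figure I’m assuming for the screen’s own re-radiation):

```python
# Wien's displacement law: peak emission wavelength = b / T.
b = 2.898e-3  # Wien's constant, m*K
sources = [
    ("incandescent lamp", 3000),
    ("direct sunlight", 5500),
    ("north light on a sunny day", 8000),
    ("screen itself (~ambient, assumed)", 300),
]
for name, T in sources:
    print(f"{name}: peak at ~{b / T * 1e9:.0f} nm")
```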

I know, I quoted your own words :?

In your experiment, an error of <0.3 deg C is reasonable for a powerful long-wave radiation source 30 cm away. Remember the colour temperature of an ordinary light bulb is about 3000 K, as opposed to 5500 K (shorter wavelengths) for direct sunlight, rising to about 8000 K for north light on a sunny day.

I haven’t run the math, but empirically I believe the heating effect of that lamp at that distance is significantly less than that of full sunlight. In your experiment you are claiming to measure 0.5 deg C differentials, so in that context 0.3 deg C would be significant.
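
For what it’s worth, a back-of-the-envelope estimate points the same way; the 90% radiated fraction and the isotropic assumption are mine, so treat it as rough:

```python
import math

# Rough irradiance comparison: 75 W lamp at ~0.3 m vs full sunlight.
# Assumes ~90% of the electrical power leaves as radiation, spread
# isotropically; real bulbs and fixtures are more directional.
P_radiated = 75 * 0.9                               # W (assumption)
r = 0.3                                             # m
lamp_irradiance = P_radiated / (4 * math.pi * r**2)
sun_irradiance = 1000                               # W/m^2, typical clear sky
print(f"lamp at 0.3 m: ~{lamp_irradiance:.0f} W/m^2")
print(f"full sun:      ~{sun_irradiance} W/m^2")
```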

My experiment of placing the thermometer in the shadow of the screen would give even worse differences if radiation from the screen itself (very long wave) is significant. The fact that it was the non-illuminated side of the screen should mean that the difference would not be important.

To have validity I think you need to add solar radiation to the measured parameters.

Yesterday I made another simple discovery about where solar radiation gets in. I peered into my shield and observed two means of radiation ingress:

  1. the plastic used, although heavily pigmented, is not opaque. As a guess (no measurement made), I would say that ~10% of the incident light is passing through. Of course, although it appears white to the eye, I have no idea of the spectral response in the IR; it could be quite transparent.

  2. some of the light striking the louvres is scattered upwards and is then reflected inwards by the next louvre up. This varies with time, as the angle of incidence changes with the position of the sun, and it is least important around solar midday.

Today, I did something quite radical. I bought some 5 cm wide adhesive aluminium tape, used for wrapping the insulation on steam pipes. I stuck it round the outside of the plantpot saucers of the screen. A preliminary comparison, after giving it time to settle down, would indicate that the measured temp has dropped 3-4 deg C.

The weather today is substantially similar to yesterday’s: intermittently sunny and warmish. If you look at the attached 48 h graph, you will see that yesterday the temp yo-yoed like mad in the afternoon, between 26 and 30 deg C, as the sun came and went. This morning, the temp rose similarly to yesterday but, after I applied the aluminium this afternoon, it dropped with much less yo-yoing. The indicated daytime temp now corresponds within a degree or so to my mercury-in-glass and my other outside electronic thermometer.

Peeking inside the screen shows a MUCH reduced light level, as the aluminium is opaque, whereas the saucers were not entirely.

My preliminary conclusion is very positive and I think I can recommend applying a specular coating to screens.


Looking, albeit briefly, at your graphs, the yo-yo effect seems only slightly reduced and also seems to coincide with wind speed / direction variation. The RH rise and fall seems to be the inverse of the temp.
I had similar, but not identical, readings until I aspirated my screen.
This, within the parameters of the instrumentation, stabilised both the temp and RH. Totally excluding IR frequencies is, IMHO, less important than inhibiting warm or moist air pockets, especially when there is zero or only a light breeze.
The fan is powered by a solar panel sited next to the screen, so the aspiration rate is roughly proportional to the insolation.

Yes, I agree that air movement also plays an important role, and I’ll put in a ventilator in due course. However, I’m still satisfied that a large part of the errors were due to radiation. Incidentally, you will notice that the wind was much stronger on the 21st than on the 22nd, so if air movement were more significant, then the indicated temp should have been lower on the former, not higher.

Of course RH is inversely temperature dependent. RH is defined as the amount of water vapour the air contains, expressed as a percentage of the amount it would contain at saturation at the same temperature. Only absolute humidity (usually expressed in g/m3) is temperature independent. What is more significant is that the dew point is substantially similar on both days, although that, too, is temperature dependent. The fact that the reference method of measuring humidity is the wet- and dry-bulb psychrometer shows that it is temperature-driven. Put it this way: if the absolute humidity of a mass of air is absolutely constant, the relative humidity will be a perfect inverse function of the temperature.
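
To put some illustrative numbers on that, using the common Magnus approximation for saturation vapour pressure (the constants are the usual textbook ones and the 10 g/m3 figure is just an example):

```python
import math

# With absolute humidity held constant, RH falls as temperature rises.
def e_sat(T_c):
    """Saturation vapour pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * T_c / (243.12 + T_c))

ah = 10.0  # absolute humidity fixed at 10 g/m^3 (example value)
for T in (15, 20, 25, 30):
    # RH that corresponds to this absolute humidity at temperature T
    rh = 100.0 * ah * (T + 273.15) / (216.7 * e_sat(T))
    print(f"{T} deg C -> RH ~ {rh:.0f} %")
```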

Put it this way: if the absolute humidity of a mass of air is absolutely constant, the relative humidity will be a perfect inverse function of the temperature.
and the dew point will mirror the temperature line

Agreed. In fact, automatic dew-point detectors are probably the most accurate way of determining humidity, albeit very expensive. They consist of a brightly polished, chromium-plated, thick copper plate welded to a Peltier junction, with a Pt100 resistance thermometer fixed to it. A collimated light beam is directed onto the plate at a 45 deg angle and a photodetector measures the reflected light intensity. A ramped DC is passed through the Peltier junction to cool the plate down and, as soon as the reflected light starts to diminish because of the formation of dew, the temperature is measured with an accuracy of better than 1/10th of a degree. The ramped DC is then reversed for a few seconds to evaporate the dew, the temperature is measured a second time when the reflected light is restored to full brilliance, and the two temps (usually about 0.2 deg apart) are averaged, thus eliminating hysteresis effects due to the dT/dt delays caused by the ramping. It also works with dew points below 0 deg C, but the accuracy is slightly poorer because the evaporation is delayed: more heat is required to melt the frost, which is otherwise too slow to sublimate; in this case, only the condensation temp is considered. About 10 measurements/hour are possible. This kind of instrument should be used in a draught-free environment.
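
Purely as an illustration of that cycle (not any particular manufacturer’s firmware), the control logic boils down to something like this; the hardware-access functions are hypothetical placeholders:

```python
# Schematic sketch of one chilled-mirror dew-point measurement cycle.
# read_reflectance, read_mirror_temp and set_peltier_current stand in for
# real hardware I/O and are purely hypothetical.

def measure_dew_point(read_reflectance, read_mirror_temp, set_peltier_current,
                      clear_level=0.95, step=0.01):
    """Return the average of the condensation and evaporation temperatures."""
    current = 0.0

    # Ramp the cooling current until dew dims the reflected beam.
    while read_reflectance() > clear_level:
        current += step
        set_peltier_current(current)          # cooling
    t_condensation = read_mirror_temp()

    # Reverse the current for a short while to evaporate the dew again.
    while read_reflectance() <= clear_level:
        set_peltier_current(-current)         # heating
    t_evaporation = read_mirror_temp()

    # Averaging the two cancels the lag (hysteresis) caused by the ramping.
    return (t_condensation + t_evaporation) / 2.0
```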

After reading your description, I can understand why amateur weather stations use a humidity sensor instead!

I used to use automatic dew-point meters in a previous job; they were made by Michell (if I remember correctly). The ones that we used were laboratory-standard instruments, used for measuring ultra-dry air, with dew points (or frost points) of around -40.

After a few days experience under varied weather conditions, I can qualify the sticky aluminium tape as a success. It is not perfect, but it has reduced errors by about 70%, give or take a little. I’m usually within 1 deg C of my references, in either direction.