Hi there,
Really interesting topic. I learned a lot from it! Before reading this, I thought it would be a good idea to place a second sensor in the shade and put a few solar-powered LEDs next to it. When the sun shines, the LEDs would light up and warm the sensor; when it becomes overcast, they would emit less light and heat. I thought this would avoid the residual warmth of the jar you use. But I guess a small jar or bulb may be a better solution. The graphs speak volumes, and for such a low-cost solution it seems to perform exceptionally well!
I do not understand why you should calibrate two sensors against a third, calibrated one if what you want to compare is the delta T between them. I would take the two sensors, put them together in exactly the same conditions, and note the difference. Best to use several points: -20 C (freezer), 0 C (ice water), and maybe both in the sun in the jar to get a reading at about 20-30 C.
This is because neither sensor will behave linearly, so the deviations will be different at each point and the delta T will differ as well. So in the winter you could adjust the zero-solar-radiation point if the difference between the two sensors at 0 C or -20 C turns out to be somewhat different from the difference at +20 C.
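To make the idea concrete, here is a minimal sketch (my own illustration, not from the article) of correcting one sensor against the other using the offsets noted at those few reference points, interpolating linearly between them since the deviation is not constant across the range. The offset values are made-up example numbers.

```python
# Offsets (sensor A reading minus sensor B reading) noted at reference
# points: freezer, ice water, jar in the sun. Example values only.
cal_points = [(-20.0, 0.4), (0.0, 0.1), (25.0, -0.3)]  # (temp in C, offset in C)

def correct(reading_b):
    """Correct a sensor-B reading by the offset interpolated
    piecewise-linearly between the calibration points."""
    pts = sorted(cal_points)
    # Below/above the calibrated range: reuse the nearest offset.
    if reading_b <= pts[0][0]:
        return reading_b + pts[0][1]
    if reading_b >= pts[-1][0]:
        return reading_b + pts[-1][1]
    # Inside the range: interpolate between the two bracketing points.
    for (t0, o0), (t1, o1) in zip(pts, pts[1:]):
        if t0 <= reading_b <= t1:
            frac = (reading_b - t0) / (t1 - t0)
            return reading_b + o0 + frac * (o1 - o0)

# Delta T is then (corrected B reading) minus (A reading), with the
# per-point deviation between the two sensors already taken out.
```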
Second point: the accuracy of Oregon stations. Coincidentally, I read a German test of weather stations, with the OS 928 sensors among them, and it noted exactly the same deviations. This was tested in a lab. They noted a maximum deviation of -0.5 C and +0.4 C for the outdoor sensor. But when the original sensor screen was used outside, the temperature difference reached 7 C on some occasions.
Maybe even better was the hygrometer, with a maximum deviation of -1% and +0.8%. That's really astonishing. Note that the WS 3600, for instance, did worse on temperature (-0.7 and +1.9 C) and, although second best in a test field of four stations, had deviations of -10.2% and +1.7% in relative humidity! Finally, the barometer did excellently too: -0.7 hPa / +1.28 hPa as maximum deviations. Again much better than the others (the La Crosse 3600 had -9.6 / 0 hPa as maximum deviations, for instance).
But the weak points of the OS were the wind meter, with a maximum deviation of -2.2 m/s to 0 m/s, and the rain gauge, with deviations of -5.7 mm / 0 mm. The -5.7 mm is the real weak part, as OS measures only at 1 mm intervals, so a lot of rain will be lost. The range was 44-77 m in open field, depending on the sensor, which is also worse than other stations. I have to say that the range is a bit surprising to me. In another German test by other testers, the outcome was the same: the temperature, humidity, and barometer had the same precision, which is OK. Only in that test the range and reliability were rated as excellent. I guess that matches what I read on different forums, where the other stations were rated less reliable, with more sensor transmission failures. So maybe this depends on the unit used (?).
Another thing is that the OS sensors were the most powerful transmitters of all, at -21.6 dBA. Others were close; La Crosse was at -17.7 dBA and clearly worse. So maybe the receiver was to blame here.
When it comes to temperature, humidity, and barometer readings, you can't get anything better. Davis is not better on these points; I own a couple of Davis stations and have thermometers with a calibration certificate in the same radiation shield (Stevenson screen), and the deviations were also 0.5 K, sometimes a bit higher. Davis does a much better job when it comes to rain and wind, though. I read that the WMR-200 does a better job with wind than the 928; it seems to turn much more easily in low winds.
Does anybody know how accurate the UV meters from OS are? I never saw them tested…
I think the price/performance ratio of OS, although difficult to measure, is on par with Davis.