The new improved assessment, for the years 1979 to 2008, yields a trend of +0.155 °C per decade from the high-quality sites, +0.248 °C per decade for poorly sited locations, and +0.309 °C per decade after NOAA adjusts the data. However, and of course there is a however with anything Watts or his Svengali, Pielke Sr., touches, the press release is based on raw data, which, as everybunny knows, is not necessarily a hot thing to do, and this provides a very interesting answer to the question of why. Venema shows a table of results when NOAA adjusts the data:
Table. The mean temperature trends from Figure 17 of Watts et al. (2012) in °C per decade.
| | Class 1/2 | Class 3/4/5 | Class 3 | Class 4 | Class 5 |
|---|---|---|---|---|---|
| Urban | 0.302 | 0.294 | 0.318 | 0.299 | 0.218 |
| Semi-urban | 0.341 | 0.311 | 0.327 | 0.325 | 0.249 |
| Rural | 0.314 | 0.321 | 0.327 | 0.316 | 0.319 |
and notes that the successful application of homogenization confirms, rather than falsifies, the NOAA adjustments, as did Tony's previous essay:
Also in Fall et al. this trend was about 0.3 °C per decade. Also in Fall et al. the trend in the raw data was 0.1 °C per decade smaller. Thus I cannot see this manuscript as unprecedented. Leroy (2012) will be happy that his new siting quality classification seems to work better, as judged by the larger difference in the trends between the categories. That seems to be the main novelty. This result is worth a paper; I am not sure if it is worth a press release.

In that, of course, Victor is wrong; today every paper is worth a press release and an NSF highlight to be posted. One can get a hint of what is happening by looking at the same Figure 17 from the paper that Venema does.
Eli has added some dotted lines and words to divide the figure into three parts: the mean trend on the left (he has emerged from your monitor in pursuit of carrots), the trend of the maximum temperatures in the middle, and the trend of the minimum temperatures on the right. The blue line shows the adjusted NOAA estimate. As many have pointed out, the use of unhomogenized data to calculate meaningful trends is problematical, and as many have pointed out in the few days since Sunday, the time of observation correction is among the most important, but the question remains as to WHY Tobs affects the rural stations more than the suburban or urban ones.
Update: Turns out that one of the Anonymice at Variable Variability (Venema's blog) had spotted this yesterday. Gazumphed! :)
Update: Zeke Hausfather provides a link to this poster, on which Ron Broberg is also an author; this post of his also discusses changes in the time of observation over the years.
With this sharpening of the issue, Eli went a-merrily googling. What is it, if anything, that makes rural stations more subject to Tobs bias than urban or suburban ones? Fortunately, Ari Jokimäki, or more precisely Thomas Karl, had already answered that question. In his AMS lecture (video here), Karl pointed out that
The time of the observation also causes a problem for the analysis. Early in the morning the temperature is usually lower than in the afternoon. If the observation time of a station changes, for example from morning to afternoon, it causes a warming bias in the data of the station in question. This has caused a false urban heat effect. There is practically no time of observation bias in urban-based stations, which have taken their measurements punctually, always at the same time, while in the rural stations the times of observation have changed. The change has usually happened from the afternoon to the morning. This causes a cooling bias in the data of the rural stations. Therefore one must correct for the time of observation bias before one tries to determine the effect of the urban heat island. Karl shows a comparison between urban and rural stations after the time of observation bias has been corrected, and there is hardly any difference when the situation of the USA is considered. In the global analysis the rural stations even seem to show slightly more warming than the urban stations. Stations are classified as urban or rural with the assistance of satellite measurements of the amount of light pollution in different areas. Some other information is also used, such as maps, population statistics, etc.

Update: Victor Venema has a short introduction to time of observation bias.
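The mechanism Karl describes is easy to demonstrate with a toy simulation (a sketch with invented numbers, not any station's actual data). A max thermometer that is read and reset once a day just after the afternoon peak can record the same hot afternoon on two successive observation days, inflating the mean of the recorded maxima; a morning reset, far from the peak, does not:

```python
import math
import random

random.seed(42)
N_DAYS = 3650
AMP = 6.0  # assumed diurnal half-range, degrees C

# Synthetic hourly temperatures: a random daily mean plus a
# diurnal cycle peaking at 15:00 (all parameters invented).
daily_mean = [15.0 + random.gauss(0.0, 4.0) for _ in range(N_DAYS)]
temps = [daily_mean[d] + AMP * math.cos(2 * math.pi * (h - 15) / 24)
         for d in range(N_DAYS) for h in range(24)]

def mean_recorded_max(obs_hour):
    """Max thermometer read and reset once a day at obs_hour:
    each recorded maximum covers the 24 hours ending at the observation."""
    maxima = []
    t = obs_hour
    while t + 24 <= len(temps):
        maxima.append(max(temps[t:t + 24]))
        t += 24
    return sum(maxima) / len(maxima)

morning = mean_recorded_max(7)     # reset near the daily minimum
afternoon = mean_recorded_max(17)  # reset just after the daily maximum

print(f"mean Tmax, 07:00 observations: {morning:.2f} C")
print(f"mean Tmax, 17:00 observations: {afternoon:.2f} C")
# A 17:00 window straddles the previous afternoon, so a hot day can be
# counted twice and the afternoon schedule reads warmer. A station that
# switches from afternoon to morning observations thus shows a spurious
# cooling step in the raw record.
```

Since (per Karl) the rural stations are the ones that actually switched from afternoon to morning readings, the spurious cooling lands preferentially in the rural raw data, which is exactly the pattern in Figure 17.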
Update: Victor Venema's comment from below
No Nobel Prize for Anthony Watts?

The take home, of course, beyond confirmation bias, is the same one that Eli discovered a long time ago when Tony, Monckton, Steve and the rest of the crew were all agog at the stamp collection of early CO2 measurements assembled by Ernst Beck:
An experienced colleague, knowledgeable about the US network, gave me the tip to look into the time of observation bias (TOB). Thus this may well explain much of the differences in the trends of the raw data.
If this is really an important effect, I do not see it as an excuse that Anthony Watts is not an academic insider. This is something one should check before publishing, and I would see this as a lack of rigor. That there is a TOB in the US network is no internal secret, but known from the literature; it was, for example, studied in Vose et al. (2003).
Thus we now have three reasons why the technical problems may cause a difference in the trends of the raw data:
1. Time of observation bias stronger in rural stations.
2. More problems due to the UHI in the bad stations.
3. Selection bias (bad/good stations at the end of the period may have been better/worse before)
Sounds like the first two problems can be solved by homogenization. And the third problem is only a problem for this study, but not for the global temperature trend.
Time for the Team Watts to start analyzing their data a bit more.
Russell S. Vose, Claude N. Williams Jr., Thomas C. Peterson, Thomas R. Karl, and David R. Easterling. An evaluation of the time of observation bias adjustment in the U.S. Historical Climatology Network. Geophys. Res. Lett., 30(20), 2046, doi:10.1029/2003GL018111, 2003.
What amateurs lack as a group is perspective, an understanding of how everything fits together, and a sense of proportion. Graduate training is designed to pass lore from advisors to students. You learn much about things that didn't work and therefore were never published [Hey Prof, I have a great idea!... Well actually, son, we did that back in '06 and wasted two years on it], whose papers to trust, and which to be suspicious of [Hey Prof, here's a great new paper!... Son, don't trust that clown]. In short, the kind of local knowledge that allows one to cut through the published literature thicket.
But this lack makes amateurs prone to get caught in the traps that entangled the professionals' grandfathers, and it can be difficult to disabuse them of their discoveries. Especially problematical are those who want science to validate preconceived political notions, and those willing to believe they are Einstein and the professionals are fools. Put these two types together and you get a witches' brew of ignorance and attitude.
Unfortunately climate science is as sugar to flies for those types.