By now all the bunnies are familiar with "Mike's trick": Michael Mann realized that if you want to show global temperature anomaly changes and your proxies end in 1980, you show the trend into the late 1990s by using the more reliable instrumental records.
But Tom Karl has a pretty good trick too. Realizing that it was impossible to jump back into the wayback machine and improve the COOP stations in the US Historical Climatology Network (USHCN), he set up the US Climate Reference Network (USCRN).
Its primary goal is to provide future long-term homogeneous observations of temperature and precipitation that can be coupled to long-term historical observations for the detection and attribution of present and future climate change. Data from the USCRN will be used in operational climate monitoring activities and for placing current climate anomalies into an historical perspective. The USCRN will also provide the United States with a reference network that meets the requirements of the Global Climate Observing System (GCOS). If fully implemented, the network will consist of about 110 stations nationwide. Implementation of the USCRN is contingent on the availability of funding.

As Eli put it, in other words, here is a sensible way of checking the accuracy of older climate networks in the past and calibrating them in the future. Pictures of stations in the network can be found on the web site. The stations are designed to be optimal with respect to location, instrumentation and operation. The USCRN design is paired, that is, the USCRN stations are sited near USHCN stations so that the results of the two networks can be compared. Indeed, the Menne paper (excellent discussions here, here, here, and here, Class 5 discussions here, snit fit here) does exactly that, showing not only that the temperature anomalies from the best and worst USHCN stations are identical, but that they also overlay those of the USCRN. Coupled with the excellent agreement over a now thirty-year period between the various MSU and surface station temperature anomalies (NOAA, GISSTemp, HadCRUT), sensible people understand that properly constructed global temperature anomalies such as GISSTemp, HadCRUT, RSS, and UAH are yielding an accurate and precise picture of global climate change. That Karl's Trick (TM - ER) is working is the real take-home from the Menne paper.
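For bunnies who want to see what that sort of comparison looks like in practice, here is a minimal sketch of a Menne-style check: compute each station's anomalies against its own baseline, average within the well-sited and poorly-sited classes, and see whether the two series diverge. The file name, column names, and class groupings here are hypothetical, not the paper's actual pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per station-month with columns
# station_id, siting_class (CRN 1-5), year, month, tavg (deg C)
df = pd.read_csv("ushcn_monthly_means.csv")

BASE = (1971, 2000)  # common baseline period for the anomalies

def station_anomalies(g):
    # each month's anomaly relative to that station's own baseline climatology
    base = g[(g.year >= BASE[0]) & (g.year <= BASE[1])]
    clim = base.groupby("month").tavg.mean()
    return g.assign(anom=g.tavg - g.month.map(clim))

anoms = df.groupby("station_id", group_keys=False).apply(station_anomalies)

# average the anomalies within each siting class (1-2 "good", 4-5 "poor")
good = anoms[anoms.siting_class <= 2].groupby(["year", "month"]).anom.mean()
poor = anoms[anoms.siting_class >= 4].groupby(["year", "month"]).anom.mean()

# if siting mattered much for the anomalies, these two series would diverge
diff = (good - poor).dropna()
print("correlation:", good.corr(poor))
print("trend difference (deg C/decade):",
      np.polyfit(np.arange(len(diff)), diff.values, 1)[0] * 120)
```

The point of working in anomalies is that a constant local bias (a parking lot, an air conditioner) drops out when each station is compared to its own climatology.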
In 2008 Friend Atmoz had pretty well shown that Pielke Sr. and Watts were on a snipe hunt with their surface station picture show. Atmoz first looked in detail at two of the best USHCN stations in Minnesota, finding that the correlation between stations ~220 km apart was greater than 0.9 using the raw data from the USHCN archive. One of these was in a rural location, in a field, and the other at an airport (the Watts boys are moaning about airports not being real good stations, but Atmoz has preemptively shown that this is just shinola). Furthermore, Atmoz showed that the correlation between what Watts called an awful station, Detroit Lakes, and the good stations was also very high (> 0.8).
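That check is easy for anyone to redo. A minimal sketch, assuming you have two monthly-mean series in hand (the file names and column layout below are made up for illustration):

```python
import pandas as pd

# Hypothetical inputs: one file per station, columns year, month, tavg (deg C)
rural = pd.read_csv("mn_rural_station.csv")
airport = pd.read_csv("mn_airport_station.csv")

def annual_means(df):
    # keep only years with all twelve months reported
    counts = df.groupby("year").tavg.count()
    full_years = counts[counts == 12].index
    return df[df.year.isin(full_years)].groupby("year").tavg.mean()

pair = pd.concat([annual_means(rural), annual_means(airport)],
                 axis=1, keys=["rural", "airport"]).dropna()

# Atmoz's point: even ~220 km apart, the raw annual means track each other closely
print("correlation:", pair.rural.corr(pair.airport))
```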
Atmoz's conclusions were:
- This area of the USHCN is over-sampled [ER- holds for all of the US except maybe Alaska]
- CRN ratings as applied by surfacestations.org do not contribute a great deal to yearly average temperatures
- The urban heat island (UHI) may not have a large effect in this region [ER-Note this refers to the anomalies]
- Local heat sources, such as airplanes and air-conditioners, have only a small influence upon the temperature record [ER - at least as far as the anomalies]
This and the graph from Menne at the top show that Karl's trick is working. Although we only have seven to eight years of the CRN, that is enough to show that neighboring USHCN and CRN stations measure the same high-frequency variations in temperature anomalies, and it is unlikely that the long-term trends will differ. It is also a clear validation of GISSTemp's assumption that measurements at locations a considerable distance from each other are strongly correlated, and that one can make use of that correlation to estimate temperature anomalies at locations which are not directly measured.
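For the curious, here is a toy sketch of that interpolation idea, in the spirit of the GISSTemp scheme in which a station's weight falls off linearly to zero at 1200 km from the point being estimated. The station coordinates and anomaly values below are invented, and this is only the distance-weighting step, not the full GISSTemp analysis.

```python
import numpy as np

R_EARTH = 6371.0   # km
CUTOFF = 1200.0    # km; GISSTemp-style radius of influence

def great_circle_km(lat1, lon1, lat2, lon2):
    # haversine distance between two points, in km
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def estimate_anomaly(target_lat, target_lon, stations):
    # stations: list of (lat, lon, anomaly); weight drops linearly to zero at CUTOFF
    num = den = 0.0
    for lat, lon, anom in stations:
        d = great_circle_km(target_lat, target_lon, lat, lon)
        w = max(0.0, 1.0 - d / CUTOFF)
        num += w * anom
        den += w
    return num / den if den > 0 else float("nan")

# made-up stations a few hundred km from a made-up target point (deg C anomalies)
stations = [(45.0, -93.3, 0.8), (46.9, -96.0, 0.7), (44.0, -92.5, 0.9)]
print(estimate_anomaly(45.5, -94.0, stations))
```

Because anomalies stay correlated over such long distances, a sparse but well-placed network like the CRN can do the job of a much denser one.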
But there is more, faithful readers; that is but the half of it. The strength of Karl's Trick (TM - ER) is that the CRN was carefully designed. Among other things, NOAA figured out that they only needed ~100 stations to adequately determine climate trends for the US. They were also careful to site CRN stations near USHCN stations, they over-instrumented them, and more.
Contrast this with the helter skelter Surface Stations nonsense. Today Watts moans that
Texas state Climatologist John Nielsen-Gammon suggested way back at 33% of the network surveyed that we had a statistically large enough sample to produce an analysis. I begged to differ then, at 43%, and yes even at 70% when I wrote my booklet “Is the US Surface Temperature Record Reliable?”, which contained no temperature analysis, only a census of stations by rating.

Even at 33% there were more than enough stations. So how do we interpret Watts' lament? Well, if it were Watts alone, maybe he should attend an experimental design seminar, but since Roger Pielke Sr. is pulling the strings, this can only be seen as an attempt to put off the evil day they are now confronting. Is this evidence of bad faith? Why yes.
The problem is known as the “low hanging fruit problem”. You see this project was done on an ad hoc basis, with no specific roadmap on which stations to acquire. This was necessitated by the social networking (blogging) Dr. Pielke and I employed early in the project to get volunteers. What we ended up getting was a lumpy and poorly spatially distributed dataset because early volunteers would get the stations closest to them, often near or within cities.
The urban stations were well represented in the early dataset, but the rural ones, where we believed the best siting existed, were poorly represented. So naturally, any sort of study early on even with a “significant sample size” would be biased towards urban stations. We also had a distribution problem within CONUS, with much of the great plains and upper midwest not being well represented. This is why I’ve been continuing to collect what some might consider an unusually large sample size, now at 87%.
Surveys can be corrected for population density if you know what the population density is, and area averages are easy to do. Over-representation of urban/suburban stations can thus be corrected for if one really wants to know the answer.
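A toy sketch of what such a correction looks like: weight each class of station by the share it should have in the network rather than the share that happened to get photographed. Every number below is made up purely for illustration.

```python
# Toy example: over-sampled urban stations re-weighted to their true share.
# All numbers here are invented for illustration, not survey results.

# mean trend (deg C/decade) found in each class of the surveyed stations
surveyed_trend = {"urban": 0.32, "suburban": 0.28, "rural": 0.25}

# fraction of surveyed stations in each class (urban over-represented)
surveyed_share = {"urban": 0.50, "suburban": 0.30, "rural": 0.20}

# fraction each class actually holds in the full network
true_share = {"urban": 0.20, "suburban": 0.30, "rural": 0.50}

naive = sum(surveyed_trend[c] * surveyed_share[c] for c in surveyed_trend)
weighted = sum(surveyed_trend[c] * true_share[c] for c in surveyed_trend)

print(f"naive mean trend:  {naive:.3f}")
print(f"re-weighted trend: {weighted:.3f}")
```

In other words, a lumpy sample is an inconvenience, not a show-stopper, which is why the 87% figure is beside the point.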
Oh yeah, can't resist pointing out that in 2007 Eli noted that
UPDATE: Following the crumbs left by the mice (Dano and Chuck) in the comments below, Eli observes that this is exactly what Roger's survey is designed to do with its bias, nay more than bias, prejudice for photographing sites close to people, e.g. in developed areas. Folk are going to take pictures of sites near them, so they are going to get a sample heavily tilted towards sites near them. It will be fun to correlate the locations of sites photographed with voting patterns. Of course we have the American speaking bias on top of that.

Comments?