Tuesday, February 09, 2016

Nigel Persaud Dons His Eyeshade and Audits the Auditor


Some time ago Nigel Persaud took up the trade of auditor and inquired about this and that.  Somebunny known hereabouts took up the challenge, only to find on careful examination that most of the inquiries were, shall Eli say it, perhaps about nothing at all, but that there were a couple of lacunae, things missing.  They eventually were noted in the appropriate place.

On the scale of errors, there are blunders, there are errors, there is over-clever data selection, and there is ignorance.  There might be more, Eli will await word from Willard, but blunders occupy a special and deep circle of academic hell.

One of the auditors, Ross McKitrick, has an impressive case of the blunders.  Tim Lambert made a hobby of finding them.  There was, of course, the famous confusion of degrees with radians in McKitrick and Michaels 2004 (MM04), and much, much more.

A bunch of the lab mice, Rasmus Benestad, Dana Nuccitelli, Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, and John Cook, have taken MM04 under the microscope as an example of pretty much every category of error discussed in their recent paper.  They further respond to the McKitrick beast's wails of hurt in a recent RealClimate post.

There is one crucial point that McKitrick seems to have missed, which is that nearby temperature trends are related because the trend varies smoothly over space.

An important point made in (Benestad et al., 2015) was that a large portion of the data in the analysis of McKitrick and Michaels (2004) came from within the same country and involved common information for the economic statistics (GDP, etc). In technical terms, we say that there were dependencies within the sample of data points.
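To see why such dependencies matter, here is a minimal toy simulation (Eli's sketch, not the actual MM04 data): when stations within a country all share a common country-level value, the effective number of independent data points is far smaller than the raw station count, and naive standard errors are badly overconfident.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 20000
n_countries, per_country = 20, 10
n = n_countries * per_country  # 200 "stations"

# Each station = a shared country-level effect + independent station noise.
country_effect = rng.normal(0.0, 1.0, (n_sims, n_countries))
station_noise = rng.normal(0.0, 1.0, (n_sims, n))
x = np.repeat(country_effect, per_country, axis=1) + station_noise

# What an i.i.d. analysis assumes the variance of the sample mean is:
var_naive = x.var() / n
# What the variance of the sample mean actually is, across simulations:
var_actual = x.mean(axis=1).var()

print(f"naive variance of the mean : {var_naive:.4f}")
print(f"actual variance of the mean: {var_actual:.4f}")
print(f"effective sample size: about {x.var() / var_actual:.0f} of a nominal {n}")
```

With these toy numbers the actual variance of the mean is roughly five times the naive figure, which is to say that 200 dependent stations carry only about as much information as a few dozen independent ones.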
Bob Grumbine had pretty well nailed this over a decade ago after looking at the original version of MM04:
He was fooling around with correlating per capita income with the observed temperature changes. He concluded that the warming was a figment of climatologists' imaginations, as there was a correlation between money and warming. ‘Obviously’ this had to be due to wealth creating the warming in the dataset, rather than any climate change—his conclusion.
Along the way he:
1) selected a subset of temperature records
1a) without using a random method
1b) without paying attention to spatial distribution
1c) without ensuring that the records were far enough apart to be independent—ok, I shouldn’t say ‘he’ did it, because he didn’t. He blindly took a selection that his student made and which was—to my eyes—distributed quite peculiarly.
2) Treated the records as being independent (I know William knows this, but for some other folks: Surface temperature records are correlated across fairly substantial distances—a few hundred km. This is what makes paleoreconstructions possible, and what makes it possible to initialize global numerical weather prediction models with so few observations.)
3) Ignored that we do expect, and have reason to expect that the warming will be higher in higher latitudes
4) Ignored that the wealthy countries are at higher latitudes
Hence my calling it fooling around rather than work or study. He was, he said, submitting that pile of tripe* to a journal. *pile of tripe being my term, not his.
and
His main conclusion was regarding climate change—namely that there isn’t any. His secondary conclusion was that climate people studying climate data were idiots. Neither of those is a statement of economics, so my knowledge of economics is irrelevant (though, in matter of fact, it is far greater than his knowledge of climate; this says little, as his displayed level doesn’t challenge a bright jr. high student.).
Now this discussion of McKitrick and Michaels stirred a memory in Eli's rememberer, a comment that Steve Mosher had made when a follow-on paper to MM04 and MM07 was being featured by Judith Curry.
I downloaded his data. In his data package he has a spreadsheet named MMJGR07.csv.
This contains his input data of things like population, GDP etc.

In line 195 he has the following data:

Latitude = -42.5
Longitude = -7.5
Population in 1979 = 56.242
Population in 1989 = 57.358
Population in 1999 = 59.11
Land = 240940

In his code he performs the following calculation

SURFACE PROCESSES: % growth population, income, GDP & Coal use // land is in sq km, pop is in millions; scale popden to persons/km2 // gdp is in trillions; gdpden is in $millions/km2

generate p79 = 1000000*pop79/land
generate p99 = 1000000*pop99/land
So, at latitude -42.5, longitude -7.5 he has a 1979 population of 56 million people, a land area of 240940 sq km, and hence a population density in the middle of the ocean that is higher than 50% of the places on land. Weird.
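The arithmetic is easy to replay (a sketch using the numbers quoted above from line 195 of the spreadsheet):

```python
# Numbers quoted above from line 195 of MMJGR07.csv
lat, lon = -42.5, -7.5    # open South Atlantic, nowhere near land
pop79 = 56.242            # 1979 population, in millions
land = 240940             # land area, in sq km

# McKitrick's Stata line: generate p79 = 1000000*pop79/land
p79 = 1_000_000 * pop79 / land
print(f"implied 1979 population density: {p79:.1f} persons per sq km")
```

That works out to about 233 persons per sq km, i.e. the population and land area of the UK attached to a grid point in the ocean.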
A few others looked at the spreadsheet and saw that, well, in the words of another, McKitrick was spreading the population and GDP of France across a couple of small islands in the Pacific.

WebHubTelescope summed it up:
Whether it is getting radians and degrees mixed up, or doing elementary sanity checks on the data, this stuff isn’t that hard to verify for quality. Could it be that some people just don’t have the feel for the data? Or that they rely too much on blindly shoving numbers into stats packages? McKitrick’s paper has that sheen of mathematical formalism that can obscure the fact that he lacks some of the skill of a practical analyst. Beats me as to whether it is his real skill level, or that he is just sloppy.
As far as Eli can see this "event" was only discussed in one other place, a post by Jos Hagelaars at Marcel Crok's blog.

Today Eli went and downloaded the file.  Just a quick pass shows that 25 of the 469 stations lie south of -40.0 latitude; 4 of those are UK territories and are associated with the population and GDP of the UK, and the South Pacific data is dominated by French territories.  Oh yeah, the Faroes have the population of Denmark.
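The check Eli ran is easy to repeat once the spreadsheet is loaded. Here is a sketch with a toy stand-in table (the real column names in MMJGR07.csv may differ, and the rows here are illustrative, not the actual data):

```python
import pandas as pd

# Toy stand-in for MMJGR07.csv; column names and rows are illustrative.
df = pd.DataFrame({
    "lat":     [-42.5, -51.8, -37.1, -49.4, 62.0],
    "country": ["UK", "UK", "France", "France", "Denmark"],
    "pop79":   [56.242, 56.242, 53.5, 53.5, 5.1],   # millions
})

# Which stations sit south of -40, and whose population did they inherit?
south = df[df["lat"] < -40.0]
print(south.groupby("country").size())
print("stations south of -40:", len(south))
```

On the real file the same filter-and-group turns up the pattern described above: far-southern "stations" carrying the national statistics of the UK, France, and Denmark.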

Said file is available on request with a donation to the Ancient Bunny Fund. 

14 comments:

neverendingaudit said...

Judy's lede was ominous:

What do these three papers share in common? All were written by scientists well outside the fields of atmospheric and climate science.

https://judithcurry.com/2012/06/21/three-new-papers-on-interpreting-temperature-trends

PS: One day, I'll clean up my tags so that you would be able to refer to ZeVeryBest of the Auditor. Meanwhile, please rest assured that theories of error are tough to build. I suspect that classifying errors is an intractable problem.

BBD said...

All were written by scientists well outside the fields of atmospheric and climate science.

Sounds like Nic Lewis. So a timely stroll down memory lane, looking at the lessons of history. But surely only a coincidence.

caerbannog said...


2) Treated the records as being independent (I know William knows this, but for some other folks: Surface temperature records are correlated across fairly substantial distances—a few hundred km. This is what makes paleoreconstructions possible, and what makes it possible to initialize global numerical weather prediction models with so few observations.)


Correlated over long distances indeed!

This is a bit off-topic, but worth waving around -- here is what you get when you compute global-average temperatures from just 30 randomly-selected GHCN stations (all rural): https://drive.google.com/open?id=0B0pXYsr8qYS6Y3hyQ1ZnamxVMWM

30-rural-station raw and adjusted data results are shown in green and blue, respectively. The NASA "meteorological stations" results (computed from ~6,000 stations in the GHCN adjusted data-set) are shown in red.

The bottom plot in the image file above shows how many of the 30 randomly-selected stations actually reported data for any given year (note: if stations had missing months, they were pro-rated in the count -- i.e. a station with 6 months of data in a year was counted as "half a station" for that year).

Note that for most of the 1880-2015 time-period, fewer than 25 stations reported data. But that was still enough to "nail" the trend seen in the NASA results (albeit with more noise).

For the purposes of computing global-average surface warming, the GHCN data-set is *incredibly* oversampled and robust.

Victor Venema said...

There is a paper similar to MM04 by Jos de Laat. Did anyone ever review that one? After reading MM04, the devastating reviews and Gavin Schmidt's reply article, I no longer felt like reading another paper in this horror series. Should I?

caerbannog, when you take 30 random stations, many of them will be in Europe or the USA. Your result would be even stronger if you divided the Earth into about 30 areas and then took one random station from each.

Chase Stoudt said...

A new number is needed, perhaps the McKitrick radius of deformation?

caerbannog said...


caerbannog, when you take 30 random stations, many of them will be in Europe or the USA. Your result would be even stronger if you divided the Earth into about 30 areas and then took one random station from each.


Actually, I should say "randomly hand-picked while trying to achieve maximum global coverage" stations.

I put together an app that displays clickable station locations on a Google map display. The app computes global-average results based on the stations I click. With all the recent denier attacks on the NASA/NOAA global temperature work, I'm getting lots more mileage out of that app than I ever anticipated. ;)

To generate the results I posted above, I tried to distribute my random station clicks as uniformly as possible. I tried to avoid overweighting any particular region in my selection of stations.

I've done it a bunch of times, with different sets of "randomly-hand-picked" stations, and I've gotten similar results every time.

caerbannog said...


caerbannog, when you take 30 random stations, many of them will be in Europe or the USA. Your result would be even stronger if you divided the Earth into about 30 areas and then took one random station from each.


Just one more item -- before I figured out how to add the Google map graphical front end, I did generate results where I divided up the Earth into roughly equal grid-cells and selected the longest-record station in each grid-cell.

Got the same warming trend. I just can't figure out how "not" to get that darned warming trend, no matter what I do.

Cherry-picking has its limits when you go for complete global coverage. Go figure. ;)

EliRabett said...

Hey, as a service to blogdom, how about using the list of stations in MM04 and seeing what you get. There has always been the issue of how representative the cherry pick was.

http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/mckitrick-michaels-cr04.pdf

David B. Benson said...

Ethon! Calling Ethon...

caerbannog said...


Hey, as a service to blogdom, how about using the list of stations in MM04 and see what you get. There has always been the issue of how representative the cherry pick was...


Here are results from 40 stations (mostly from the beginning of the list, not well distributed spatially): https://drive.google.com/file/d/0B0pXYsr8qYS6ZFNCSE9LUm85cUk/view?usp=sharing

The warming signal emerged very quickly as I mouse-click-added stations.

The whole set would almost certainly produce a very close match to the NASA "meteorological stations" results.

If you want to keep a denier busy for a very long time, tell him/her to pick any 30 stations distributed around the world that when averaged together would produce a cooling trend (adjusted or raw data).

More fun than putting said denier in a round room and telling him/her that there's a $20 bill in the corner...

EliRabett said...

Has the Pielke beast emerged from his lair?

steven said...


ha,

Ya. I followed up a bit with Ross and he said the mistakes made no difference. I reminded him that this reply did not sit well with my auditing spirit... ahem.

In any case, the mistake was simple enough: some people not used to temperature records don't understand that the "country" field is not a geographic bit of information, but rather a legal bit, as in who owns this land. Basically I loaded his data into mapping programs and was shocked to see huge densities in places where I knew density was low (like Antarctica).

At some point I will try to redo Ross's approach. But regressing temperature against literacy has always struck me as, well... illiterate.

I could not get my mind around how that made any sense--dimensionally speaking.

When we met in Lisbon I asked Ross how many different regressors he tried.
Incredibly, he said he picked the regressors he picked on the first go!
Who needs stepwise regression when you can guess the right regressors the first time!




Kevin O'Neill said...

Steven - you may appreciate Making Inferences about Tropics, Germs, and Crops by Dietrich Vollrath.

My favorite paragraph: "Repeat after me. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true. Failure to reject the null does not mean the null is true."

I'm not sure what he's trying to say there ...... :)

Victor Venema said...

caerbannog, if the points are spread over the globe, then I would like to make a post out of your comments at Ars Technica. I think it is illustrative that you do not need many stations for the long-term trend. Many stations are important to get short-term means and decadal variability right, but the longer the time scale, the larger the spatial correlations and the fewer stations you need.

I hope that is okay with you.