Paul Krugman makes a useful point at his blog:

It’s not the reliance on data; numbers can be good, and can even be revelatory. But data never tell a story on their own. They need to be viewed through the lens of some kind of model, and it’s very important to do your best to get a good model. And that usually means turning to experts in whatever field you’re addressing, because, if nothing else, there are things about the data that they know that you do not.

Krugman goes on, but Eli would like to pause here and, as he did at the NYTimes, discuss how data is not always right.
Data without a good model is numerical drivel. Statistical analysis without a theoretical basis is simply too unrestrained and can be bent to any will. A major disaster of recent years has been the rise of freakonomics and "scientific forecasting" driven by "Other Hand for Hire" experts.
When data and theory disagree, the fault can lie with the data as easily as with the theory. The disagreement is a sign that both need work, but if the theory is working for a whole lot of other stuff, including things like conservation of energy as well as other data sets, start working on the data first.
This, of course, is what happened with the MSU data. A couple of guys (Spencer and Christy) had a bright idea, but implementation was... difficult, and their first version met their secret (ok, now well known) desires. But climate scientists were suspicious because a) there were several data sets that disagreed, b) a lot of work had gone into thinking about problems with those data sets (Hi Tony, Roger Sr. neglected to tell you about that, didn't he?), and c) there were theoretical models, including well established physics, that disagreed with the original MSU decline. Of course, Spencer and Christy dug in to defend the decline, but eventually, a couple of NRC panels later, the errors were found. The same thing happened with the SORCE solar insolation measurements, and the back and forth was not collegial.
A friend of the blog put it plainly in an email: blind application of statistics without understanding the underlying science is dangerous. To the black box, rising sea levels clearly increase global temperatures.
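The black-box trap is easy to demonstrate. Here is a minimal sketch (illustrative, not from the post): two independent random walks, which by construction have no causal connection, routinely show large correlations, while uncorrelated white noise of the same length almost never does. Trending series like sea level and temperature behave like the random walks, which is exactly why a regression with no physics behind it "finds" whatever you want.

```python
# Sketch: spurious correlation between independent trending series.
# All numbers here are illustrative assumptions, not data from the post.
import numpy as np

rng = np.random.default_rng(42)
n, trials = 200, 500

def frac_high_corr(make_series, threshold=0.5):
    """Fraction of trials in which two independently generated series
    of length n have |correlation| above the threshold."""
    hits = 0
    for _ in range(trials):
        x, y = make_series(), make_series()
        if abs(np.corrcoef(x, y)[0, 1]) > threshold:
            hits += 1
    return hits / trials

# Random walks (integrated noise) mimic trending geophysical records.
frac_walk = frac_high_corr(lambda: np.cumsum(rng.normal(size=n)))
# White noise of the same length, for comparison.
frac_noise = frac_high_corr(lambda: rng.normal(size=n))

print(frac_walk, frac_noise)
```

A substantial fraction of the random-walk pairs correlate strongly despite being independent, while the white-noise pairs essentially never do. The statistics are identical in both cases; only the underlying structure of the series differs, and only the science can tell you which case you are in.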
A couple of days ago, Eli got into it with James and Carrick about who you trust, the data or the model. Summing up, the Rabett pointed out that flyspecking rates in spotty data over short periods and small areas is inherently futile. Mom Rabett taught Eli to avoid taking derivatives of noisy data.
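Mom Rabett's rule is easy to check numerically. A minimal sketch (illustrative numbers, not data from the post): point-to-point differencing of a noisy record amplifies the noise until the local "rates" are useless, while a fit over the whole record recovers the underlying trend without difficulty.

```python
# Sketch: why taking derivatives of noisy data is a bad idea.
# The trend, noise level, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 101)     # step h = 0.1
trend = 0.5 * t                     # true rate: 0.5 per unit t
noisy = trend + rng.normal(scale=0.2, size=t.size)

# Naive local derivative: differencing divides the noise by the small
# step h, so its scatter is ~ sigma*sqrt(2)/h = 2.8, several times the
# true rate of 0.5.
rates = np.diff(noisy) / np.diff(t)

# A least-squares fit over the whole record, by contrast, averages the
# noise down and recovers the trend.
fit_slope = np.polyfit(t, noisy, 1)[0]

print(np.std(rates), fit_slope)
```

The local rates scatter over a range several times larger than the signal itself, while the full-record slope comes out close to the true 0.5. Short periods and small areas put you in the first regime, not the second.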
It was kind of fun watching them get tangled up, because up at the top, in response to James' going on about how Lewis might fit into the category of a reasonable match to the data, Eli had written:
EliRabett said...

What often also gets shoved under the rug is that some of the observations are chancy. Probably not so much with temperatures (except for coverage issues, see Cowtan and Way), but, as a recent note by Trenberth et al. pointed out, precipitation records are in need of a major combing out and reconciliation.

with the reply

James Annan said...

Well, I think these days most people are aware of the temps issues and factor them into the comparison. At least, they should. Cowtan and Way is probably good enough that residual issues are unimportant. Precip and other things, I agree it's a bit more vague.

This lesson was jammed home by And Then There's Physics, who put up a simple one-dimensional two-box (ocean and surface) model for global temperature, using known forcings and a slab ocean.
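A two-box model of the kind mentioned above can be sketched in a few lines. The structure below (surface box exchanging heat with a slab deep ocean, driven by a forcing) is standard, but every parameter value and the forcing ramp here are illustrative assumptions, not the numbers And Then There's Physics actually used.

```python
# Sketch of a one-dimensional two-box (surface + deep ocean) energy
# balance model. All parameter values are illustrative assumptions.
import numpy as np

def two_box(forcing, dt=1.0, C_s=8.0, C_d=100.0, lam=1.2, gamma=0.7):
    """Euler-step the surface (T_s) and deep-ocean (T_d) temperature
    anomalies (K) forward under a given forcing series (W m^-2).
    C_s, C_d: heat capacities (W yr m^-2 K^-1)
    lam:      climate feedback parameter (W m^-2 K^-1)
    gamma:    surface-deep heat exchange coefficient (W m^-2 K^-1)"""
    T_s = np.zeros(len(forcing) + 1)
    T_d = np.zeros(len(forcing) + 1)
    for i, F in enumerate(forcing):
        # Surface box: forcing minus radiative response minus ocean uptake.
        T_s[i + 1] = T_s[i] + dt * (F - lam * T_s[i]
                                    - gamma * (T_s[i] - T_d[i])) / C_s
        # Deep box: slowly warms from the surface-deep exchange.
        T_d[i + 1] = T_d[i] + dt * gamma * (T_s[i] - T_d[i]) / C_d
    return T_s, T_d

# A linear ramp to 3.7 W/m^2 (roughly 2xCO2) over 150 model years.
forcing = np.linspace(0.0, 3.7, 150)
T_s, T_d = two_box(forcing)
print(T_s[-1], T_d[-1])
```

Even this toy version shows the point of the exercise: the surface warms ahead of the deep ocean, and the transient warming stays below the equilibrium value (forcing divided by the feedback parameter), because the slab ocean is still taking up heat.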
As Rasmus Benestad showed many years ago at Real Climate, polite is nice, but there are times when in your face is needed.