Sunday, March 23, 2014

Who You Gonna Trust, Models or Data?

Paul Krugman makes a useful point at his blog:

It’s not the reliance on data; numbers can be good, and can even be revelatory. But data never tell a story on their own. They need to be viewed through the lens of some kind of model, and it’s very important to do your best to get a good model. And that usually means turning to experts in whatever field you’re addressing.
because, if nothing else, there are things about the data that they know that you do not.  Krugman goes on, but Eli would like to pause here and, as he did at the NYTimes, discuss how data is not always right.

Data without a good model is numerical drivel. Statistical analysis without a theoretical basis is simply too unrestrained and can be bent to any will. A major disaster of recent years has been the rise of freakonomics and "scientific forecasting" driven by "Other Hand for Hire Experts".

When data and theory disagree, the fault can lie with the data as easily as with the theory. The disagreement is a sign that both need work, but if the theory is working for a whole lot of other things, including conservation of energy as well as other data sets, start working on the data first.

This, of course, is what happened with the MSU data.  A couple of guys (Spencer and Christy) had a bright idea, but implementation was . . . difficult, and their first version met their secret (OK, now well known) desires. But climate scientists were suspicious because a) there were several data sets that disagreed, b) a lot of work had gone into thinking about problems with those data sets (Hi Tony, Roger Sr. neglected to tell you about that, didn't he), and c) there were theoretical models, including well established physics, that disagreed with the original MSU decline.  Of course, Spencer and Christy dug in to defend the decline, but eventually, and a couple of NRC panels later, the errors were found.  The same thing happened with the SORCE solar insolation measurements, and the back and forth was not collegial.

A friend of the blog put it plainly in an email: blind application of statistics, without understanding the underlying science, is dangerous.  In the black box, rising sea levels clearly increase global temperatures.
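To make the black-box point concrete, here is a minimal sketch (synthetic numbers, nothing from any real record) of how two series that merely share a trend will regress on each other beautifully:

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2014)

# Two synthetic series with no causal connection, both trending upward
sea_level = 1.8 * (years - 1900) + rng.normal(0, 5, years.size)        # mm
temperature = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)  # K

# The black box is delighted: correlation near 1, causation nowhere in sight
print(np.corrcoef(sea_level, temperature)[0, 1])

The shared trend does all the work; without a physical model there is nothing to stop a bunny from concluding that sea level drives temperature, or the reverse.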

A couple of days ago, Eli got into it with James and Carrick about whom to trust, the data or the model.  Summing up, the Rabett pointed out that flyspecking rates in spotty data over short periods and small areas is inherently futile. Mom Rabett taught Eli to avoid taking derivatives of noisy data.
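Mom Rabett's rule is easy to check. A minimal sketch (again synthetic data, standing in for any spotty short record):

import numpy as np

rng = np.random.default_rng(1)
t = np.arange(30.0)                         # thirty "years"
y = 0.02 * t + rng.normal(0, 0.1, t.size)   # 0.02/yr trend buried in noise

# Year-to-year differences: the noise swamps the signal
dy = np.diff(y)        # std ~ 0.14, seven times the 0.02/yr trend
print(dy.std())

# Trends over short windows swing wildly; only the long fit is stable
for window in (5, 10, 30):
    print(window, np.polyfit(t[:window], y[:window], 1)[0])

The five and ten year "rates" can come out with either sign; the thirty year fit lands near 0.02. That is the flyspecking problem in a few lines of numpy.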

It was kind of fun watching them get tangled up, because up at the top, in response to James going on about how Lewis might fit into the category of a reasonable match to the data, Eli had written:
EliRabett said...
What often also gets shoved under the rug is that some of the observations are chancy. Probably not so much with temperatures (except for coverage issues, see Cowtan and Way), but as a recent note by Trenberth et al. pointed out, precipitation records are in need of a major combing out and reconciliation.
with the reply
James Annan said...
Well, I think these days most people are aware of the temps issues and factor them into the comparison. At least, they should. Cowtan and Way is probably good enough that residual issues are unimportant. Precip and other things, I agree it's a bit more vague.
This lesson was jammed home by And Then, who put up a simple one-dimensional two-box (ocean and surface) model using known forcings for global temperature and a slab ocean.
The solid line is the model, the dashed line the data for the global temperature anomaly, or the dashed line is the data, the dotted line the model.  At this level of agreement it does not matter.  The pause is the . . . ???  What And Then, of course, needs to show is the envelope of outcomes when the forcings are varied within their uncertainties.  That is the envelope that any random walk "scientific forecast" has to be compared with, not an unrestrained statistical model, a point made, perhaps not as openly, by Rasmus Benestad many years ago at Real Climate.  Polite is nice, but there are times when in your face is needed.
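For bunnies who want to play along at home, here is a minimal sketch of the kind of two-box model And Then describes. To be clear, this is an illustration, not his code: the feedback, exchange, and heat-capacity numbers, and the smooth ramp forcing, are placeholders chosen only to be vaguely plausible.

import numpy as np

def two_box(forcing, lam=1.2, gamma=0.7, c_s=8.0, c_d=100.0, dt=1.0):
    # Surface/mixed-layer box exchanging heat with a deep-ocean slab:
    #   c_s dTs/dt = F - lam*Ts - gamma*(Ts - Td)
    #   c_d dTd/dt = gamma*(Ts - Td)
    # lam: feedback (W m-2 K-1), gamma: ocean heat uptake (W m-2 K-1),
    # c_s, c_d: heat capacities (W yr m-2 K-1), dt: timestep (yr)
    ts = td = 0.0
    out = []
    for f in forcing:
        ts += dt * (f - lam * ts - gamma * (ts - td)) / c_s
        td += dt * gamma * (ts - td) / c_d
        out.append(ts)
    return np.array(out)

years = np.arange(1880, 2014)
forcing = 2.3 * (years - years[0]) / (years[-1] - years[0])  # placeholder ramp to 2.3 W m-2

# The envelope: rerun with the forcing scaled within an assumed +/-20% uncertainty
rng = np.random.default_rng(42)
runs = np.array([two_box(forcing * rng.uniform(0.8, 1.2)) for _ in range(200)])
low, high = np.percentile(runs, [5, 95], axis=0)
print(round(low[-1], 2), round(high[-1], 2))   # 5-95% range of the final-year anomaly

Any random walk or unrestrained statistical "forecast" has to fall outside that 5-95% band before it can claim to beat the physics.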

21 comments:

And Then There's Physics said...

I also noticed Krugman's comment about data. It frustrates me no end when people claim some bit of data as some kind of fact that proves whatever it is they're trying to prove. As you quite rightly say, Data without a good model is numerical drivel.

What And Then, of course, needs to show is the envelope of outcomes when the forcings are varied within their uncertainties.
Yes, that would be the next step. Maybe I should try, but I may have to get some actual work done first :-)

Susan Anderson said...

While organizing and understanding data with model(s) is a useful tool, I'd go a step further and say those models are still only an approximation of the real world. Of course, this argument has been used to prop up the unmitigated bullshitty nonsense promoted by the nasty and deluded and their dupes, but the world continues to be the closest approximation to reality we know (maya notwithstanding).

Lotharsson said...

"...blind application of statistics, without understanding underlying science is dangerous."

Deltoid used to have a classic case of that kind of thing in the personage of Tim Curtin (passed away now but his classic threads live on), and every now and then someone else would try their hand at the same.

It is truly amazing what unphysical conclusions one can draw from a statistical model if one has a mind to ;-)

And Then There's Physics said...

Susan,
While organizing and understanding data with model(s) is a useful tool, I'd go a step further and say those models are still only an approximation of the real world.
I guess it's - in some sense - circular. Models are only approximations of the real world and require data so as to determine their validity. Data are measurements of the real world that require some kind of modelling so as to understand what these measurements are actually telling us about the real world.

You're right, though, that semantics about the validity of models has been used to prop up unmitigated bullshitty nonsense :-)

John Mashey said...

Yes.
1) Back when I got interested in this (not as far back as some bunnies), I gathered about a dozen books to read, including the IPCC at one end and Fred Singer's "Hot Talk, Cold Science" at the other. As any good skeptic might, I made a list of issues raised against the consensus and checked them out. Most didn't last very long, the last being the UAH vs. ground station discrepancy, where the possibilities were:
a) The ground station average was off, possibly via partial coverage.
b) The satellites were off.
c) Each was off for various reasons and reality was somewhere in between.

2) By 2005, the answer to that was pretty clear, thanks to RSS.

3) But in doing the PDF @ Fakery 2, I went through a decade of Heartland Environment and Climate News (not really fun), which always featured the UAH graphs (see pp. 108-109 for discussion). They usually wrote:

"Each month, Earth Track updates the global averaged satellite
measurements of the Earth’s temperature. These numbers are
important because they are real
—not projections, forecasts, or
guesses. Global satellite measurements are made from a series of
orbiting platforms that sense the average temperature in various
atmospheric layers."

Anonymous said...

It's a common misconception that crops up in a number of ways: the idea that gathering and analyzing the data is simple and straightforward, and that any naive graphing of raw data can disprove sophisticated and well-supported ideas.

I suspect this misunderstanding is what lies behind the claim that GHCN data is being "altered" to make the past look cooler every time a new revision to the data processing comes out. People think that the data simply are what they are, and that presenting them untouched is purely honest. Any adjustments must therefore be an attempt to retroactively hide the real information, rather than to deal with problems known to exist in it.

It's especially frustrating when someone trots out the old Feynman quote about experiment and theory. It has to be realized that experimental data are only as good as the quality of the measurement; they can't be assumed to be perfectly valid all the time, the first time.

-WheelsOC

a_ray_in_dilbert_space said...

On the door of a colleague:

Nobody believes the results of a model except the modeler; everybody believes data except for the guy who took the data.

John Mashey said...

Is this why people suspected that those faster-than-light neutrinos weren't? :-)

E. Swanson said...

I think that lots of folks who look to Spencer and Christy's UAH TLT don't understand that it's based on a model. Allow me to repeat what I posted at And Then There's Physics a couple of weeks ago, before it was buried under a blast of trollish gibberish:
---
To understand what Spencer and Christy did, one needs to read their old reports. This one gives their first presentation of the TLT:

Spencer, R.W., and J.R. Christy (1992), Precision and radiosonde validation of satellite gridpoint temperature anomalies, Part II: A tropospheric retrieval and trends during 1979-90, J. Climate, 5, 858-866.

The MSU instruments scanned cross-track with 11 scan positions, #6 being nadir. The TLT used the data from channel 2, combining some of the 11 positions with this equation:
T2LT = (T3 + T4 + T8 + T9) - 0.75*(T1 + T2 + T10 + T11)
Defining:
Oranges = (T3 + T4 + T8 + T9) / 4
Apples = (T1 + T2 + T10 + T11) / 4,
we see this result:
T2LT = 4*Oranges - 3*Apples
or, rearranging things a bit:
T2LT = Oranges + 3*(Oranges - Apples)

From this equation it's immediately apparent that the TLT uses the outer two scan positions on each side to "correct" the data from the next two positions in, and completely ignores the middle three scan positions.

My big question is the value of the weighting for this combination. Why is a value of 3 used (as in 3.000000) instead of 2.6 or 3.3 (or whatever)? How did they arrive at this value, and does it depend on the use of the US Standard Atmosphere? To my knowledge, they have not provided a published explanation. Then, how well does their fitting process work in other real-world situations where the lapse rate is different and there are clouds and moisture? Next, one wonders whether the scaling value should be different for different conditions, varying with both latitude and season. Since I can't read their minds, I have no way to answer my questions. This might make for an interesting study, if one had the time and funding.
---
After posting this, I (finally) looked at the MODTRAN web page, which allows one to compute atmospheric transmission under different conditions and also calculates the lapse rates for those conditions. My question remains: how well does the UAH TLT calculation work out under different lapse rates? Would that adjustment factor of 3.000 work for the tropics or the poles, and what happens in different seasons?
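A quick numeric check of the algebra above (a sketch only; the brightness temperatures are invented numbers, not real MSU data):

import numpy as np

# Invented channel-2 brightness temperatures for scan positions T1..T11
T = [250.1, 250.4, 252.0, 252.3, 253.0, 253.2, 253.1, 252.4, 252.1, 250.5, 250.2]
T1, T2, T3, T4, _, _, _, T8, T9, T10, T11 = T

t2lt = (T3 + T4 + T8 + T9) - 0.75 * (T1 + T2 + T10 + T11)
oranges = (T3 + T4 + T8 + T9) / 4
apples = (T1 + T2 + T10 + T11) / 4

assert np.isclose(t2lt, 4 * oranges - 3 * apples)
assert np.isclose(t2lt, oranges + 3 * (oranges - apples))
print(t2lt)  # the limb (Apples) views are subtracted to push the weighting function lower

The middle three positions (T5, T6, T7) never enter, exactly as the equation says.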

Anonymous said...

"the last being the UAH vs groundstation discrepancy"

Of course, that discrepancy continues.

The MSU lower troposphere indicates less warming than the surface.
And the MSU middle troposphere indicates less warming than the lower troposphere.

That is true for both UAH and RSS.

It's true for the radiosonde data as well.

That doesn't mean there's no global warming - there is.

But it does indicate that some of the energy that the models want to keep in the troposphere is leaking out.

Eunice

EliRabett said...

Eric, given that Prabhakara's nadir-scanning reconstruction was a lot better than S&C v1, that says that a great deal of baby is going out with the bathwater.

EliRabett said...

Eunice, Christy and Spencer's UAH Mid Troposphere has a lot of lower Stratosphere contamination, which is cooling.

Radiosonde data is realllllly shaky. Ask anyone who flies them.

And, oh yes, S&C trends are lower than RSS and Prabhakara. . .

Anonymous said...

"Eunice, Christy and Spencers UAH Mid Troposphere has a lot of lower Stratosphere contamination which is cooling."

I certainly don't have an unyielding faith in any of the data sets -or- models.

But as far as LS contamination goes, LS has had close to zero trend for the last 20 years, so on a relative basis the last 20 years should show no spurious effect. But there's another test of stratospheric influence: the volcanic eruptions. Were the stratospheric influence a big effect, one might expect the trop data to exhibit spikes similar to those that appeared in the strat data for the eruptions. They did not.

"Radiosonde data is realllllly shaky. Ask anyone who flies them."

Long ago, Eunice launched a few. Then again in the 1990s (yikes, that's still long ago) Eunice launched the dummy-proof Vaisalas (which I'm assuming have continued to improve).
No doubt, many errors pervade the records. Of course, Eunice took surface obs as well, and again, no doubt, many errors pervade those.

"And, oh yes, S&C trends are lower than RSS and Prabhakara. . ."

That's true for the middle troposphere trend, but not for the lower troposphere, where the RSS trend is the lower.

E. Swanson said...

Anonymous said:
"But as far as LS contamination, LS has close to zero trend for the last 20 years, so on a relative basis, the last 20 years should have no spurious effect. But there's another test of stratospheric influence - the volcanic eruptions. Were the stratospheric influence a big effect, one might expect it to exhibit similar spikes in the trop data as appeared in the strat data for the eruptions. They did not."

The volcanic influence on the troposphere would be of opposite sign to that in the stratosphere. One might thus expect the mid-tropospheric warming in the TMT measurements to be offset by the cooling component from the stratosphere. The problem of stratospheric influence on the TMT isn't new. For example, read:

Fu, Q., C.M. Johanson, S.G. Warren, and D.J. Seidel (2004), Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends, Nature, 429(6987), 55-58.

Or, for a recent discussion, see:

Santer, et al. (2012), "Identifying human influences on atmospheric temperature", PNAS, www.pnas.org/cgi/doi/10.1073/pnas.1210514109
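Schematically, the Fu et al. approach removes the stratospheric leakage with a linear combination of the two channels. A sketch with placeholder weights (the published coefficients vary by latitude band; use the paper's values, not these):

# Fu et al. (2004)-style correction, schematically: T_tropo ~ a*TMT + b*TLS
# a and b here are placeholders with a + b = 1, so a uniform warming is preserved
def fu_corrected_trend(tmt_trend, tls_trend, a=1.1, b=-0.1):
    return a * tmt_trend + b * tls_trend

# Illustrative numbers only: a raw TMT trend diluted by a cooling stratosphere
print(fu_corrected_trend(0.08, -0.30))   # K/decade; comes out larger than raw TMT

Because b is negative, stratospheric cooling that leaks into TMT gets added back to the troposphere instead of masquerading as reduced warming.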



Aaron said...

My model says that AGW is like a pack of junkyard dogs. The atmosphere is like a dog's tail. Glaciers are like a dog's ears. And the oceans are the rest of the dogs.

Certainly, you have to watch a dog's tail and ears, but what counts are the dog's teeth and claws.

Any model that does not account for the fact that the dogs have long sharp teeth is not worth much.

Our data on ocean heat content is sparse. We are not doing a good job of keeping track of where all junkyard dog's teeth are and what they are doing.

While folks argue over what a junkyard dog's tail and ears mean (e.g., details of satellite and station data), they are about to get bit. It does not matter which side of the argument you are on; when the pack of dogs starts biting, everyone is going to get bit.

My explanation of the "pause" is that the heat is going into warming the oceans and melting ice. I think we are at an equilibrium point where a lot of heat is going into warming and melting vast volumes of ice and permafrost.

Chris G said...

> Data without a good model is numerical drivel. Statistical analysis without a theoretical basis is simply too unrestrained and can be bent to any will. A major disaster of recent years has been the rise of freakonomics and "scientific forecasting" driven by "Other Hand for Hire Experts"

I'm reminded of John Tukey's observation: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."

Anonymous said...

Statements like:

"This lesson was jammed home by And Then who put up a simple one dimensional two box (ocean and surface) model using known forcings for global temperature and a slab ocean."

are made impenetrably obscure (for your newer readers) by referring to people by monikers like "And Then".

If on occasion you would provide a secret decoder ring, it would make reading your blog a significantly less frustrating experience. I am acting under the assumption that you would like to attract the occasional new reader.

EliRabett said...

Added some links, usually do. Eli

THE CLIMATE WARS said...

Kudos to John Mashey for wading through a decade of drivel. A better use of his time might have been to essay a descent into the inferno of a Heartland Conference, where it all can be heard in full cry.

stupid spam bot said...

Some are neutral; that is a good thing.

Well, more or less.

FIRST: WHERE is the energy, in ergs or in BTUs, in the Earth system?

What role have the 100 million cubic miles of dense water at 4°C?

Ergassion with low calories said...

Is the model for the temperature anomaly in the gaseous system a real indicator of the anomalies going on in the two or three or more miles of water below?

What is the deep ocean's inertia, its tendency to resist changes?

What kind of model can summarize all the data in the Earth system?

And the oceanic ridges have in past times made oceanic rises much higher than any ice melting.

What are the real measurements at the oceanic ridges, which are thousands of miles long and have thermal outputs that have changed, sometimes in a matter of a few centuries or less?