Saturday, March 22, 2014

Continuous plagiarism of James Annan needed

William sez people are slamming Roger Pielke Jr. without engaging his arguments. Okay, engaging them is pretty easy, since it's mostly the same old stuff that James Annan answered eight years ago:


This is something I've been meaning to blog about for some time. It comes up a lot in the context of the hurricane wars, over at RPJnr's blog. A recent comment of his provides a nice opening:

[Quotes RPjr lecturing on the null hypothesis tested via detection of a climate signal] 
There is, however, an entirely different but equally valid approach that could also be used from the outset, which is: what is our estimate of the magnitude of the effect? The critical distinction is that the null hypothesis has no particularly privileged position in this approach.

This distinction between detection and estimation is related to that between a frequentist and Bayesian approach to probability....The answers that these two approaches provide may be very different in any given situation, and neither is necessarily right or wrong a priori, but it is surely self-evident that the Bayesian approach is more relevant to decision-making. If we have any reasonable expectation that certain policies would have particular bad effects, it would be ridiculous to wait until such effects could be shown to have occurred at some arbitrary level of statistical significance (that's not a point specific to climate change, of course).

....It is trivial to create situations in which a currently undetectable effect can be reasonably estimated to be large, and the converse is equally possible - an easily detectable (statistically significant) influence may be wholly irrelevant in practical terms. I suspect that this forms a large part of the difference in presentation between various parties in the hurricane debate - the evidence may not yet rule out the null hypothesis of no effect, but some people estimate that AGW is likely to have a substantial effect (even if the ill-defined error bars on their estimate do not exclude zero). In principle, exactly the same evidence could support both of these conclusions, although I don't personally know enough about hurricanes to make a definitive statement in that particular case.

It is amusing to see Roger, very much at the sharp end of policy-relevant work, promoting the scientifically "pure" but practically less useful detection/frequentist approach rather than the more appropriate estimation/Bayesian angle. It's not surprising, although perhaps a little disappointing, that the IPCC explicitly endorses that view. But by placing the null hypothesis in a privileged position from which it can only be dislodged by a mountain of observational evidence, this approach provides a strong inbuilt bias for the status quo which cannot be justified on any rational decision-theoretic grounds.
(Emphasis added.)
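
To make James's point concrete, here's a toy sketch in Python (entirely made-up numbers, nothing taken from the hurricane literature or from Roger's papers): a short, noisy record where a significance test can't reject the null of no effect, while a simple Bayesian estimate from the very same data says the effect is probably substantial.

# Toy illustration of the detection-vs-estimation distinction; hypothetical numbers only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 2.0      # hypothetical effect size
noise_sd = 5.0         # large year-to-year noise
n_years = 15           # short record

y = true_effect + noise_sd * rng.standard_normal(n_years)

# Frequentist detection: test against the null hypothesis of zero effect.
t_stat, p_value = stats.ttest_1samp(y, popmean=0.0)
print(f"sample mean = {y.mean():.2f}, p = {p_value:.2f}")
# With this much noise and this few points, p often lands above 0.05:
# "no detectable effect", even though the underlying effect is sizeable.

# Bayesian estimation: Normal prior on the effect, Normal likelihood with known
# noise, so the posterior has a closed form (conjugate update).
prior_mean, prior_sd = 0.0, 3.0
like_prec = n_years / noise_sd**2            # precision of the sample mean
post_prec = 1.0 / prior_sd**2 + like_prec
post_mean = (prior_mean / prior_sd**2 + y.mean() * like_prec) / post_prec
post_sd = post_prec ** -0.5
print(f"estimated effect = {post_mean:.2f} +/- {post_sd:.2f}")
# The estimate can be large in practical terms even while its error bars
# still include zero; detection and estimation answer different questions.

Run it with different seeds and the "detected or not" answer flips back and forth, while the estimation question gets a usable answer either way.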

IMO this needs to be repeated every time RPjr repeats the same tired argument in a new format and a new paper. Or maybe in a shortened format - "who cares about detection, it's estimation that counts." Certainly when Roger says:
When you next hear someone tell you that worthy and useful efforts to mitigate climate change will lead to fewer natural disasters, remember these numbers and instead focus on what we can control.
You know he's being disingenuous and that everything he said before that about detection is irrelevant to whether disasters are reasons to do something to control climate change.

So William says "[RPjr's work] addresses the question 'is climate change going to cause disasters so expensive that we'd be better off not changing the climate *because of that*'?" Well, it depends. In his academic work it doesn't address that question at all; it's about detection. When he turns to a public venue, he uses the same stuff to make very questionable policy claims.

12 comments:

And Then There's Physics said...

What James Annan writes is very interesting. I'm not particularly expert at statistics, Bayesian in particular, but how some use the null hypothesis has been rather frustrating, especially those who use it to claim there is no trend rather than that they can't rule out no trend at some level of significance. Also, I would have thought that we could have used Bayesian inference to estimate whether or not the trends (or lack thereof) in certain types of events are consistent with what we would expect at this time.
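
[A rough sketch of the kind of exercise ATTP describes, with made-up annual counts and a deliberately crude grid posterior, not anyone's published analysis:]

# Hypothetical annual event counts, Poisson likelihood with a log-linear trend,
# posterior for the trend computed on a simple grid with a flat prior.
import numpy as np
from scipy import stats

counts = np.array([8, 11, 9, 12, 10, 13, 12, 15, 11, 14])   # made-up data
years = np.arange(len(counts), dtype=float)

trend_grid = np.linspace(-0.1, 0.2, 601)   # candidate changes in log-rate per year
log_post = np.empty_like(trend_grid)
for i, b in enumerate(trend_grid):
    # Baseline rate fixed at the observed mean, purely to keep the sketch short.
    rate = counts.mean() * np.exp(b * (years - years.mean()))
    log_post[i] = stats.poisson.logpmf(counts, rate).sum()

post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_b = np.sum(trend_grid * post)
sd_b = np.sqrt(np.sum(post * (trend_grid - mean_b) ** 2))
expected_trend = 0.02   # stand-in for "what we would expect at this time"
print(f"posterior trend = {mean_b:.3f} +/- {sd_b:.3f} per year")
print(f"P(trend > 0) = {post[trend_grid > 0].sum():.2f}")
print(f"expected trend {expected_trend} within 2 sd of the estimate: "
      f"{abs(expected_trend - mean_b) < 2 * sd_b}")

A real analysis would marginalize the baseline rate and use a physically motivated prior; the point is only that "is the observed trend consistent with what we expect?" gets a direct answer, rather than a yes/no about rejecting zero.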

Roger Jones said...

Nor can those arguments about exposure driving disaster response be falsified in the time required to make a sensible decision, so a Bayesian approach is the only practical analytic method. Performance thresholds based on a value premise (monetary, ethical) can also contribute to Bayesian reasoning.

RPJr and colleagues argued for exposure in Australian wildfires when we have clear attribution of some of the underlying climatic drivers. But noooo, it couldn't be both, in the world of the single cause fallacy.

Tom Dayton said...

The obsession with statistics continues to frustrate the pellets out of me. Statistics--frequentist or Bayesian--are merely *some* of the tools that are appropriate to use in making decisions. That is true in science and in every other field.

willard said...

Yes, but objective Bayesianism, might reply Nic.

The Old Man is back said...

Ah: old proverb: don't check the seismograph with your back to a tsunami.

Anonymous said...

Eli, ""who cares about detection, it's estimation that counts."

With over 95% of models shown to unequivocally wrong, what is it Dear Hare that gives you so much confidence that your estimates will have any credibility?

Steve Bloom said...

I don't know about Eli, but for me it's the mid-Pliocene, deer anon, with CO2 about the same as present and temps 2-3C higher than pre-industrial. Models (in this instance earth system models, not GCMs, since all feedbacks must be included) can more or less replicate it, but they can't manage the transition from current climate to a Pliocene-like one. Some deer may choose to take comfort in those headlights, but I for one do not.

Jim Pettit said...

Wrote the ever-courageous Anonymous:

"With over 95% of models shown to [be] unequivocally wrong, what is it Dear Hare that gives you so much confidence that your estimates will have any credibility?"

Please list the names of those models that have been "shown" to be "unequivocally wrong". Also, please link to the peer-reviewed article(s) that refuted them. For I've heard some disparage the models before, but I've yet to see them provide anything as backup beyond some tired old wonky "blog science". I hope you can do better.

Can you?

Susan Anderson said...

I'll put this in my "unsent angry letters" file.

http://www.nytimes.com/2014/03/23/opinion/sunday/the-lost-art-of-the-unsent-angry-letter.html

But seriously, Pielke Jr.'s dishonesty, I believe, begins with himself. It's tragic that self-deception is capable of doing so much harm.

Susan Anderson said...

For equal opportunity insults, this collection is quite revealing:

http://www.buzzfeed.com/climatebrad/pielked-111-ways-nate-silvers-climate-guy-attack-gt89

Anonymous said...

Snow Bunny points out:

Pielke's statistics and explanations seem at large variance with Munich Re's (the reinsurer's) data comparing weather and climate events to geophysical disasters.

https://www.munichre.com/touch/site/touchnaturalhazards/get/documents_E2138584162/mr/assetpool.shared/Documents/5_Touch/Natural%20Hazards/NatCatNews/2013-natural-catastrophe-year-in-review-en.pdf

Thomas Lee Elifritz said...

over 95% of models shown to unequivocally wrong

He heard it on the internet somewhere, from somebody who heard it from Roy Spencer on February 10th. How could Roy Spencer be wrong?