Saturday, February 09, 2013

California dreaming of offsets

I attended a fascinating-to-me workshop* Tuesday about California using international offsets in the form of reduced emissions from deforestation and degradation (REDD) in Acre, Brazil, and Chiapas, Mexico.  The webinar's online; if you want, watch me babble a question in the morning session tying their work to our water district's 2020 climate neutrality goal (third video down, at the 02:40:15 mark).

The short version is that a relatively tiny fraction of California's effort to get to 1990 emission levels by 2020 would come through international forestry offsets, but even that tiny amount could be a billion dollars of financing, much larger than anything done to date and a potential kickstart to efforts in those two provinces and elsewhere.  This is truly new; the European cap-and-trade system doesn't do it.

The meeting was of a group that provides technical recommendations to California and the other states and provinces, so whether the recommendations will be followed is unclear.  One caution they gave: awarding offsets for actions that increase carbon storage on degraded and cleared land might create an incentive to log land so it can be "restored".

Much or most of the discussion focused on measurement as a key to ensuring the offsets are real additions to what would have happened anyway.  The scientists are very confident that they can measure forest carbon storage accurately, and not too expensively, via satellite and airborne lidar.  The tricky part, though, is measuring what would've happened in the absence of offsets.

Passing over the possibility of time machines travelling to alternative universes without offsets for comparison purposes, they instead proposed reference levels of forest losses based on previous ten-year historical averages, projected into the future with some modifications and safeguards (slightly reduced levels available as offsets, declining further over time).  Reductions of emissions in subsequent years compared to reference levels, after adjustments, are the available offsets.  I'm a little unclear on the timing, but I think the Californians buy the offsets first in anticipation that they'll work, and then the REDD program does its stuff and is verified.  I do know that the buyer is liable if the offsets don't work, and has to find carbon savings elsewhere in that case.
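To make the arithmetic concrete, here is a back-of-the-envelope sketch of how such a reference level might generate credits.  Everything in it, the numbers, the discount factor, and the decline rate alike, is hypothetical; it is a sketch of the general idea, not the actual proposal from the workshop.

```python
# Toy sketch of a REDD reference-level calculation.
# All numbers and the discount schedule are invented for illustration.

def reference_level(historical_emissions, discount=0.9, decline=0.05, years=5):
    """Project a reference level from a ten-year historical average,
    discounted and declining over time as a safeguard."""
    baseline = sum(historical_emissions) / len(historical_emissions)
    return [baseline * discount * (1 - decline) ** t for t in range(years)]

def credits(reference, observed):
    """Offsets are verified reductions below the reference level;
    overshooting the reference earns nothing (no negative credits)."""
    return [max(ref - obs, 0.0) for ref, obs in zip(reference, observed)]

# Ten years of historical deforestation emissions (MtCO2/yr, made up)
history = [12.0, 11.5, 13.0, 12.2, 11.8, 12.5, 12.1, 11.9, 12.4, 12.6]
ref = reference_level(history)
observed = [10.0, 9.5, 9.8, 9.0, 8.7]  # hypothetical post-program emissions
print(credits(ref, observed))
```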

The beauty of this is that it functions at the provincial level, so it's wide-scale (less leakage) and tracks provincial results instead of trying to measure every little project and assign carbon savings accordingly.  The controversy (or one of the controversies) is that the offset payments go to provincial governments, so how that money reaches the rural communities is an issue.  Safeguards for that will be discussed at a later meeting, and they have the concept of "nesting" project-level credits into the provincial system.

A lot is riding on this, both in terms of global carbon emissions and our global ecology.  There's some danger of course, but also some tremendous opportunity.


*Fascinating enough that I may be interested in this area as a career field, so I might have some bias.

Guides for the Perplexed

NCAR (the National Center for Atmospheric Research) has come up with an interesting variation.  Their Climate Data Guides provide expert evaluation of data series.  The graph below describes the change in the heat content of the oceans between 10 and 1500 m depth during the period that the Argo floats have been in operation.


The guides summarize strengths and weaknesses of the data set as well as expert assessments.

There is a little note at the bottom of the page:

Click here (log in required) if you are an expert on Ocean heat content for 10-1500m depth based on Argo and would like to contribute Expert User Guidance, Expert Developer Guidance, or Figures to be featured on this page. Corrections and comments are also welcome.

Eli, poor bunny, is banned, but so is Tamino:
We encourage constructive comments and interaction. Please remember these guidelines:  Users agree to UCAR's Terms of Use and Privacy Policy. For this site, the comments are visible without logging in, but only registered and logged in users may post comments and replies. You may post questions about the dataset, share your experiences using the dataset, provide links to relevant resources, or list your publications that use the dataset. All comments will be signed with your authentic name (no pseudonyms or anonymous comments) and affiliation associated with your user profile and will be moderated; therefore they will not be immediately posted. Questions will be answered as time permits, and we encourage dataset developers or other users to chime in with answers. If you have enough expertise to offer that merits a new page in the Climate Data Guide, please see our contributions page

Friday, February 08, 2013

Open for Comment

Steve Bloom points to "Comment on 'Polynomial cointegration tests of anthropogenic impact on global warming' by Beenstock et al. (2012): Some fallacies in econometric modelling of climate change" by D. F. Hendry and F. Pretis.

Now Eli has to admit a bit of shame about this one.  The bunny got tangled up in the mathturbation.  James, in his usual laconic way, was the closest:

Were I Judith Curry, I would probably be saying "wow" at this stage. Alternatively, it could just be some dross that has accidentally found its way into print after having been rejected at least twice at different journals.
The review comments are interesting, to say the least. Reviewer #2, in particular, seems awfully keen on a number of silly sceptic claims that have been presented in recent years.

I suppose it just goes to show that you can fool at least one person sometimes, and if that person happens to be a journal editor, you're in luck.
Those of you who remember Eli and Socrates going around on statistics know the answer:
 [Eli] In other words, if you have a good idea of the answer they can help you, but if not you need physics or biology or chemistry or meteorology.
Hendry and Pretis, smart bunnies, didn't look at the econometric analysis; they looked at the data set.  Why, you ask?  Well, anyone who studies blog scientists knows why:
In their analysis of temperature and greenhouse gases, Beenstock et al. (2012) present statistical tests that purport to show that those two variables have different integrability properties, and hence cannot be related. The physics of greenhouse gases are well understood, and date from insights in the late 19th century by Arrhenius (1896). He showed that atmospheric temperature change was proportional to the logarithmic change in CO2. Heat enters the Earth’s atmosphere as radiation from the sun, and is re-radiated from the warmed surface to the atmosphere, where greenhouse gases absorb some of that heat. This heat is re-radiated, so some radiation is directed back towards the Earth’s surface. Thus, greater concentrations of greenhouse gases increase the amount of absorption and hence re-radiation. To “establish” otherwise merely prompts the question “where are the errors in the Beenstock et al. analysis?”.
In other words, Beenstock et al. don't know anything about the system they are studying.  In particular, Hendry and Pretis point out that the data series used for greenhouse gas concentrations is not a single series but a compilation, and that the nature of the data changes at about 1960, from ice cores to atmospheric grab samples:
Interacting with unmodelled shifts, measurement errors can lead to false interpretations of the stationarity properties of data. In the presence of these different measurements and structural changes, a unit-root test on the entire sample could easily not reject the null hypothesis of I(2) even when the data are clearly I(1). Indeed, once we control for these changes, our results (see Tables 1 and 2 below) contradict the findings in Beenstock et al. (2012).


Once that is done, and one actually looks at the data, it becomes clear that there are two separate periods during which the properties of the correlation between temperature and forcing change, roughly divided at 1960, and that Beenstock's analysis depends on an incorrect pooling of the data.
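As a toy illustration of the pooling problem (Eli's own sketch, not the Hendry-Pretis procedure), consider a genuine I(1) random walk spliced together from two differently measured halves.  Testing the pooled series can give a different verdict on the order of integration than testing the segments separately:

```python
# Toy illustration of an unmodelled measurement change confusing a
# unit-root test; all numbers are invented for the sketch.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 200
walk = np.cumsum(rng.normal(size=n))  # a genuine I(1) random walk

# Pre-"1960" half: heavily smoothed, the way ice-core records are;
# post-"1960" half: direct measurements with independent noise.
spliced = walk.copy()
spliced[: n // 2] = np.convolve(walk, np.ones(15) / 15, mode="same")[: n // 2]
spliced[n // 2 :] += rng.normal(scale=0.3, size=n // 2)

def unit_root_in_differences(x, label):
    """ADF test on the first difference: failing to reject a unit root
    in the differences is what an I(2) verdict looks like."""
    pvalue = adfuller(np.diff(x))[1]
    print(f"{label}: p = {pvalue:.3f}")

unit_root_in_differences(spliced, "pooled spliced series")
unit_root_in_differences(spliced[: n // 2], "pre-break segment alone")
unit_root_in_differences(spliced[n // 2 :], "post-break segment alone")
```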

Econometrics is a hammer which econometricians apply to all objects, but it's ALWAYS the science that rules.

Thursday, February 07, 2013

The Cyanobacteria's Friend Publishes

Ray Pierrehumbert has a contribution at Slate which punctures the "Saudi America" balloon, the claim that US unconventional oil is without practical limit.  Turns out:

The market is not laying the foundations for an era of unending oil-based prosperity. The market is pushing inexorably toward investment in expensive technologies to extract the last drop of profit through faster depletion of a resource that's guaranteed to run out. If we're going to invest in expensive energy technologies, it would be better to pick long-term winners rather than guaranteed losers.

The flaws in the abundance narrative for fracked natural gas are much the same as for tight oil, so I won't belabor the point. Certainly, the current natural gas glut has played a welcome role in the reduced growth rate of U.S. carbon dioxide emissions, and the climate benefits of switching from coal to natural gas are abundantly clear. But gas, too, is in a Red Queen's race, and it can't be counted on to last out the next few decades, let alone the century of abundance predicted by some boosters. Temporarily cheap and abundant gas buys us some respite—which we should be using to put decarbonized energy systems in place. It will only do us good if we use this transitional period wisely.
There were a number of talks at EGU which made Ray look optimistic, but he also provides a link to another article, which discusses the attitude shift at AGU that friend McIntyre missed.  The title of the talk being discussed was Brad Werner's "Is Earth F**ked?"
Why shout out the blunt question on everyone’s mind? Werner explained at the outset of the presentation that it was inspired by friends who are depressed about the future of the planet. “Not so much depressed about all the good science that’s being done all over the world—a lot of it being presented here—about what the future holds,” he clarified, “but by the seeming inability to respond appropriately to it.”

That’s probably an apt description of legions of scientists who have labored for years only to see their findings met with shrugs—or worse.
and the answer, as Eli and others are pointing out:
Werner’s title nodded at a question running like an anxious murmur just beneath the surface of this and other presentations at the AGU conference: What is the responsibility of scientists, many of them funded by taxpayer dollars through institutions like the National Science Foundation, to tell us just exactly how f**ked we are? Should scientists be neutral arbiters who provide information but leave the fraught decision-making and cost-benefit analysis to economists and political actors? Or should they engage directly in the political process or even become advocates for policies implied by their scientific findings?
Many years ago, Eli recognized that the "Honest Broker" and "Proper Framing" were merely strategies to prevent any meaningful action.  The threat is real, obvious, and recognized by the people who study the science.
Scientists have been loath to answer such questions in unequivocal terms. Overstepping the perceived boundaries of prudence, objectivity, and statistical error bars can derail a promising career. But, in step with many of the planet's critical systems, that may be quickly changing. Lately more and more scientists seem shaken enough by what their measurements and computer models are telling them (and not just about climate change but also about the global nitrogen cycle, extinction rates, fisheries depletion, etc.) to speak out and endorse specific actions. The most prominent example is NASA climatologist James Hansen, who was so freaked out by his own data that he began agitating several years ago for legislation to rein in carbon emissions. His combination of rigorous research and vigorous advocacy is becoming, if not quite mainstream, somewhat less exotic. A commentary in Nature last month implored scientists to risk tenure and get arrested, if necessary, to promote the political solutions their research tells them are required. Climate researchers Kevin Anderson and Alice Bows recently made an impassioned call on their colleagues to do a better job of communicating the urgency of their findings and to no longer cede the making of policy prescriptions entirely to economists and politicians.  
It is not just the deniers, but the Kool Kidz, the churnalists and their friends that need to be called out.

Wednesday, February 06, 2013

On Priors, Bayesians and Frequentists

A dialog between a bunny and a philosopher in which questions of current concern are asked or not asked, and answered or not.  The philosopher will, until the philosopher wishes otherwise, remain anonymous.

[Eli]  So every once in a while, Eli gets serious and asks some questions.  In this case, about Bayesian statistics.  Andrew Gelman pointed out that

[Andrew Gelman] Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”.

[Eli]  Eli thought the proper designation of Bayesians was batshit crazy, but never mind.  The questions revolve around something lower down in that post, and frankly, in a vague attempt not to out his bunnyship as an idiot, Socrates, Eli thought you might be a reasonable spirit to ask.

[Socrates] Shoot.

[Eli] So here’s Andrew Gelman on Noah Smith:

[Andrew] Smith does get one thing wrong. He writes:

[Noah] When you have a bit of data, but not much, the Frequentist – at least, the classical type of hypothesis testing – basically just throws up its hands and says

[Frequentist] We don’t know.

[Noah] It provides no guidance one way or another as to how to proceed.

[Andrew] If only that were the case! Instead, hypothesis testing typically means that you do what’s necessary to get statistical significance, then you make a very strong claim that might make no sense at all. Statistically significant but stupid. Or, conversely, you slice the data up into little pieces so that no single piece is statistically significant, and then act as if the effect you’re studying is zero.

[Eli] Andy underlines another mistake by Noah, this time when he says:

[Noah] If I have a strong prior, and crappy data, in Bayesian I know exactly what to do; I stick with my priors. In Frequentist, nobody tells me what to do, but what I’ll probably do is weaken my prior based on the fact that I couldn’t find strong support for it.

[Andrew]  This isn’t quite right, for three reasons.

First, a Bayesian doesn’t need to stick with his or her priors, any more than any scientist needs to stick with his or her model. It’s fine—indeed, recommended—to abandon or alter a model that produces implications that don’t make sense (see my paper with Shalizi for a wordy discussion of this point).

Second, the parallelism between “prior” and “data” isn’t quite appropriate. You need a model to link your data to your parameters of interest. It’s a common (and unfortunate) practice in statistics to forget about this model, but of course it could be wrong too. Economists know about this, they do lots of specification checks.

Third, if you have weak data and your prior is informative, this does not imply that your prior should be weakened!

[Eli] Eli's take on all this is that starting with priors (from models/theories/other data sets) which are close to the data set under analysis will result in improved statistical estimates.  The (very old language here) surprisal, the difference between the prior and the posterior, will be small, and one may be able to use it to extract meaningful dynamics from under the statistical noise.

[Ms. Rabett, looking into Eli’s eyes] Very meaningful dynamics indeed.

[Eli, keeping his cool] However, if the prior is awful, the result may actually diverge from the underlying statistical information in the data set, so with Bayes you have to know the answer, or a good approximation to it, to make progress.  Or, as Gelman points out:

[Eli, using Andrew’s voice] If the prior is derived from previous work, the data set may be crap, in which case the use of the Bayesian statistics is to identify crap data.

[Eli] So how good is Eli's prior?

[Ms. Rabett]  And posterior, which I admire on occasion.
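[Eli]  For the bunnies who want arithmetic, a minimal conjugate normal-normal sketch, with entirely made-up numbers, of the point: a tight prior and weak data leave the posterior sitting next to the prior, so the surprisal is small only if the prior was good to begin with.

```python
# Conjugate normal-normal update, a toy sketch with invented numbers,
# showing how a tight prior dominates weak (noisy, small-n) data.
import numpy as np

def posterior(mu0, tau0, data, sigma):
    """Posterior mean and sd for a normal mean with known noise sd sigma,
    given a N(mu0, tau0**2) prior."""
    precision = 1 / tau0**2 + len(data) / sigma**2
    mean = (mu0 / tau0**2 + np.sum(data) / sigma**2) / precision
    return mean, np.sqrt(1 / precision)

rng = np.random.default_rng(0)
weak_data = rng.normal(loc=5.0, scale=4.0, size=3)  # crappy data, truth is 5

print(posterior(0.0, 0.5, weak_data, sigma=4.0))   # tight prior at 0 barely moves
print(posterior(0.0, 10.0, weak_data, sigma=4.0))  # vague prior follows the data
```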

[Socrates] Gelman's post is brilliant.  I like his blog.  I also like Mayo's.  Not to mention yours.  What are your priors, again?

[Eli] Eli has been brought up on charges by many.  It's more or less something we used a lot of years ago: taking the prior from theory and applying it to measured data, to see what the theory missed.  Still like that approach.

[Socrates]  Oh, that.  Well, yeah.  Some call this post hoc data mining.  Some call it experimentation.  I never understood the concept of post hoc.  Can we really check that econometricians are not peeking at their data before designing their models?

Perhaps Solomon would pronounce my judgement better than I could:

[Solomon] Make sure your statistical inference is minimal and all will be well.

[Eli] Not if the theory is done before the experiment.

[Socrates] Hmmm. Some say that if you choose your model after you analyze your data, quite nasty things will happen to the data and you must throw it out. Replace data with brains and you get zombie stories:

[Zombified econometrician]  Must... get... more... data.

[Eli]  Real science is messy; this is arguing for only doing things when you know the answer before you start.  Is statistics a tool or an end in itself?  If it is a tool, why let it run your life?

[Socrates]  Because auditors request it, perhaps.  

[Eli]  They seriously lack rhythm and sound like Hell’s version of karaoke.  All noise, no music.

[Socrates]  Pithy.  Let’s envision this myth of a Hell like Dante’s, but with four circles of accusations, which I’m tempted to characterize via D&D alignments:

[The Neutral] You're picking cherries with your post hoc method.

[The Chaotic] Your data is just a bunch of cherries anyway!

[The Lawful] You're not following a standard based on any official (e.g. statistical) authority.

[Socrates’ Avatar] You're not following your own standards.

[Socrates]  This sums up most of the econometric concerns, as far as I can see.  When valid, the last argument may be tough to dodge.  Since this is my avatar talking through the econometricians, I might be biased.

[Eli]  Well ok, you analyze the old data for your prior and then get new data.

The anti-Bayesian take on that is that if your new data is wildly different from your old, you've got a load of splainin' to do, cause either the prior or the later data is screwed up.

Or you could split the fifty co-authors into two groups, one that does the prior and the other that does the data gathering.

The equivalent would be to take the FAR as the prior for the SAR, etc.

[Socrates]  That could be a start, but how exactly do you find new very old proxies, Eli?  Historical data can be scarce.

[Eli]  The journals are full of them; it is an industry, with lots of folks out there digging up old logs, drilling new ones, inventing new tools of analysis, and more.

Good solutions to these problems depend on using the right prior distribution, one that properly represents the uncertainty that you probably have about which inputs are relevant, how smooth the function is, how much noise there is in the observations, etc.  In other words, you pretty much know the answer.

[Socrates] Easier said than done.  Let’s leave this aside. Since the last time Plato channeled me, Aristotle proved that providing evidence was more substantial.  I rather like this statement by Radford Neal in this presentation:

[Radford] The Bayesian approach takes modeling seriously. A Bayesian model includes a suitable prior distribution for model parameters. If the model/prior are chosen without regard for the actual situation, there is no justification for believing the results of Bayesian inference.

[Socrates] Just under it, there's also a note about the pragmatic compromises.  It's a rather neat intro, which even I can almost understand.  For better sound bites, there’s Cromwell’s rule:

[Dennis Lindley] Leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there, since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.

[Eli]  Eli will give the points on that one.  No one ever got poor betting with cranks against green cheese or the ether.

[Socrates] The name was inspired by Oliver Cromwell’s address to the Church of Scotland:

[Oliver Cromwell] I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

[Socrates] According to this rule, only logical impossibilities should have zero prior.  I believe this rule is in the spirit of your remark about proxies.
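[Socrates] In symbols, a zero prior is immune to any mountain of evidence.  A two-line sketch, with hypothetical numbers:

```python
# Cromwell's rule: Bayes' theorem can never resurrect a hypothesis
# assigned exactly zero prior probability.
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) for a binary hypothesis H given evidence E."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Astronauts return with cheese: evidence a million times likelier under H.
print(bayes_update(1e-6, 1.0, 1e-6))  # tiny prior: posterior jumps to ~0.5
print(bayes_update(0.0, 1.0, 1e-6))   # zero prior: posterior stays exactly 0
```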

[Eli]  I think my point is that Bayesian statistics only works if you have an intelligent prior.  If the prior work is of Dunning-Kruger quality, you are screwed.  You will know less after the analysis than before you started it.

[Socrates] More than that: you become affected by DK yourself, and you start to use the theorem to prove the existence of God.

I'll read Gelman's paper.  I feel I already did.  Oh, I just had this reminiscence of asking a non-Bayesian philosopher king why he was not Bayesian, and he said:

[Philosopher King]  Beats me.  I just ain’t.  Methinks this is like sexuality.  I liked the first three pages of Gelman.  I agree with his claim about philosophical bayesianism being crap. 

[Socrates]  I'm paraphrasing, even if it looks like Philosopher King’s talking.  Socratic dialogs are a rhetorical trick to have multiple lines of argument.

While I was making you believe that Philosopher King was talking, I searched the Internet (which Plato anticipated in his Phaedo) and found this video lecture, by Michael... Jordan.  Clicking on the titles of the slides makes them appear.

It’s a slam dunk.

[Eli] In other words, if you have a good idea of the answer they can help you, but if not you need physics or biology or chemistry or meteorology.

[Socrates] You always do, but as soon as you put any of that into the prior you have to face the Erinyes.

[Eli] You’ve not told me much, Socrates.  What’s your final answer?

[Socrates] Do I look like a truth machine to you?  Please confer to Yoda:

[Yoda] The Proper Statistics you must use, Eli.  Within it everything is.

[Eli] Eli is but an humble bunny, oh Yoda, how shall he know what to do if Socrates does not tell him.

[Socrates] Us oracles consult for carrots, silly Rabett.

Tuesday, February 05, 2013

The Dagger

It was pretty obvious there was one, and here it is out in plain view (well, buried in a footnote, which is plain view in science speak) in "Recursive fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation" by Stephan Lewandowsky, John Cook, Klaus Oberauer and Michael Hubble-Marriott:

5. The authors subsequently obtained a control sample via a professional survey firm in the U.S.: This representative sample of 1,000 respondents replicated the results involving conspiracist ideation reported by LOG12 (Lewandowsky et al., 2013).

Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2013). The role of conspiracist ideation and worldviews in predicting rejection of science. Manuscript submitted for publication.