The bunnies have been asking what's wrong with the rather strange roughly cylindrical brown object that Craig Loehle left lying on the sidewalk at Atmospheric Environment. The constant nattering has woken Eli from his retirement nap so he might as well have a go.
what's too far said he
where you are said she - ee cummings
This is a case where, when you see the trick, you want to go punch the referees, decapitate the editor, and shoot the author into space with a one way ticket. It starts with a paper by Hofmann, Butler and Tans in the same journal, which looked at how fast the CO2 mixing ratio has been growing since ~1800. They find a close correlation between population growth and the mixing ratio; indeed, in 1987 Newell and Marcus had suggested (in a note added in proof) that monitoring CO2 mixing ratio growth was a reasonable way of monitoring global population.
HBT show that the CO2 mixing ratio has been growing very close to exponentially since ~1960, when monitoring at Mauna Loa started, although going back further than that on the basis of ice cores shows slower growth in earlier times. They then try to tie everything together, suggesting:
Besides showing the insight gained by removing pre-industrial CO2 in explaining the curvature in the Mauna Loa CO2 record, and organizing the confusion on growth rates, does this new analysis of the CO2 record have any further use for the future? One possibility is to use it to watch for the expected and necessary break of the exponential nature of CO2 and its growth rate. Fig. 4 shows an exponential fit to the past 14 years of Mauna Loa anthropogenic CO2 data and the residual between the data and the exponential function. The residual has varied from zero by less than 1% over this time period. A sustained downward trend of the residual by more than 1% (in the absence of any known major volcanic event) would be a clear sign of a change in the nature of anthropogenic atmospheric carbon dioxide, and a possible gauge of progress in the inevitable need to limit atmospheric CO2. Similarly, a break in the close relation between anthropogenic CO2 and population would signal that a change in "business as usual" had occurred.

For anyone who needs a shorter HBT: they said watch for deviations from exponential growth on short to medium term intervals to spot changes in emissions. They did not predict future emissions, but they did discuss the entire record. Let Eli see what Loehle did, according to Loehle:
A paper by Hofmann et al. (2009, this journal) is critiqued. It is shown that their exponential model for characterizing CO2 trajectories for historical data is not estimated properly. An exponential model is properly estimated and is shown to fit over the entire 51 year period of available data. Further, the entire problem of estimating models for the CO2 historical data is shown to be ill-posed because alternate model forms fit the data equally well. To illustrate this point the past 51 years of CO2 data were analyzed using three different time-dependent models that capture the historical pattern of CO2 increase. All three fit with R2 > 0.98, are visually indistinguishable when overlaid, and match each other during the calibration period with R2 > 0.999. Projecting the models forward to 2100, the exponential model comes quite close to the Intergovernmental Panel on Climate Change (IPCC) best estimate of 836 ppmv. The other two models project values far below the IPCC low estimates. The problem of characterizing historical CO2 levels is thus indeterminate, because multiple models fit the data equally well but forecast very different future trajectories.

Watch the pea. The IPCC estimate is for A PARTICULAR EMISSION SCENARIO; it is not a projection of the last fifty years of measured CO2 mixing ratios, nor is it the only emission scenario. Fast work there, folks. Still, Eli is a trusting bunny, so let us go on. The three fitting forms and Loehle's values for the parameters are:
- the exponential, a + b exp(ct), with a = 259.4, b = 2.978 x 10^-13, and c = 0.01677
- the quadratic, a + bt + ct^2, with a = 45318.5, b = -46.7684, and c = 0.0121 (gives a really bad fit; a has to be adjusted to 45504 for a decent one)
- the saturated, c + a(1 - exp[b(t - d)])^2, with a = 616.694, b = -0.0067, c = 311, and d = 1945.7
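Bunnies who want to check the arithmetic can do so in a few lines of MATLAB. A minimal sketch (an illustration, not Loehle's code) that evaluates the three forms with the quoted parameters (using the adjusted quadratic a) and reproduces the Loehle column of the table below:

% Evaluate the three fitting forms with the parameter values quoted above.
expfn  = @(t) 259.4 + 2.978e-13*exp(0.01677*t);                 % exponential
quadfn = @(t) 45504 - 46.7684*t + 0.0121*t.^2;                  % quadratic, adjusted a
satfn  = @(t) 311 + 616.694*(1 - exp(-0.0067*(t - 1945.7))).^2; % saturated
years = [2000 2050 2100];
fprintf('year     exp    quad     sat\n');
fprintf('%4d  %6.1f  %6.1f  %6.1f\n', [years; expfn(years); quadfn(years); satfn(years)]);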
Eli made a somewhat fuller picture using Loehle's values (click on the graphs for better views).
The exponential model, although not perfect, matches the CO2 mixing ratio much better at times earlier than 1959, so, contrary to Loehle, the three models are not equally plausible. But there's more: Eli also did his own fit.
The saturating curve here is much closer to the data and to the exponential curve. Still, young bunnies, there is something junior-high-school-level wrong with what Craigie poo did. Flip to the continuation for the answer.
------------------------------------------------
Loehle extrapolated all of his fits from the year dot. Little ones can spot this in his value for b in the exponential fit, for a in the quadratic, and for d in the saturated one. If the time in the three fitting forms is replaced by [t - 1985], you get much, much improved saturated fits. The quadratic fit still sucks at early times (and late ones), but even it comes closer to the other two.
The extrapolations are shown below.
Year | Model | Loehle (year 0) | Eli (year 0) | Eli (t - 1985)
2000 | Exp | 370 | 369 | 369
2000 | Quad | 367 | 367 | 370
2000 | Saturated | 368 | 370 | 370
2050 | Exp | 513 | 513 | 510
2050 | Quad | 479 | 479 | 490
2050 | Saturated | 467 | 500 | 512
2100 | Exp | 846 | 846 | 819
2100 | Quad | 651 | 651 | 668
2100 | Saturated | 567 | 729 | 834
The referees and the editor missed the sleight of hand; they also did not read Hofmann et al. The graphs demonstrate that quadratic fits are not industrial grade for this problem, and the table and graphs show that Loehle's fits are fishy. Doesn't anyone check the programming? Where are the auditors when you need them?
Eli has placed an Excel spreadsheet at the memory hole for those who want to play around with the numbers.
Thanks Eli,
Up late working and saw your post when taking a break. Hope my post at Tamino's didn't come across as pushy; it just occurred to me at the time that you'd mentioned taking a look on an earlier thread.
Regarding Craig's analysis: wow... words fail me.
If I were a brave bunny I'd hop over to CA and ask Stephen McIntyre to do an official audit of Loehle's paper. But I'm a meek and scared lagomorph these days, and the acolytes at CA make my ears twitch, which is really annoying.
MapleLeaf
Nice curves there...
Why do I get a strange sense of deja vu in reading this? Seems I've seen that "fitting morphing into projections" recently somewhere...
Crazy Bill
It's still early here, but I don't understand. There are already t-basing constants in the saturated equation (d), in the quadratic one (a and b), and in the exponential one (b). In other words, if you can get a better fit by replacing t with t - 1985, then you can get exactly the same better fit by choosing different values of these constants. Can't you?
Thanks for that, Eli. Have you alerted the editor of the journal? I think someone ought to!
The quality of review has gone way down. Usually only authors and editors see reviews, but for dozens of easily accessible examples of crap reviewing, head over to the EGU discussion journals.
I plucked this example from the first paper in the first issue of this year's volume of BGD. Other examples are easy to find.
Reviewer 1
Sure, these are the "discussion papers", but the editor makes a decision based on the reviews here on whether the ms will transition over to the full journal.
I blame those European PhD systems where one must publish n papers that are then stapled together as a thesis. This encourages a lot of circle jerking.
Eli, forget my question. I got this working in MATLAB using lsqcurvefit, and replicated your findings... except for the 'saturated' equation.
I find that its solution is extremely poorly conditioned and sensitively dependent on getting the numerics right. For me, it converges -- very, very slowly -- to the quadratic solution, not, as Eli found, to the exponential one.
I suspect there may be more than one local optimum for this function, all very, very flat, and which one you get depends on your starting point and on whether you get there all the way or stop the iteration prematurely.
Nick, I think it is only the saturated equation that has a t shift, and your question is legitimate there. I suspect that the reason Eli got a different result here is more related to the above-mentioned fucked-up-ness of this function.
The sensitivity of the saturated model was my initial concern; if it closely matches an exponential over the calibration period, that suggests very little constraint on the saturation. It is possible that Loehle's model fit was heavily determined by the initial values of the parameters, which just by chance gave a low projection. My thought was to perform a Markov Chain Monte Carlo simulation and see what the posterior distributions look like. I suspect for the saturated model they are rather broad.
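A minimal sketch of the idea (the assumptions are illustrative, not a worked analysis: a Gaussian likelihood with sigma = 0.5 ppmv, flat priors, untuned random-walk steps; t is year minus 1985 and y the annual mean CO2):

% Random-walk Metropolis over the saturating model's parameters.
satfn  = @(x, t) x(3) + x(1)*(1 - exp(x(2)*(t - x(4)))).^2;
loglik = @(x) -0.5*sum((y - satfn(x, t)).^2)/0.5^2;  % sigma = 0.5 ppmv (assumed)
x    = [10 0.1 285 10];          % illustrative starting point
step = [1 0.005 1 1];            % untuned proposal widths
nsamp = 20000;
chain = zeros(nsamp, 4);
llx = loglik(x);
for i = 1:nsamp
    xp  = x + step.*randn(1,4);  % symmetric random-walk proposal
    llp = loglik(xp);
    if log(rand) < llp - llx     % Metropolis accept/reject
        x = xp; llx = llp;
    end
    chain(i,:) = x;
end
keep  = chain(nsamp/2+1:end,:);  % crude burn-in
f2100 = sort(arrayfun(@(i) satfn(keep(i,:), 2100-1985), 1:size(keep,1)));
n = numel(f2100);
fprintf('2100: median %.0f, 5-95%% range [%.0f, %.0f] ppmv\n', ...
        f2100(round(0.5*n)), f2100(round(0.05*n)), f2100(round(0.95*n)));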
ReplyDeleteDikran,
that's precisely it. You need three free parameters to fit the calibration period, and three parameters are sufficient for a smooth curve like this. The fourth parameter is a giveaway that there is a near-singularity somewhere.
Martin, you can slide the quadratic along the t-axis by changing the constant and linear coefficients, thusly:

a(t - t0)^2 + b(t - t0) + c = a t^2 + b' t + c', where
b' = b - 2 a t0
c' = c + a t0^2 - b t0

Similarly, you can slide the exponential along the t-axis by changing the b coefficient:

a + b e^(c(t - t0)) = a + b' e^(ct), where b' = b e^(-c t0)
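A quick numeric check of the quadratic identity, with made-up values (note that in floating point the right-hand side loses digits to cancellation, which is where the conditioning issue comes from):

% Verify the quadratic shift identity numerically (illustrative values).
a = 0.0121; b = -46.7684; c = 45504; t0 = 1985; t = 2000;
lhs = a*(t - t0)^2 + b*(t - t0) + c;
rhs = a*t^2 + (b - 2*a*t0)*t + (c + a*t0^2 - b*t0);
fprintf('lhs = %.6f, rhs = %.6f\n', lhs, rhs)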
Nick: yep, but (at least for the quadratic; not quite sure about the exponential) the conditioning of the problem becomes very different. Not so for the saturated form, where the shift goes straight into d only.
ReplyDeleteTo Eli: nice work!
To Nick Barnes: yes, you can choose the "reference point" for time (t0) to be anything you want, and adjusting the constants still gives the same fit.
BUT: when you use just "t" (i.e., t0 = 0) with all t values very near, say, 2000, then terms like t^2 are all near 4,000,000, and the small differences between t^2 values, which are what matter, get lost in the trailing significant digits. So, unless your computer tracks 30 or so significant digits, your calculation of the parameters can be highly inaccurate.
It's often a good idea, when curve-fitting, to transform "t" to "t - mean(t)" (in the lingo of R) so that the mean value of the predictor variable is zero.
To Loehle: A curve-fit is not a model, and the existing carbon-cycle models are not curve fits.
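A quick way to see the conditioning point (an illustrative sketch, nobody's actual code from the thread): compare the condition number of the quadratic design matrix with raw versus centred years.

t  = (1958:2008)';                 % raw years
tc = t - mean(t);                  % centred years
Araw = [ones(size(t))  t   t.^2];  % design matrix for a + b*t + c*t^2
Acen = [ones(size(tc)) tc  tc.^2];
cond(Araw)                         % enormous: the columns are nearly collinear
cond(Acen)                         % modest: the least-squares problem is well conditioned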
ReplyDeleteOK, so we're looking at the rounding effects. But the biggest difference is in the saturated case, where it's completely unclear to me either what the equation is supposed to represent (what is "saturated" about it) or how Loehle chose his parameters. I note that Loehle's curve is the other way around from Eli's (b values have opposite signs). If we constrain ourselves to negative b, what is the best fit?
Nick Barnes, I think the point is that there are many different values for the parameters of the saturated model that all give almost equally good fits to the data, but which give very different extrapolations. This happens because there is a flat minimum (or perhaps several flat minima) in the cost function used to fit the model.
ReplyDeleteI was also originally suspicious of using the data with the annual cycle left in, as the end-points may have an substantial effect on the extrapolation. Ideally if you are going to write a comment paper, you need to justify any changes you make in the method used in the original paper (and show that the results are robust to your choices).
My paper was about forecasting, not backcasting. In econometrics, a model forecasting cell phone sales need not backcast to be useful.
Not enough digits were carried in the publication to replicate the graphs. It is suggested that a high precision platform such as Mathematica be used with the following parameters: exponential {a = 259.3945817870858, b = 2.9782774195066923e-13, c = 0.016769296042440738}, quadratic {a = 45318.539465496644, b = -46.7684328499448, c = 0.0121468549016735}, and saturating {a = 616.9635436975683, b = 0.006760893860992654, c = 310.9394987790752, d = 1945.6473702994858}.
Craig Loehle
Well yeah, if b is positive the curve is not saturated in the sense of becoming constant, but it also curves upwards at earlier times, which we also know did not happen.
In that case (t - d) becomes negative in Loehle's fit before 1945. That might work if you believe Beck.
Loehle:
You call the hopeful extrapolation of a curve fit a forecast?
How about considering some emissions scenarios, and the physics of how the carbon cycle will respond?
Maybe unphysical curve fits are allowed in econometrics. I don't know. This isn't econometrics.
Craig, the necessity for so many significant digits (in some cases more than are supported by the double precision format) seems to me an indication that the problem as posed was ill-conditioned. The fact that making a better-conditioned problem by changing the reference time gives a different result suggests that it is your model fit that is unreliable, rather than Eli's?
I disagree about the backcasting comment; as a statistician, what matters is decided by the needs of the application, not the statistician. Just because it doesn't matter for cell phones doesn't mean it doesn't matter for carbon dioxide forecasts. If one model can give back-casts but the others can't, that immediately means there is a reason to choose one model over the others, and the paper ought to have pointed out that deficiency of the saturating and polynomial models.
Having now written some MATLAB code, I am not so sure the reference time is the issue. I used the yearly mean Mauna Loa data (as I don't think the annual cycle should be allowed to influence the outcome), and using a reference time of 1985 I get a plot that looks a lot like Craig's.
I suspect the problem is a flat minimum, so maybe tomorrow I'll find time to try the MCMC approach and look at the uncertainty in the forecast (which I would imagine is important in econometrics as well).
Having said which, I have realised that the back-casting is a red herring; the reason the data before 1958 are important is that if you built a model using the available relevant knowledge (which includes the Law Dome data, for instance), the quadratic and saturating models would no longer give the same forecasts, whereas the exponential one would be largely unchanged. I have more trust in a forecast from a model the more observational data it explains, especially if it explains it without re-calibration.
The key point is: would the comment have got past the reviewers had the back-casts been plotted? I suspect the answer is probably "no".
Eli agrees with Dikran, at least on the important part. The basic point is that forecasts are based either on historical data or on models. In this case the forecast was based on historical data. The strong deviation from the data of two of the fitting forms is equivalent to the 1998 cherry pick we are all familiar with.
On the centering of the fit, not so much. The farther the extrapolation, the more sensitive the fit is to noise in the data, lack of infinite precision, etc. So, saying this awkwardly, a small movement in one parameter can lead to a large one in another if you don't choose t0 wisely.
Recent growth in CO2 concentrations is hyperexponential. Tamino has a post on this, and by looking at the "diffs" column in
Global Warming, decade by decade
you can see it for ln(CO2), decadally.
"259.3945817870858" taken from Loehle's paper. When a person quote many (many) more digits than are significant, it's usually a good indication that he does not know what he's doing re math, curve-fitting, or statistics.
ReplyDeletehere's my code (why no pre tag?):
load co2.txt                        % Mauna Loa data: year in column 1, CO2 in column 5
xdata = co2(:,1) - 1985;            % re-centre time on 1985
ydata = co2(:,5);
figure(1); hold on;
options = optimset('TolFun', 1.0e-12, 'MaxFunEvals', 2000);
x1 = lsqcurvefit(@f, [0 100 0.01], xdata, ydata, [], [], options)   % exponential
x2 = lsqcurvefit(@g, [1 1 1], xdata, ydata, [], [], options)        % quadratic
x3 = lsqcurvefit(@h, [616.9635436975683 0.006760893860992654 310.9394987790752 1945.6473702994858-1985], xdata, ydata, [], [], options)   % saturated, started at Loehle's values
x4 = lsqcurvefit(@k, [1 1 1 1], xdata, ydata, [], [], options)      % cubic
plotspan = 1850:2100;
plotspan2 = plotspan - 1985;
axis([1850 2100 0 600])
plot(xdata + 1985, ydata, 'k', plotspan, f(x1, plotspan2), 'b', plotspan, g(x2, plotspan2), 'g', plotspan, h(x3, plotspan2), '--r', plotspan, k(x4, plotspan2), '--c');
legend('data', 'exp', 'sqr', 'sat', 'cub', 0);
You then need functions like h.m:
function y = h(x, xdata)
% saturated model: c + a*(1 - exp(b*(t - d)))^2, with x = [a b c d]
e = exp(x(2)*(xdata - x(4)));
y = x(3) + x(1)*(1-e).*(1-e);
...and of course the input data in co2.txt.
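[f.m, g.m and k.m weren't posted; plausible forms matching the legend labels 'exp', 'sqr' and 'cub' would be the following (these are assumed, each in its own .m file):]

function y = f(x, xdata)   % exponential: a + b*exp(c*t)
y = x(1) + x(2)*exp(x(3)*xdata);

function y = g(x, xdata)   % quadratic: a + b*t + c*t^2
y = x(1) + x(2)*xdata + x(3)*xdata.^2;

function y = k(x, xdata)   % cubic: a + b*t + c*t^2 + d*t^3
y = x(1) + x(2)*xdata + x(3)*xdata.^2 + x(4)*xdata.^3;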
Here is my code, which reproduces something like Craig's results, but with annual mean data, and using a reference time of 1985 to reduce numerical problems. It is all self-contained, so just copy it all into a single file and run. I get the following 0.5 times sum of squared errors:
exponential SSE = 13.000800
quadratic SSE = 10.377087
saturating SSE = 8.544788
function analysis
% Fits exponential, quadratic and saturating models to annual-mean
% Mauna Loa CO2, with time referenced to 1985 to improve conditioning.
clear all
maunaloa = [1958 315.23 ; 1959 315.98 ; 1960 316.91 ; 1961 317.64
1962 318.45 ; 1963 318.99 ; 1964 319.16 ; 1965 320.04
1966 321.38 ; 1967 322.16 ; 1968 323.05 ; 1969 324.63
1970 325.68 ; 1971 326.32 ; 1972 327.45 ; 1973 329.68
1974 330.25 ; 1975 331.15 ; 1976 332.15 ; 1977 333.90
1978 335.51 ; 1979 336.85 ; 1980 338.69 ; 1981 339.93
1982 341.13 ; 1983 342.78 ; 1984 344.42 ; 1985 345.90
1986 347.15 ; 1987 348.93 ; 1988 351.48 ; 1989 352.91
1990 354.19 ; 1991 355.59 ; 1992 356.37 ; 1993 357.04
1994 358.89 ; 1995 360.88 ; 1996 362.64 ; 1997 363.76
1998 366.63 ; 1999 368.31 ; 2000 369.48 ; 2001 371.02
2002 373.10 ; 2003 375.64 ; 2004 377.38 ; 2005 379.67
2006 381.84 ; 2007 383.55 ; 2008 385.34];
opts = optimset('Display', 'Iter', ...
'GradObj', 'off', ...
'TolX', 1e-9, ...
'TolFun', 1e-15, ...
'MaxFunEvals', 1e+6);
[xq,Lq] = fminunc(@quadratic,[285 5 0.1],opts,maunaloa(:,1)-1985,maunaloa(:,2));
[xs,Ls] = fminunc(@saturating,[10 0.1 285 10],opts,maunaloa(:,1)-1985,maunaloa(:,2));
[xe,Le] = fminunc(@exponential,[270 70 0.01],opts,maunaloa(:,1)-1985,maunaloa(:,2));
period = 1850:2100;
saturatingfn = inline('x(3) + x(1)*(1-exp(x(2)*(t-x(4)))).^2', 'x', 't');
plot(maunaloa(:,1), maunaloa(:,2), 'r-', ...
period, xe(1)+xe(2)*exp(xe(3).*(period-1985)), 'b-', ...
period, xq(1)+xq(2)*(period-1985)+xq(3)*(period-1985).^2, 'g-', ...
period, saturatingfn(xs, period-1985), 'm-');
xlabel('Year');
ylabel('CO_2 (ppmv)');
legend('Mauna Loa','exponential','quadratic','saturating','Location','North');
fprintf(1, 'exponential SSE = %f\n', Le);
fprintf(1, ' quadratic SSE = %f\n', Lq);
fprintf(1, 'saturating SSE = %f\n', Ls);
function L = exponential(x, t, y)
a = x(1);
b = x(2);
c = x(3);
expct = exp(c*t);
f = a + b*expct;
L = 0.5*sum((y-f).^2);
function L = quadratic(x, t, y)
a = x(1);
b = x(2);
c = x(3);
f = a + b*t + c*(t.^2);
L = 0.5*sum((y-f).^2);
function L = saturating(x, t, y)
a = x(1);
b = x(2);
c = x(3);
d = x(4);
f = c + a*(1-exp(b*(t-d))).^2;
L = 0.5*sum((y-f).^2);
Dikran,
I agree. I find it utterly bizarre that Craig's formulae apparently require astronomical numbers of sig figs to produce results that need only 4-5 sig figs of resolution. Accuracy is another matter, and seems to be poor in any case.
Craig,
How did you arrive at such precise numbers?
All this curve fitting reminds Horatio more than a little of running on a hamster wheel.
Consider what has happened to the growth in (yearly) emissions from the 1990s to the 2000s, described here:
"Worldwide growth in carbon dioxide emissions has doubled since the close of the 1990s, reports a study published in the early on-line edition of the Proceedings of the National Academy of Sciences."
"A team of researchers led by Michael R. Raupach of CSIRO-Australia found that global growth rate of CO2 increased from 1.1 % per year during the 1990s to 3.1% per year in the early 2000s. The growth is being fueled primarily by developing countries, especially China and India, where economies are fast-expanding and population continues to increase at a significantly higher rate than in industrialized nations."
There are many many many things that can affect the actual emissions path that is taken, so many, in fact, that it is virtually impossible to say with any confidence what the total emissions -- and resulting increase in atmospheric CO2 concentration -- will be by 2100.
That is precisely why the IPCC gives many different scenarios. As Eli indicates with his "watch the pea" comment, they are simply not in the business of "predicting" future emissions, nor should they be.
For a particular scenario, they make an assumption about what will happen over the next century to yearly emissions (eg "They will grow by 2% per year" or whatever) and then calculate, based on the details of the climate system (eg how much CO2 is absorbed by oceans and other natural sinks, etc) how much that will increase the atmospheric CO2 concentration.
Attempting to actually "forecast" future emissions (especially based on simple "curve fitting") is basically a hamster's game. It tires you out but really gets you nowhere (Horatio knows about the latter)
Having said that, Horatio has run the numbers and thinks it should be
"a=259.3945817870856"
(with an uncertainty of 1 in the last digit)
The Atmospheric Environment link at the top of the post is missing something...
On another note, I like Craig's backhanded dismissal of Matlab and R, as apparently only Mathematica is the "high precision platform" capable of curve-fitting.
Joe said "I like Craig's backhanded dismissal of Matlab and R"
Hang on, McIntyre likes and uses R.
Now if I were Jonathan Leake, then tomorrow's headline might read something like this:
"Craig Loehle throws McIntyre under the bus"
"In a startling revelation yesterday, Craig Loehle said that Stephen McIntyre uses inferior statistical software which is not suitable for conducting precise statistical analyses. This now calls into question the validity of all of Mr. McIntyre's audits. When approached for comment, Mr. McIntyre muttered something about whitewash and the Hockey Stick being a fraud before breaking down in tears. After composing himself, Mr. McIntyre said that Loehle has now been banned from his blog."
It really is a nice day today :)
MapleLeaf
Eli, of course, prefers Maple
Very clever Eli :)
ReplyDeleteML
Dikran also has the MATLAB symbolic maths toolbox (maple) ;o)
Horatio prefers Wii
ReplyDeleteBeing extremely suspicious of Craig's huge numbers of sig fig, I thought I'd try a simple test using the exponential formula.
ReplyDeleteI couldn't use the the full number of sig fig in my spreadsheet. I lost 1 from a, 2 from b, and 2 from c. But anyway, the fit 1958-2008 was not particularly good. I got
Total squares error 59.93
1958 value 313.56 (actual 315.34)
2008 value 384.67 (actual 385.59)
2100 value 845.36
I then removed a further 3 sig fig from each parameter. The results were
Total squares error 59.93
1958 value 313.56 (actual 315.34)
2008 value 384.67 (actual 385.59)
2100 value 845.36
OK, let's round by a further 3.
Total squares error 59.93
1958 value 313.56 (actual 315.34)
2008 value 384.67 (actual 385.59)
2100 value 845.36
And 3 more again.
Total squares error 64.37
1958 value 313.53 (actual 315.34)
2008 value 384.59 (actual 385.59)
2100 value 844.98.
So, only now do we see any difference. In fact, if I add 2 sig figs back in, the result is
Total squares error 59.87
1958 value 313.56 (actual 315.34)
2008 value 384.67 (actual 385.59)
2100 value 845.36
At this point,
a=259.39458
b=2.9782774E-13
c=1.676930E-02
IOW, you only see any difference when using 8 instead of 9 digits. This suggests that Craig's use of such "precision" makes no sense and that it's a smokescreen for a poor analysis.
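Something like this test is easy to reproduce in MATLAB (a sketch; the spreadsheet details may differ, and round(x, n, 'significant') only exists in newer MATLAB versions):

% Round the exponential parameters to n significant figures and watch
% when the curve actually changes. Parameter values are Craig's.
t = [1958 2008 2100];
a = 259.3945817870858; b = 2.9782774195066923e-13; c = 0.016769296042440738;
for n = [16 13 10 8 7 5]
    f = round(a,n,'significant') + round(b,n,'significant').*exp(round(c,n,'significant').*t);
    fprintf('%2d sig figs: 1958 = %.2f  2008 = %.2f  2100 = %.2f\n', n, f(1), f(2), f(3));
end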
Anyrabett who teaches an introductory science class sees this frequently. It is a sure sign of someone who is unclear on the concept.
OTOH, as Tamino pointed out above, many fitting procedures (OLS) require subtracting numbers from each other where there is only a small difference between them, and the procedure then requires many significant figures. Experience is knowing when to cut and when to paste.
So, Eli, are you saying that I'm unclear or that Craig is? I just played with the numbers to see what happened.
BTW, is there any easy way to answer a specific comment here, and why does the blog not show time *and date* for each comment?
Craig
WRT the date: true, done. Rabett Run is a full service blog. YMMV on what it serves.
Thanks, even if it's that weird US date format. ;)
ReplyDeleteFull service:)
can i have a pony?
Patrick Cottontail says...
I would like a pony, and my very own Lindsay Lohan.
I would prefer Lindsay Lohan. Guess I'm old-fashioned, just don't want to think what some folks want ponies for :-(