Rotten to the Core
As bunnies may recall, yesterday Eli asked what the problem was with the new paper in Climate of the Past, Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records, by H.-L. Lüdecke, A. Hempelmann, and C. O. Weiss. Eli has been very, very disappointed that many of his readers had better things to do with their weekend than solve Rabett Run puzzlers, but yes, Andreas wins the prize.
The Lüdecke paper is rotten to the core.
"The Ludecke analysis is rubbish - the 'projection of future NH temperatures mainly due to the ~ 65-yr periodicity' is the sort of tosh that no competent scientist would perpetrate, but which seems to be allowed in the nether regions of climate science, perhaps because editors are a little wary of calling a shovel a shovel in a rather contentious field."
While some of the comments at Rabett Run (Chris and Nick Stokes) pointed out that the mathturbation was astounding, and Eli would not be surprised to see Tamino jump in, the DATA sucked, but it sucked for an interesting reason. Turns out that instrumental measurements before, say, 1860 or so tended not to be taken in shelters but out in the open, with a variety of methods used. This produces a well known warm bias in these measurements. The latest discussion of this can be found in a paper which Andreas pointed to, The early instrumental warm bias: a solution for long central European temperature series 1760-2007 by R. Böhm, P.D. Jones, J. Hiebl, D. Frank, M. Brunetti and M. Maugeri.
I'm curious to know the siting of these temperature instruments in Prague, Vienna, Paris, Munich etc. Otherwise it seems somewhat dubious to use only the Kremsmünster data from the large set of temperature series that constitute the HISTALP Greater Alpine Region (GAR) series of Auer et al. (2007) [Int. J. Climatology 27, 17-46]. Auer's composite GAR temperature series (see Figure 12 of Auer et al. (2007)) looks pretty close to Eli's composite of the BEST Europe/Austria series.
…oh well. Take any old time series. Fourier transform it to pull out the frequency components and their amplitudes. Select the dominant frequency components and reconstruct a smoothed time series from these. You're going to get something that matches your original series, irrespective of whether the system has intrinsic periodicity or not. What have you learned? Not much.
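To make that point concrete, here is a minimal numpy sketch (an illustration only, not anything from the paper; the random-walk input and the choice of six components are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 256, 6                      # series length, number of components kept

x = np.cumsum(rng.normal(size=N))  # "any old time series": here a random walk

X = np.fft.rfft(x)
keep = np.argsort(np.abs(X))[-K:]  # indices of the K largest-amplitude components
X_dom = np.zeros_like(X)
X_dom[keep] = X[keep]
x_fit = np.fft.irfft(X_dom, n=N)   # "reconstruction" from the dominant components

# For a smooth, trend-like series such as this, a handful of dominant
# components already track it closely -- no intrinsic periodicity required.
print("fraction of variance captured:", round(1 - np.var(x - x_fit) / np.var(x), 2))
```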
Instrumental temperature recording in the Greater Alpine Region (GAR) began in the year 1760. Prior to the 1850–1870 period, after which screens of different types protected the instruments, thermometers were insufficiently sheltered from direct sunlight and so were normally placed on north-facing walls or windows. It is likely that temperatures recorded in the summer half of the year were biased warm and those in the winter half biased cold, with the summer effect dominating. Because the changeover to screens often occurred at similar times, often coincident with the formation of National Meteorological Services (NMSs) in the GAR, it has been difficult to determine the scale of the problem, as all neighbour sites were likely to be similarly affected. This paper uses simultaneous measurements taken for eight recent years at the old and modern site at Kremsmünster, Austria to assess the issue.
In addition to the shelter issue, the orientation of the thermometer, the height off the ground and more have to be taken into consideration. Fortunately, enough information about the early measurements at Kremsmünster was available to allow a standardization and to spread that to other stations in the HISTALP network. Below is an updated figure showing the corrected data from Vienna (ten-year smoothing) together with the BEST reconstructions for Austria and the "Lüdeckerous" Fourier fit.
Unfortunately, Eli does not have the data from the 2007 reconstruction that Lüdecke used, but he does have the figure they published, which shows a much higher relative anomaly at earlier times and can be compared. The siting effects average about 0.4 °C in summer, with little effect in winter.
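The correction itself is conceptually simple. Below is a hedged sketch of the general idea (not the actual Böhm et al./HISTALP procedure; numpy and pandas assumed, and all data and column names are made up): estimate the month-by-month bias from the years of parallel old/new measurements, then subtract it from the early record.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = np.arange(1, 13)

# Hypothetical parallel monthly means at the old (unscreened) and modern
# (screened) Kremsmünster exposures over an eight-year overlap ...
overlap = pd.DataFrame({
    "month": np.tile(months, 8),
    "t_old": rng.normal(10, 8, 96),
    "t_new": rng.normal(10, 8, 96),
})
# ... and a long, uncorrected pre-screen record.
early = pd.DataFrame({
    "month": np.tile(months, 90),
    "t_old": rng.normal(10, 8, 1080),
})

# Month-by-month bias of the old exposure, estimated from the parallel years
# (in the real data roughly +0.4 °C in summer, near zero in winter).
bias = (overlap["t_old"] - overlap["t_new"]).groupby(overlap["month"]).mean()

# Subtract the month-dependent bias from the early record.
early["t_corrected"] = early["t_old"] - early["month"].map(bias)
```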
Four of the six stations that Lüdecke et al. used are in the HISTALP network: Vienna, Munich, Hohenpeißenberg, and Kremsmünster. The other two, Paris and Prague, suffer from the same ills.
What is interesting is that the BEST reconstruction appears to handle this problem. Going forward, a combination of the BEST method and metadata adjustments may be superior to either method alone. In any case there is good reason to hope that we now have tools to handle inhomogeneity in early instrumental climate records.
With this in hand, it should be possible to improve various multiproxy reconstructions by using longer instrumental databases for training and testing the reconstructions.
Georg Hoffmann at PrimaKlima discussed these problems and others a day earlier. He also printed a letter that one of the referees, Manfred Mudelsee, sent to the editors at Climate of the Past:
” I am less pleased that this piece has been published in CP since I believe that (even in its revised version) it has serious technical flaws. I had appreciated if the handling editor had considered more seriously my technical comments on CPD. Finally, I had appreciated if I had been informed/shown the revised version sent to CP. Unrelated to the technical flaws, one may speculate about (I exaggerate for clarity) the hijacking of CP for promoting ‘skeptical’ climate views.
I would appreciate if you took me out of your database of CP(D) reviewers.”
Somewhat earlier, in a similar case involving Atmospheric Chemistry and Physics, Eli had noted that the open review process requires that editors pay more attention to their reviewers' comments. He is not particularly pleased that the bunnies are coming home to roost. As he wrote then:
For those of us who favor the open review system, this will be a disaster. The predictable outcome is that people are going to cite this example as a reason to throw ACP invitations to review into the trash pit. Open review required that the referees put their reputations on the line. Their reviews are out there for everyone to read. If the editors ignore them, why do so?
If the audience would allow a brief digression, Eli knew nothing about the siting issue before reading the Lüdecke paper, but he had seen enough proxy temperature reconstructions and instrumental data sets to know that the upturn in the Lüdecke data set was not very likely. Using the BEST data set and cross-checking with various spaghetti graphs, it became obvious that something was very wrong besides the mathturbation. The Rabett then read the interactive discussion and was curious to note that there was no discussion of the sheltering issue. Some searching, writing emails to others, etc., followed to a reasonable conclusion.
In doing his due diligence Eli noted that the editor in charge of this paper was Eduardo Zorita. Now some bunnies may wonder why that is an issue. No, not that. It turns out that Zorita is first author on a paper, European temperature records of the past five centuries based on documentary information compared to climate simulations, Climatic Change (special edition of the Millennium project), submitted (2008-12) as part of the HISTALP project. How he let this paper through, overruling the referees he selected, is hard to understand. Why he did not choose a referee who was involved in the HISTALP database homogenization is even harder to understand without some really hard and potentially nasty thoughts.
To avoid those thoughts, Eli will offer Willard Tony a station picture from a complete description of the homogenization procedures.
The arrow points to where the old and reference measurements at Kremsmünster were made.
14 comments:
As one of those who submitted a comment (arguing exclusively on physical grounds, as I am not savvy enough to judge the stats involved), I thought it should be sufficient to indicate that there might be a serious problem with the selection of the 6 sites. Not to mention that there are also well known reasons for the "wiggles". I didn't exactly bother to dive into the question as to why there is such a discrepancy between BEST and their choice. It was clear to all of us that there is absolutely no merit in this paper. Not even a tiny notion of something worthwhile being published. Makarieva, Beenstock, Luedecke ... things don't seem to bode well for open review. It was certainly the last time I made this effort.
Like you, Eli, I also keep my mouth shut as to what role Eduardo Zorita might have played in this amusing episode. As Georg Hoffmann has noted: "He is contended with the way the peer review process has worked in this case". Quite a pity that I so utterly disagree with him on that. Thanks to your appreciated diligence, the fact that he should have been aware of the early warming bias makes things look even more "strange". But what do we know ...
Eli:
Good due diligence always gladdens my heart.
KarSteN:
Hmm, did you mean "is contented"?
(I first read this as Zorita had contended with the peer review process...)
The German original reads: "... Eduardo soweit zufrieden mit dem Reviewprozess und dem Resultat ist."
Which means translated something like: Eduardo is satisfied with the review process and the results.
I am not. This paper should never have been published. As many of the flaws were mentioned in the review, I feel that it is also possible to say that this paper would never have been published had it been submitted by a scientist.
Thanks Victor. Should have used the words I'm familiar with ;-). Sorry for that John!
It's not the first time strange things have cropped up in CPD. Remember Asten's Estimate of climate sensitivity from carbonate microfossils dated near the Eocene-Oligocene global cooling last year?
Willard Tony somehow picked up on that
On that occasion, though, the criticism was justifiably harsh, and the editor wasn't buying it in the end (see the summary by Yves Godderis, 03 Jan 2013).
BBD, and of course Willard Tony never updated to point out that the paper was not published in the end.
> the BEST reconstruction appears to handle this problem. ...
> a combination of the BEST method and metadata adjustments may be superior to either method alone.
May I suggest some climate blogger consider inviting Robert Rohde to do or share a guest blog spot? I've been a fan since I first found Globalwarmingart (I don't know if he started Globalwarmingart or his PhD work first).
And Tamino weighs in:
http://tamino.wordpress.com/2013/02/25/ludeckerous/
As I figured, he is not impressed.
The paper is even stupider than you think.
Making a decent fit using a small number of Fourier components would possibly be significant.
Statements that it should be easy to create a good fit to an arbitrary, large sample sequence using only a few Fourier components are wrong.
The reason the paper is nearly an arithmetic tautology is that after applying the moving average filter, there just aren't that many significant Fourier components left.
The algorithm they used was:
1) Take a set of samples.
2) Apply a low-pass filter (a 15-year moving average) to the samples, which suppresses all but a small number of lower frequencies in the Fourier transform.
3) Take the Fourier transform.
4) Take the first few Fourier transform components and reconstruct the samples (take the inverse transform).
Since the higher frequency components have been suppressed, the lower frequency components approximate the filtered samples.
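A minimal sketch of that procedure (this is not the authors' code; numpy is assumed, and random data stands in for the temperature series):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, K = 254, 15, 7               # samples, moving-average length, components kept

# 1) A set of samples -- any reddish series will do for the demonstration.
x = np.cumsum(rng.normal(size=N))

# 2) Low-pass filter: a 15-year centred moving average.
x_smooth = np.convolve(x, np.ones(L) / L, mode="same")

# 3) Fourier transform of the filtered series.
X = np.fft.rfft(x_smooth)

# 4) Keep only the DC term and the first K harmonics, then invert.
X_low = np.zeros_like(X)
X_low[:K + 1] = X[:K + 1]
x_recon = np.fft.irfft(X_low, n=N)

# The reconstruction necessarily tracks the filtered series, because the
# moving average has already removed most of the higher-frequency content.
residual = np.std(x_recon - x_smooth) / np.std(x_smooth)
print(f"residual RMS / filtered-series RMS ≈ {residual:.2f}")
```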
There are 254 samples in the paper with 1 sample per year. The Fourier component frequencies are:
0/254 cycles per year (the DC component, i.e. the average of the samples)
1/254 cycles per year
2/254 ... 127/254 cycles per year
A moving average filter of length L applied to a sine wave of frequency f will result in a gain of:
f = 0/254, gain = 1.0
f = 1/254, gain = 0.99
f = 2/254, gain = 0.98
...
f = 7/254, gain = 0.74
f = 8/254, gain = 0.67
...
f = 10/254, gain = 0.53
...
f = 15/254, gain = 0.13
...
f = 20/254, gain = 0.15
and so on.
The formula is
gain = sin(pi*f*L)/(L*sin(pi*f))
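For anyone who wants to check the numbers, here is a quick numpy evaluation of that formula (it reproduces the table above to within a rounding difference or so):

```python
import numpy as np

N, L = 254, 15   # record length in years, moving-average length

def ma_gain(k, N=N, L=L):
    """Gain of an L-point moving average at frequency k/N cycles per year."""
    if k == 0:
        return 1.0
    f = k / N
    return abs(np.sin(np.pi * f * L) / (L * np.sin(np.pi * f)))

for k in (1, 2, 7, 8, 10, 15, 20):
    print(f"f = {k}/{N}: gain = {ma_gain(k):.2f}")
```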
The paper used the first 7 components. Big surprise that they approximate the filtered samples.
Depending on the conclusion, curve fitting might seem less appealing to contrarians:
> [I]t’s possible to analyze multidecadal HadCRUT3 (defined as F3(HadCRUT3)) as a sum of two naturally arising functions, SAW and AGW, to within millikelvins, where SAW has 6 parameters and AGW has 3 that I’m allowing myself to tune to improve the fit. 9 is not all that many compared to GCMs. I venture that it’s very hard to find any other 9-parameter analytic formula with as good a fit.
http://judithcurry.com/2012/12/04/multidecadal-climate-to-within-a-millikelvin/#comment-274617
I would not find a 65 year component in this data at all unusual.
Consider that 1) the sun has an 11-year cycle, and 2) ENSO is roughly about 6 years, although highly variable.
So we might expect those two cycles to produce an interference pattern that recurs over about 65 years or so, since the two periods realign roughly every 66 years.
Finding evidence of interference between two well known patterns is nothing new. It might be worthy of a middle school science fair project, but nothing to undermine the bedrock of climate science.
Nothing to see here folks, move along.
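For what it's worth, a toy numpy illustration of that arithmetic (idealized, strictly periodic sinusoids only, nothing about the real climate system): two cycles of exactly 11 and 6 years realign every lcm(11, 6) = 66 years, so their sum repeats on roughly the timescale mentioned above.

```python
import numpy as np
from math import lcm

t = np.arange(0, 132)                    # annual steps, two repeat intervals
solar = np.sin(2 * np.pi * t / 11)       # idealized 11-year cycle
enso = 0.5 * np.sin(2 * np.pi * t / 6)   # idealized ~6-year cycle
combined = solar + enso

print("exact repeat period:", lcm(11, 6), "years")   # 66
print("max |y(t) - y(t + 66)|:", np.max(np.abs(combined[:66] - combined[66:])))  # ~0
```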
BBHY
I am not qualified to comment on European temperature data series.
I applied the DFT to 110 years of rainfall data for Cambodia and got similar results to those of Lüdecke, Hempelmann, and Weiss for periods of 60 years and below. I agree the CRU series is too short to include the peak at or about 60 years.
I was prompted to use Fourier analysis because rainfall in Cambodia appears to be quasi-periodic. And references appear in the literature to teleconnections among oceanic oscillations of different quasi-periodicity, specifically the PDO and ENSO.
I applied Fourier analysis mainly from curiosity and then stumbled upon multiple critiques of this paper. Thanks to the critics, I am more aware of the pitfalls in both applying the DFT and interpreting the results.
I have spent more time studying the critiques of the paper than the paper itself, and have come to wonder if the main thrust of the critics is to deprecate the use of Fourier analysis for climatology.
One thing I have noticed in the critiques is a focus on technique in applying the DFT to the European datasets. By contrast, I have found virtually no discussion of the physical phenomena except by the authors.
Those of us who approach DFT from Earth science and oceanography ask the question: Do oceanic oscillations dominate natural climate variability?
Whether or not Fourier analysis could usefully be applied to a system of oscillations with yearly periods of 60, 30, 15, 7.5 and fewer years (QBO) seems to me to be a technical issue. Perhaps the DFT technique is not suitable for physical systems that generate harmonics.
As for prediction, I thought that making predictions was a way of testing a theory. At least that was what Albert Einstein thought. Is there some other way?
I will continue working with rainfall data for Asia and the Pacific and have downloaded all the critiques of this paper to ensure I follow best practice in applying the DFT.
Frank,
As a first point, Fourier analysis cannot represent a trend; it folds any trend into spurious low-frequency periodic components. Moreover you have to be very careful about filtering effects. There is a bunch of commentary on such things wrt Lüdecke and links to more at
https://moyhu.blogspot.com/2013/05/climate-of-past-fails-fourier-test.html
You might take a look and see how much of it applies to what you did first.
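As a toy illustration of the trend point (numpy assumed, made-up data, not Frank's rainfall series): a pure linear trend, pushed through the same keep-a-few-Fourier-components procedure, comes back as low-frequency "cycles" that bend downward near the end of the record, which is where spurious predictions of an imminent decline come from.

```python
import numpy as np

N, K = 150, 4
t = np.arange(N)
trend = 0.01 * t                      # a pure rising trend, no cycles at all

X = np.fft.rfft(trend)
X_low = np.zeros_like(X)
X_low[:K + 1] = X[:K + 1]             # keep DC + the first K "cycles"
fit = np.fft.irfft(X_low, n=N)

# A sum of periodic components must return to its starting value one period
# later, so the fit turns down near the end of the record even though the
# underlying series only goes up.
print("trend at end of record      :", round(float(trend[-1]), 2))
print("Fourier fit at end of record:", round(float(fit[-1]), 2))
```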
Great blog,
The information about the measurement flaws and Fourier flaws in Lüdecke, Hempelmann, and Weiss from 1750 to 1850 is great.
In my opinion the Fourier analysis should be repeated without the technical flaws, although I do not know what all the technical flaws are.
Dr. D.E. Koelle also predicts natural cycles. On his website he is doing much the same as the paper by Lüdecke, Hempelmann, and Weiss. In this link: http://www.kaltesonne.de/klima-zyklen-und-ihre-extrapolation-in-die-zukunft/ he is making extrapolations about the temperature on Earth. I wonder how accurate his predictions are about the temperature and the natural cycles in his graphics? Maybe somebody can comment on that.
As far as I know the amplitudes of the natural cycles are: the 65-year AMO/PDO is approximately 0.15 degrees Celsius, the 200-year de Vries/Suess cycle approximately 0.4 degrees Celsius and the 1000-year Eddy cycle about 0.4 degrees Celsius. The source of this information is 25 NASA scientists. Internet link: https://www.youtube.com/watch?v=EhW-B2udhQw Can somebody confirm these amplitudes and how they are obtained?
As far as I know more natural cycles do occur here on Earth. These are: ~130 million years (Svensmark), ~100,000 years (Milanković), ~2,300 years (Hallstatt), ~1,000 years (Eddy), ~200 years (de Vries/Suess), ~90 years (Gleissberg), and ~65 years (Atlantic Multidecadal Oscillation (AMO)/Pacific Decadal Oscillation (PDO)). As far as I know all of these natural cycles occur at the same time. I wonder if somebody knows what the amplitudes of these cycles in degrees Celsius are? Are there publications about this somewhere?
I found out that adiabatic autocompression is responsible for the so-called 33 degrees Celsius on the surface temperature of Earth and not the greenhouse effect. It all can be calculated with the ideal gas law for planetary bodies with a thick atmosphere greater than 0.1 bar. At 0.1 bar, at about 19542.178 m, we can calculate with the ideal gas law approximately 220 Kelvin, and at 0.5 bar, at about 5960.4234 m, we can calculate 255 Kelvin. It seems to me the greenhouse effect or feedback factor is invalid. I would like to hear the opinions of others about that here.
In the context of the ideal gas law CO2 and CH4 are no different than any other gas, and only in a real greenhouse with glass can heat be trapped. The temperature rise is due to a pressure rise which is caused by gravity. Please comment.
Werner de Vries