A keystone of modern climatology has been Hansen and Lebedeff's observation of strong correlation between temperature records at pairs of weather stations separated by large distances. Based on this, they constructed a temperature record covering the entire globe. It is worth looking at the original data, although you will have to click on the image.
The interesting point is that the correlations are much higher at high latitudes than in the tropics. At mid and high latitudes the correlation was attributed to large-scale eddy mixing. They picked 1200 km as the radius at which the correlation was at least 0.5 at middle and high latitudes and at least 0.33 at low ones, and used this to construct their first global temperature record.
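For the bunnies who like to poke at numbers, here is a rough sketch in Python (synthetic stations and made-up noise levels, not Hansen and Lebedeff's actual code or data) of the kind of calculation behind that criterion: correlate the annual anomalies of every station pair and see how the correlation falls off with separation.

```python
# Sketch (not Hansen and Lebedeff's code): correlate annual-mean temperature
# anomalies between pairs of synthetic stations and bin by separation distance.
import numpy as np

rng = np.random.default_rng(0)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = np.radians([lat1, lat2])
    dlon = np.radians(lon2 - lon1)
    return 6371.0 * np.arccos(
        np.clip(np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon), -1, 1)
    )

# Synthetic network: 50 stations, 60 years of annual anomalies that share a
# common regional signal plus local noise (purely illustrative numbers).
n_sta, n_yr = 50, 60
lats = rng.uniform(40, 70, n_sta)
lons = rng.uniform(-30, 30, n_sta)
regional = rng.normal(0, 0.5, n_yr)                   # shared large-scale signal
anoms = regional + rng.normal(0, 0.3, (n_sta, n_yr))  # station = signal + noise

# Correlate every station pair and record its separation.
dists, corrs = [], []
for i in range(n_sta):
    for j in range(i + 1, n_sta):
        dists.append(great_circle_km(lats[i], lons[i], lats[j], lons[j]))
        corrs.append(np.corrcoef(anoms[i], anoms[j])[0, 1])

# Hansen and Lebedeff's criterion: how far out does the correlation stay high?
dists, corrs = np.array(dists), np.array(corrs)
for edge in range(0, 3000, 500):
    sel = (dists >= edge) & (dists < edge + 500)
    if sel.any():
        print(f"{edge:4d}-{edge+500:4d} km: mean r = {corrs[sel].mean():.2f}")
```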
Inherent in this is the thought that the correlation can vary from season to season. Now Eli has lost the reference to subsequent work showing major differences between summer and winter, but he was motivated to look again by a string of comments at Rank Exploits that essentially asked where this comes from.
While not able to find the paper that started this train of thought some years ago, he did find one by Mark New, Mike Hulme and Phil Jones which nicely illustrates the principle. Bunnies who click on the graph to the left will observe that the correlation for temperature (the dots) is highest in winter and lowest in summer. The dashes are the correlation for diurnal temperature range and the solid line the correlation for precipitation. In winter the temperature correlation length is well over 1500 km, and in summer it drops, depending on latitude, to between 1200 and 800 km, so perhaps the GISTEMP 1200 km is not such a good limit in summer at low latitudes.
However, this correlation IS encouraging for polar regions, where we see that the 1200 km range is a good solid estimate all year long between latitudes 60 and 90 N. One may assume that the same could, maybe even should, hold true for the southern polar regions.
It is even possible that modern reconstructions should revisit this issue with an eye to improvement. Using latitude- and season-dependent temperature correlation distances might be of value.
James Hansen (Storms of My Grandchildren) mentions the spatial correlation of temperature measurements. The practical significance is that in order to obtain the average temperature of the Earth, you don't have to measure temperatures on a grid with very-closely-spaced points. The grid points can be spaced as far apart as the correlation distance, and you'll get the same global average temperature.
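A toy illustration of that point (a synthetic one-dimensional field with an assumed correlation length, not any real temperature data): sample a correlated field at roughly its correlation length and you get essentially the same average as from the full field.

```python
# Sketch: if a field is correlated over length L, sampling it on a grid with
# spacing ~L recovers nearly the same area average as a much finer grid.
import numpy as np

rng = np.random.default_rng(1)

# Build a 1-D "temperature" field with ~50-point correlation length by
# smoothing white noise with a Gaussian kernel (illustrative construction).
n, L = 5000, 50
kernel = np.exp(-0.5 * (np.arange(-3 * L, 3 * L + 1) / L) ** 2)
field = np.convolve(rng.normal(size=n + kernel.size), kernel, mode="valid")[:n]
field /= field.std()

print("full-resolution mean  :", field.mean().round(3))
print("every L-th point mean :", field[::L].mean().round(3))      # ~same answer
print("every 5L-th point mean:", field[::5 * L].mean().round(3))  # noisier
```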
One more issue that many of the deniers don't understand.
there once was a lady,
who could tell and not just maybe,
what the weather would be in an hour or three,
with a frightening accuracy
she just looked to skies,
and the movements of flies
making notes of her itch and math'matical wits
'And that, mylord, is why she's a witch!'
The simple distance-based correlation can be viewed as an approximation to a whole bunch of related interpolation/kriging/data assimilation techniques.
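A toy comparison (made-up stations and an assumed exponential covariance, not the actual GISS or CRU code) of a plain distance-weighted average against kriging-style weights built from a covariance model:

```python
# Sketch: estimate a value at an unobserved point two ways -- a simple
# distance-weighted average, and simple-kriging-style weights built from an
# assumed exponential covariance. Numbers are illustrative only.
import numpy as np

obs_x = np.array([0.0, 300.0, 900.0, 1500.0])   # station positions (km)
obs_t = np.array([1.2, 0.8, -0.1, 0.3])         # anomalies at those stations
x0 = 600.0                                       # where we want an estimate
L = 1200.0                                       # assumed correlation length

# (1) Linear distance weighting, zero beyond L (GISTEMP-flavoured, simplified)
d = np.abs(obs_x - x0)
w = np.clip(1.0 - d / L, 0.0, None)
print("distance-weighted estimate:", np.dot(w, obs_t) / w.sum())

# (2) Simple-kriging-style weights from a covariance model C(d) = exp(-d/L)
C = np.exp(-np.abs(obs_x[:, None] - obs_x[None, :]) / L)   # obs-obs covariance
c0 = np.exp(-d / L)                                        # obs-target covariance
lam = np.linalg.solve(C, c0)                               # kriging weights
print("kriging-style estimate    :", np.dot(lam, obs_t))
```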
In the tropics, long-distance correlation of seasonal (usually 3- or 4-month) averages is often large (e.g. ENSO signals), but monthly statistics are not good in that respect. Variability is large on the intra-seasonal time scale (15- to 90-day periods if considered as oscillations), including the Madden-Julian oscillation, which typically has 30-60 day periods. If we have data of finer time resolution than monthly, we can extract signals on the intra-seasonal time scale (see the band-pass sketch after the links below) and obtain better spatial correlation, as demonstrated by Sanga N.-K. (then a Ph.D. candidate in Kyoto University, now a professor in Ritsumeikan Asia Pacific University).
Part 1: http://www.journalarchive.jst.go.jp/english/jnlabstract_en.php?cdjournal=jmsj1965&cdvol=64&noissue=3&startpage=391
Part 2: http://www.journalarchive.jst.go.jp/english/jnlabstract_en.php?cdjournal=jmsj1965&cdvol=66&noissue=5&startpage=709
It is not always easy to have data with higher time resolution than monthly, though.
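A rough sketch (synthetic daily series, an assumed 30-60 day band, and scipy's Butterworth filter rather than whatever filter the papers actually use) of pulling out an intra-seasonal signal by band-pass filtering:

```python
# Sketch: band-pass daily data to isolate an intra-seasonal (30-60 day) signal.
# The data here are synthetic; real work would use station or gridded series.
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
days = np.arange(2 * 365)                      # two years of daily data
mjo = np.sin(2 * np.pi * days / 45.0)          # a 45-day "MJO-like" oscillation
series = (mjo + 0.8 * rng.normal(size=days.size)
          + 0.5 * np.sin(2 * np.pi * days / 365.0))   # plus noise and a seasonal cycle

# 4th-order Butterworth band-pass for periods of 30-60 days (frequencies in 1/day).
b, a = signal.butter(4, [1 / 60.0, 1 / 30.0], btype="bandpass", fs=1.0)
filtered = signal.filtfilt(b, a, series)

print("correlation of filtered series with the true 45-day signal:",
      np.corrcoef(filtered, mjo)[0, 1].round(2))
```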
You could have fun doing correlation plots with GCM data. In fact I have a strong feeling that I did play with that. But then you end up finding teleconnections.
Points taken; however, this raises the interesting question of whether the teleconnections and the variation in correlation from GCMs match observations. This might be a very sensitive test of the fluid dynamics in GCMs.
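As a rough sketch of what such a test might start from (a synthetic gridded field standing in for GCM output; the base-point location is arbitrary), a one-point correlation map is just the correlation of one grid point's time series with every other grid point:

```python
# Sketch: a one-point correlation ("teleconnection") map from gridded data --
# correlate the series at a chosen base point with every other grid point.
# Synthetic field here; with real GCM output you would read a lat/lon/time cube.
import numpy as np

rng = np.random.default_rng(3)
nlat, nlon, nt = 18, 36, 240                  # coarse grid, 20 years of months
field = rng.normal(size=(nt, nlat, nlon))
field += rng.normal(size=(nt, 1, 1))          # add a common signal so the map isn't flat

base = field[:, 9, 18]                        # time series at the base point
anom = field - field.mean(axis=0)             # remove the time mean everywhere
base_a = base - base.mean()

# Correlation of the base point with every grid point, in one vectorised step.
cov = (anom * base_a[:, None, None]).mean(axis=0)
corr = cov / (anom.std(axis=0) * base_a.std())
print("correlation map shape:", corr.shape, " max:", corr.max().round(2))
```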
It gets lost on some that the weighting factor in GISS decreases linearly with distance, so if you have a bunch of stations closer in, the ones 1200 km away don't really have any influence anyway. So it's only in sparse parts of the earth where the 1200 km matters, and the poles are that.
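For concreteness, a tiny sketch of that weighting rule (simplified to just the linear taper to zero at 1200 km, ignoring the rest of the GISTEMP machinery; the function name is illustrative):

```python
# Sketch of the GISTEMP-style weight that falls linearly to zero at 1200 km,
# showing why a station 1100 km away barely matters once closer stations exist.
import numpy as np

def giss_weight(d_km, radius=1200.0):
    """Weight = 1 - d/R inside the radius, zero outside (simplified rule)."""
    return np.clip(1.0 - np.asarray(d_km) / radius, 0.0, None)

dists = np.array([50.0, 200.0, 1100.0])       # three stations near a grid cell
w = giss_weight(dists)
print("weights           :", w.round(2))              # [0.96 0.83 0.08]
print("share of the total:", (w / w.sum()).round(2))  # far station contributes ~4%
```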
The GISS method could bear refining. But there's something to be said for a simple method, if it gives results which turn out to be consistent with something more sophisticated.
True CE, but the point is that at the poles the extrapolation could be extended even further. Obviously this is most useful where the data are sparsest in both space and time.
A study of the Arctic temperatures 1979-1997 (see http://iabp.apl.washington.edu/data_satemp.html) found a good correlation for about 1000 km in autumn, winter and spring, but only around 300 km in summer.
What Hansen's analysis method does is somewhat in between two methods from fairly early in the days of meteorological data assimilation. I was going to be writing in this direction for a series I started last fall. Only the first one -- on the drop-in-a-bucket (HadCRU) method -- is actually out. This is the simplest, and doesn't try to fill in all areas of the earth. That makes it unusable for a numerical weather prediction model, so other methods were developed.
First, and next simplest, was Cressman smoothing, developed in the late 1950s and early 1960s. The idea is to give each observation a weight, and send that weight to zero with distance from the observation point. You're pretty free, however, to decide your own weight(distance) function, which makes the method simple but unaesthetic. Hansen's is more involved than this.
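A minimal one-pass sketch of Cressman-style weighting (a toy 1-D grid and stations, using the classic (R^2 - d^2)/(R^2 + d^2) weight; real schemes iterate with shrinking radii):

```python
# Sketch of a single Cressman pass: each observation gets the classic weight
# (R^2 - d^2) / (R^2 + d^2) inside the influence radius R, zero outside.
import numpy as np

def cressman_analysis(grid_x, obs_x, obs_val, first_guess, R):
    """One Cressman correction pass on a 1-D grid (illustrative, not NWP code)."""
    analysis = first_guess.copy()
    increments = obs_val - np.interp(obs_x, grid_x, first_guess)  # obs minus guess
    for k, xg in enumerate(grid_x):
        d2 = (obs_x - xg) ** 2
        w = np.where(d2 < R**2, (R**2 - d2) / (R**2 + d2), 0.0)
        if w.sum() > 0:
            # nudge the first guess toward the weighted mean of the obs increments
            analysis[k] += np.dot(w, increments) / w.sum()
    return analysis

grid = np.linspace(0.0, 2000.0, 21)              # grid points every 100 km
stations = np.array([150.0, 600.0, 1450.0])      # station locations (km)
temps = np.array([1.0, 0.2, -0.5])               # observed anomalies
background = np.zeros_like(grid)                 # flat first guess
print(cressman_analysis(grid, stations, temps, background, R=500.0).round(2))
```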
The methods used/developed next owe much to Lev Gandin, whose 1965 book (Objective analysis of meteorological fields) is canonical. This is optimal interpolation (OI). Basically it relies on the fact that observations at nearby (and not so nearby) points are correlated with each other. One can establish criteria of goodness and prove that the resulting analysis is optimal with respect to them. Purely statistical. As I recall Hansen's method, his is somewhat simpler than a full OI. I think of it (perhaps inaccurately at this point) as being midway between Cressman and OI.
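A toy version of the OI analysis equation (an assumed exponential background-error covariance and made-up observation errors, nothing like an operational system):

```python
# Sketch of optimal interpolation on a 1-D grid: the analysis is the background
# plus a weighted innovation, with weights K = B H^T (H B H^T + R)^-1 built from
# an assumed background-error covariance B. Purely illustrative numbers.
import numpy as np

n = 21
x_grid = np.linspace(0.0, 2000.0, n)             # grid points (km)
x_b = np.zeros(n)                                # background (first guess)

# Assumed background-error covariance: exponential with 600 km length scale
L, sigma_b = 600.0, 1.0
B = sigma_b**2 * np.exp(-np.abs(x_grid[:, None] - x_grid[None, :]) / L)

obs_idx = np.array([2, 7, 15])                   # grid points that have observations
y = np.array([1.0, 0.3, -0.4])                   # observed anomalies
R = 0.2**2 * np.eye(obs_idx.size)                # observation-error covariance

H = np.zeros((obs_idx.size, n))                  # observation operator (picks points)
H[np.arange(obs_idx.size), obs_idx] = 1.0

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # gain (the OI weights)
x_a = x_b + K @ (y - H @ x_b)                    # analysis
print(x_a.round(2))
```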
If you get into more gruesome details of analysis/assimilation, you realize that we know more than just statistics of the observations. For instance, we know that the atmosphere believes in the laws of conservation of mass, momentum, and energy. Consequently, you'll get a better analysis (say of what the surface temperatures are) if you also ensure that the analysis respects those laws. This is where you go off into 2dvar, 3dvar, and 4dvar.
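The variational flavour of the same sort of toy problem: write the background and observation mismatches as a cost function and minimise it (for a linear observation operator the minimiser is mathematically the same analysis the OI gain gives; all numbers below are made up):

```python
# Sketch of the 3D-Var idea: instead of applying a gain matrix directly, minimise
# a cost function that penalises departures from the background (in the B^-1 norm)
# and from the observations (in the R^-1 norm). Small toy problem.
import numpy as np
from scipy.optimize import minimize

n = 11
x_grid = np.linspace(0.0, 1000.0, n)
x_b = np.zeros(n)                                          # background
B = np.exp(-np.abs(x_grid[:, None] - x_grid[None, :]) / 400.0)
obs_idx = np.array([1, 5, 9])
y = np.array([0.8, 0.1, -0.6])
Rinv = np.eye(3) / 0.2**2
Binv = np.linalg.inv(B)

def J(x):
    """3D-Var cost: background term plus observation term."""
    db = x - x_b
    do = y - x[obs_idx]                                    # H just picks grid points
    return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

x_a = minimize(J, x_b).x                                   # the analysis
print(x_a.round(2))
```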
One can also get more gory on the statistical methods and pursue Kalman filtering, which will be more optimal than OI, in the sense that OI requires certain information as an input and Kalman filters will produce that information (or at least related information) as well as the analysis.
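And a single Kalman-filter analysis step on a toy state, mainly to show that the filter updates the error covariance itself rather than taking it as a given the way OI does (again, every number here is invented for illustration):

```python
# Sketch of one Kalman-filter analysis step: unlike OI, where the background
# error covariance B is prescribed, the filter carries an error covariance P
# along and updates it at every step.
import numpy as np

n = 5
x_b = np.zeros(n)                               # forecast (background) state
P_b = np.exp(-np.abs(np.subtract.outer(range(n), range(n))) / 2.0)  # forecast error cov.

H = np.array([[1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0]], dtype=float)    # we observe points 0 and 3
y = np.array([0.9, -0.4])
R = 0.1 * np.eye(2)

K = P_b @ H.T @ np.linalg.inv(H @ P_b @ H.T + R)   # Kalman gain
x_a = x_b + K @ (y - H @ x_b)                       # analysis state
P_a = (np.eye(n) - K @ H) @ P_b                     # updated (analysis) error covariance
print("analysis      :", x_a.round(2))
print("error variance:", np.diag(P_a).round(2))     # this is what OI has to assume
```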
And then you can head for the grandest mess of all -- try to hybridize 4dvar with Kalman filtering. Such is being pursued in modern numerical weather prediction centers.
One learns, which is the point of all this. So the question is: would the climate temperature analyses benefit from a more gruesome method?
Getting the levels of water vapour and the 3-D direction of air movement correct at a given grid point would be most important, I'd guess, as it's the greenhouse gas (just not a modulator of the T). One aspect of modelling we talked about in the late 1990s was how to incorporate mountainous rain patterns into a model having large cells; my suggestion was just to include a directional blocking constant for water vapour for each cell having sufficiently steep mountains, but I don't know how they ultimately solved that one. Another thing that came to mind from those times is the spring bloom introducing a huge amount of pollen as rain condensation nuclei, but this isn't something easily mocked up.
Eli: Well, to continue with my not terribly humble survey of options ...
In some senses, the more seriously gory methods have been in use for over a decade on global temperatures. These are the so-called 'reanalysis' efforts. The first set took the numerical weather prediction models and analysis systems of the mid-1990s, froze them, and then ran them over a 'long' period. This includes the NCEP/NCAR reanalysis (1948-present) and ECMWF's 'ERA 40' (1958-present, if I remember correctly). This gives you a regularly gridded analysis with a fixed, and fairly gory, analysis method. The problem is that, as you run farther back in time, you run out of data from which to make your analysis. ECMWF didn't trust as much of the period as NCEP/NCAR did, so their span is a decade or so shorter. There are other such reanalyses out, and there are still others which focus only on regional reanalysis (cf. Mesinger et al., 2006).
But those are strictly weather model assimilations. As I try to encourage thinking -- there's more to climate than the atmosphere. Consequently there has been (not because I encourage such thinking :-), but because I'm not the only person who does) a Climate forecast system reanalysis -- including ocean and sea ice components in the analysis system. See
The NCEP Climate Forecast System Reanalysis by Saha et al., 2010 for an example. But this was limited to the 'satellite era' -- 1979 to present.
So the longest reanalysis system gives you just over 60 years of data to work with. Long enough to ponder climate, but not nearly as long as you could have by using just the surface stations themselves.
Of course there's still a lot of room to mine techniques that would require only the surface stations. Alexey Kaplan, for one of several, has done some work in that vein for the sea surface temperature record. It'd actually be easier to do over land, given the better data density and distribution.
Eli:
er ... bottom line: You get pretty much the same answers for global 2 meter air temperature trends from the massive reanalysis efforts as from the usual surface-station-only analyses, just as the different surface station analyses give pretty much the same answers as each other.