Unlike many other aspects of climate science (the hockey stick with its principal components, regressions, etc.), the global temperature anomaly, which involves nothing more than simple averaging, is something that bunnies can explain to non-technical friends and relatives in a way that they actually *get it* (at least to some extent).
This seems to be the Watts-skeptic crowd's biggest vulnerability -- they've gone out on a limb by being consistently wrong about stuff that I can explain to high-school/junior-college-educated folks...
FAQ:
###### 1. How we can know what the earth's temperature is ########
Basically, you average together a whole bunch of measurements over the Earth's surface. But it should be made clear that NASA/NOAA/CRU are more interested in quantifying how average surface temperatures have *changed* than in just calculating the "Earth's temperature". Look at the Y-axis of one of the NASA global-temperature plots. What you will see is a Y-axis range on the order of -1 deg C to +1 deg C. Obviously, the Earth's temperature does not lie between -1 and +1 degrees. What the NASA plots show is how the average of the temperature measurements taken by GHCN stations has *changed* over time.
It should be emphasized that we aren't interested so much in calculating the Earth's absolute temperature as we are in estimating how it has *changed* over time. Here is a basic summary of how to compute average temperature changes from GHCN data.
1) For each station and month, calculate the average temperature over the 1951-1980 period. i.e. for each station, average all the Jan temps over the 1951-1980 period to produce the January baseline average temp for that station. Do the same for the Feb, Mar, ... etc. temps. For any given month, stations with insufficient data to compute 1951-1980 baselines are thrown out (that will still leave you with many thousands of stations for every month of the year). What constitutes "sufficient data" is a judgement call -- I require at least 15 out of 30 years in the baseline period. NASA requires 20 (IIRC). Results are very insensitive to this (10, 15, 20, etc. years all produce very similar results). You will end up with a 2-d array of baseline average temperatures, indexed by GHCN station number and month.
2) For every temperature station, subtract that station's Jan baseline average from the Jan temperatures for all years (1880-present). Do the same for Feb, Mar, Apr, etc. These are the station monthly temperature *anomalies*. "Anomaly" is just a $10 word for the difference between a station's monthly temperature for any given year and that station's baseline average temperature for that month.
3) The crudest, most-dumbed-down procedure to compute global-average temperature anomalies is simply to average together the monthly anomalies for all stations for each year. This is a very crude technique that will still give you not-too-bad "ballpark" global average estimates.
4) The problem with (3) is that stations are not evenly distributed around the globe. (If stations were uniformly spaced all over the Earth, the method described in 3 would be ideal). With method (3), regions with dense station coverage (like the continental USA) would be over-weighted in the global average, while less densely-sampled regions (like the Amazon region, Antarctica, etc.) would be under-weighted.
To get around this problem, we divide up the Earth's surface into grid-cells. Then we compute the average temperature anomalies by applying (3) just to the stations in each grid-cell, producing a single average anomaly value per month/year for each grid-cell. Then we average all the grid-cell values together for each year to produce the global average temperature anomalies. Note:
The surface areas of fixed lat/long grid-cells change with latitude, so you will need to scale your results by the grid-cell areas to avoid over-weighting high-latitude temperature stations. An alternate approach is to adjust the grid-cell longitude dimensions as you go N/S from the Equator to ensure that the grid-cell areas are approximately equal. The grid-cell areas won't be identical (because your grid-cell longitude sizes are limited to integer fractions of 360 degrees), but they'll be close enough to identical to give very good results. (i.e. good enough for "blog-science" work).
5) The gridding/averaging procedure in its most rudimentary, stripped-down form is quite simple. But it still produces surprisingly good global-average results. The one pitfall of the above method is that if you use the NOAA/CRU standard 5 deg x 5 deg grid-cell size, you will end up with many more "empty" grid-cells in the Southern Hemisphere than in the Northern Hemisphere. This will cause the NH (where there has been more warming) to be over-weighted relative to the SH (where there has been less).
So if you don't compute interpolated values for the empty grid-cells, your warming estimates will be too high (i.e. higher than the NASA results). But if you divide up the Earth into 20 deg x 20 deg (or so) grid-cells, you will get results amazingly close to the official NASA results without having to bother computing interpolated values for "empty" grid-cells (because you made the grid-cells big enough that none of them will be empty). It's a crude short-cut, but it's a lot less work than doing all the interpolations, and it still gives you pretty darned-good global-average results. (A minimal code sketch of the whole recipe follows below.)
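This is not the GISS/NOAA code, just a minimal Python sketch of the dumbed-down recipe in steps 1-5. The GHCN file parsing is left out (see the data section below); station records are assumed to arrive as a dict mapping (year, month) to temperature in deg C, and monthly station anomalies are collapsed to annual values before gridding, which is a small simplification of step 3.

```python
import numpy as np

BASE_START, BASE_END = 1951, 1980    # baseline period from step 1
MIN_BASELINE_YEARS = 15              # the "sufficient data" judgement call
CELL_DEG = 20.0                      # big cells => no empty cells to interpolate (step 5)

def station_anomalies(monthly):
    """Steps 1-2: per-calendar-month 1951-1980 baseline, then subtract it.
    `monthly` maps (year, month) -> temperature in deg C."""
    anoms = {}
    for month in range(1, 13):
        base = [monthly[(y, month)] for y in range(BASE_START, BASE_END + 1)
                if (y, month) in monthly]
        if len(base) < MIN_BASELINE_YEARS:
            continue                              # this month unusable for this station
        baseline = float(np.mean(base))
        for (year, m), temp in monthly.items():
            if m == month:
                anoms[(year, m)] = temp - baseline
    return anoms

def global_annual_anomalies(temps, lats, lons):
    """Steps 3-5: annual anomaly per station, averaged within lat/lon grid cells,
    then a cos(latitude)-weighted (~area-weighted) average over the cells.
    `temps[sid]` is a (year, month) -> temp dict; `lats`/`lons` are in degrees."""
    n_rows = int(180 // CELL_DEG)
    n_cols = int(360 // CELL_DEG)

    station_annual = {}                           # sid -> {year: mean anomaly}
    for sid, monthly in temps.items():
        per_year = {}
        for (year, month), a in station_anomalies(monthly).items():
            per_year.setdefault(year, []).append(a)
        station_annual[sid] = {y: float(np.mean(v)) for y, v in per_year.items()}

    results = {}
    for year in sorted({y for d in station_annual.values() for y in d}):
        cells = {}                                # (row, col) -> list of station anomalies
        for sid, annual in station_annual.items():
            if year not in annual:
                continue
            row = min(int((lats[sid] + 90.0) // CELL_DEG), n_rows - 1)
            col = min(int((lons[sid] + 180.0) // CELL_DEG), n_cols - 1)
            cells.setdefault((row, col), []).append(annual[year])
        wsum = wtot = 0.0
        for (row, _col), vals in cells.items():
            lat_centre = -90.0 + (row + 0.5) * CELL_DEG
            weight = np.cos(np.radians(lat_centre))   # proportional to cell area
            wsum += weight * float(np.mean(vals))
            wtot += weight
        if wtot > 0.0:
            results[year] = wsum / wtot
    return results
```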
######## 2 What sort of games Watts has been playing ###############
The big problem with the Watts approach is that he and his followers have not bothered to perform any global-average temperature calculations to test the claims they've been making (i.e. claims about UHI, homogenization, "dropped stations", etc.).
IOW, they have failed to perform any serious data analysis work to back up their claims. Had they done so, they would have found what I have found, namely:
1) Raw and adjusted/homogenized station data produce very similar global average results.
2) Rural and urban station data produce nearly identical global-average results.
3) The "dropped stations" issue pushed by Watts is a complete non-issue. Compare results computed with all stations vs. just the stations still actively reporting data (separating them out is a very easy programming exercise), and you will get nearly identical results for all stations vs. just the stations that haven't been "dropped".
4) The GHCN temperature network is incredibly oversampled -- i.e. you can reproduce the warming trends computed by NASA/NOAA with raw data taken from just a **few dozen** stations scattered around the world. I was able to replicate the NASA long-term warming trend results very closely by crunching *raw* temperature data from as few as *32* rural stations scattered around the world.
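To give a concrete sense of point 3, the comparison is only a few lines once a gridding/averaging routine exists. This sketch reuses the hypothetical `global_annual_anomalies()` from the earlier sketch, and the `last_year()` helper and the 2005 cutoff standing in for "still actively reporting" are just placeholders:

```python
def last_year(monthly):
    """Most recent year with any data in a (year, month) -> temperature dict."""
    return max(year for (year, month) in monthly)

def dropped_station_test(temps, lats, lons, cutoff=2005):
    """Compare all stations vs. only the stations still reporting after `cutoff`."""
    active = {sid: d for sid, d in temps.items() if last_year(d) >= cutoff}
    all_result = global_annual_anomalies(temps, lats, lons)
    active_result = global_annual_anomalies(active, lats, lons)
    for year in sorted(set(all_result) & set(active_result)):
        print(year, round(all_result[year], 3), round(active_result[year], 3))
```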
####### 3 How real skeptics can beat him by reducing the data themselves ######
The best approach is for them to roll up their sleeves and compute their own global-average temperature estimates for a bunch of different combinations of globally-scattered stations. Skeptics with sufficient programming experience should not have much trouble coding up their own gridding/averaging routines from scratch, in their favorite language. They can even use python -- it's amazingly fast for a "scripting" language. Skeptics without programming experience will just have to trust and run code written by others.
####### 4 Where to get that data ##########
NASA and NOAA have had long-standing policies of making all of the raw and adjusted temperature data they use freely available to the public.
The GHCN monthly data can be downloaded from here: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/
The GHCN daily data can be downloaded from here: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/
Note: Unless you are a masochist who wants to write a lot of mind-numbing low-level data-handling code, just use the monthly data.
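If you do go the monthly route, a reader for the v3 `.dat` files can be short. Fair warning: the fixed-width layout assumed below (11-character station ID, 4-character year, 4-character element, then twelve value-plus-flags groups of 8 characters, values in hundredths of a degree C, -9999 meaning missing) is from memory -- check it against the README in the FTP directory before trusting the output.

```python
def read_ghcn_v3_monthly(path, element="TAVG"):
    """Parse a GHCN v3 monthly .dat file into {station_id: {(year, month): temp_C}}.
    Assumed layout (verify against the v3 README): ID in columns 0-10, year in
    11-14, element in 15-18, then twelve groups of VALUE (5 chars) + 3 flag chars,
    with VALUE in hundredths of deg C and -9999 for missing."""
    stations = {}
    with open(path) as f:
        for line in f:
            if line[15:19] != element:
                continue
            station_id = line[0:11]
            year = int(line[11:15])
            record = stations.setdefault(station_id, {})
            for month in range(12):
                start = 19 + month * 8
                value = int(line[start:start + 5])
                if value != -9999:
                    record[(year, month + 1)] = value / 100.0
    return stations
```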
-----------------------------------------------------
The whole ball of wax (including GHCN V3 data) can be downloaded from tinyurl.com/WattsBusterProject
Need to give a shout-out to the authors of the socket code and the QGIS software package -- most of the "heavy lifting" was done by those guys. (QGIS and python are *really* cool, BTW -- this project was my first serious intro to python.) Also, here's a quick summary (minus a lot of cluttering details) of the installation/operation procedure:
1) Make sure that gcc/g++, gnuplot, X11 and QGIS are installed.
(Easiest on a Linux Box)
2) Unpack the WattsBuster zip files
3) Launch QGIS.
On the top menu bar, go to Plugins->Fetch Python Plugins
Find and install the Closest Feature Finder plugin.
4) Shut down QGIS
5) Copy the closest_feature_finder.py file supplied in the WattsBuster package
to the appropriate QGIS plugin folder (typically ~/.qgis/python/plugins/ClosestFeatureFinder/)
6) In the WATTSBUSTER-1.0.d directory, build the anomaly.exe executable as follows:
make clean; make
7) Launch the executable per the supplied script:
./runit.sh
8) Launch QGIS
Load the supplied QGIS project file (GHCNV3.qgs)
Start up the Closest Feature Finder plugin and connect to the anomaly.exe
process.
9) Select a GHCN station layer and start ctrl-clicking on random stations.
This is not a polished end product; think of it as a "proof of concept" prototype.
I wonder how hard it would be to create a Voronoi diagram from the station positions and weight each station using the area of its Voronoi cell.
Regards, Millicent
Voronoi - well, not too hard, and there is a slightly simpler alternative. Some links here, and here. The second link uses Voronoi-like weighting to get a 60-station index.
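For anyone who wants to try the Voronoi route, SciPy can do most of the heavy lifting. A rough sketch (assuming a reasonably recent SciPy -- `SphericalVoronoi.calculate_areas()` appeared around version 1.5 -- and no duplicate station positions):

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def voronoi_weights(lats_deg, lons_deg):
    """Weight for each station = area of its Voronoi cell on the unit sphere."""
    lats = np.radians(np.asarray(lats_deg, dtype=float))
    lons = np.radians(np.asarray(lons_deg, dtype=float))
    # convert lat/lon to unit vectors on the sphere
    points = np.column_stack((np.cos(lats) * np.cos(lons),
                              np.cos(lats) * np.sin(lons),
                              np.sin(lats)))
    sv = SphericalVoronoi(points, radius=1.0)
    areas = sv.calculate_areas()              # cell areas, summing to 4*pi
    return areas / areas.sum()                # normalized station weights

# a weighted global anomaly for one year is then np.sum(weights * station_anomalies)
```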
I think the 'oversampling' issue is one that is completely ignored. The number of stations needed for an accurate representation of the global temperature, and of the temperature anomalies needed to report on global warming, is relatively small. Our information on global temperature is accurate. The real gaps have been the polar areas, which means that warming in the Arctic has been missed by CRUTemp until recently.
ReplyDeleteEh?!
Haven't Watts and his minions been saying for years that it's not possible to calculate an average global temperature (or, by extension, a fraction thereof)?
Or is it that there's still no such thing as an average global temperature, but if there was it would be the values calculated by Wattswiget?
I think that this warrants the recognition of Schrödinger's cake, where one can have one's denialist claim and ignore it too...
Bernard J. Hyphen-Anonymous XVII, Esq.
(First, mucho thanx to Eli for taking the time to post about my little project.)
Just want to throw out some random thoughts about this little project:
I have found that the global-warming signal in the surface temperature data is so strong that it will "jump out at you" even with the crudest processing techniques.
I implemented the simplest possible gridding/averaging procedure -- got around those "interpolate to empty grid-cells" hassles simply by making the grid-cells bigger! A lousy approach for "fine-grained" regional temperature estimates, but works surprisingly well on global-scales.
After reading about how Nick Stokes was able to compute decent global-avg estimates from ~60 stations, I tried it myself with my little app -- and saw pretty much the same thing that Nick saw -- a tiny number of stations gives you results surprisingly similar to what you get when you use thousands of stations, even with my super-simple implementation.
So that got me to thinking -- could an app be cobbled together that would allow someone to pick a few dozen stations at random via mouse clicks to roll his/her own global temperature estimates?
Not being a GUI/web-app guy, I put the idea on the back-burner -- until I discovered QGIS (www.qgis.org) -- it ended up providing me a "path of least resistance" to this goal. Experimented around with it a bit, and found a QGIS python plugin that did most of what I needed to implement this idea. Hacked at the plugin to give it the additional features I needed (i.e. extract the WMO ID of the station being clicked on and send a command via TCP to my global-temperature app telling it to update the global-avg temp estimates with that station's data).
Took my simple gridding/averaging app and "wrapped" it in TCP server code. The "server app" listens for commands sent from the QGIS plugin and updates/displays the global-avg results accordingly. The QGIS "client" (with clickable temperature stations on a map) and the global-temperature "server" (with the global-avg temperature display) can be run on separate machines (giving you twice the display real-estate).
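The actual wire format lives in the WattsBuster package itself; purely to illustrate the client/server split, a toy version of the server side might look like this (the one-station-ID-per-line protocol and the port number here are made up):

```python
import socketserver

class StationHandler(socketserver.StreamRequestHandler):
    """Toy stand-in for the anomaly 'server': reads station IDs, one per line,
    sent by a GIS client, and acknowledges each one."""
    def handle(self):
        selected = []
        for raw in self.rfile:                     # one line per clicked station
            station_id = raw.decode().strip()
            if not station_id:
                continue
            selected.append(station_id)
            # a real server would update and redisplay the global-average estimate here
            reply = "got %s (%d stations selected)\n" % (station_id, len(selected))
            self.wfile.write(reply.encode())

if __name__ == "__main__":
    # the client (e.g. a QGIS plugin) connects and writes one WMO ID per line
    with socketserver.TCPServer(("localhost", 5005), StationHandler) as server:
        server.serve_forever()
```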
Fun party trick: Set up client and server machines so that the individual clicking on stations can't see the results. Have a kid at the "client" machine click on random stations while the adults watch the global-average temperature results being updated on the "server" machine converge surprisingly quickly to the NASA/GISS results (and then try to figure out how that kid got mixed up with AlGore and the UN).
It really is remarkable how few stations are required to replicate the NASA/GISS long-term temperature trend. Also, in spite of the fact that for individual stations the raw and adjusted data may look quite different, by the time you've averaged a few dozen globally-scattered stations together, the raw/adjusted differences largely cancel each other out. (Adjusted data does produce a bit larger warming trend than raw data does -- for good reasons, but the differences really are quite small).
Interesting additional note -- with my app, the raw data results tend to align with the official NASA/GISS results a bit better than the adjusted data results do. So much for NASA/GISS relying on data "adjustments"!
Anyway, once this app is set up and running (a bit of a project in itself, depending on your machine and command-line comfort), with a bit of imagination and a series of mouse-clicks, you can interactively shoot down all of Willard Watts' major claims about the global temperature record.
And one final bit of advice to all those killer Rabetts out there -- Should Watts lobbest a Holy Hand Grenade of Antioch in thine direction, simply pulleth out the pin and lobbest it back!
--caerbannog the anonybunny
Most of the heat is in the oceans, either as heat of fusion in melted ice or warmer ocean waters. (Melting an Arctic ice shelf does not change sea level, but it takes a bit of heat.)
A good bit of the heat in the atmosphere is in latent heat that is not apparent in reported air temperatures. E.g., a small rise in relative humidity indicates a significant rise in heat content.
In short, the post is a very conservative approach.
Absent a head of lettuce or Sir Bors, all I can offer the anonybunny by way of refreshment for his labors is a live entertainment poster
ReplyDeleteDr. Lumpus Spookytooth, phd.
Not sure what the purpose of the post is other than to reinforce problems skeptics have already pointed out.
"But it should be made clear that the NASA/NOAA/CRU are more interested in quantifying how average surface temperatures have *changed* than in just calculating the "Earth's temperature".
what ho! but of course, because we don't want anybody to know that earth is below GAT and average levels of atmospheric co2.
"It should be emphasized that we aren't interested so much in calculating the Earth's absolute temperature as we are in estimating how it has *changed* over time."
Again, because it destroys your narrative. Secondly, Eli and friends are not at all interested in earth's temperature change over time, just the last million years really. Otherwise, why adopt the company line at Rabbet Run "but co2s the highest its been in the past million years" Unfortunately, those who studied under the auditor know that in the last 600 million years, the only period with co2 levels this low was the carboniferous.
Dr. Lumpus Spookytooth, phd.
I'd also note that there's some serious hypocrisy going on here. How can you make fun of Anthony Watts and then present Caerbannog who is a Marilyn Manson impersonator as a credible source of climate work? Let's take a look at a real rookie mistake here.
"3) The "dropped stations" issue pushed by Watts is a complete non-issue. Compare results computed with all stations vs. just the stations still actively reporting data (separating them out is a very easy programming exercise), and you will get nearly identical results for all stations vs. just the stations that haven't been "dropped".
Whoops! Apparently, it is a serious issue because you can't compute all results! How can you compare a station actively reporting data with a station that has been dropped? You can't! ho ho ho ho!
OWNED!
Dr. Jay Cadbury, phd.
@Caerbannog
Please explain the changes that NOAA has made to the surface temperature stations to increase there range. After all, we've gone from 6,000 to less than 1500, so surely they updated the remaining stations to measure greater distances right?
Evaluating the "dropped stations" effect is easy:
1) Compute global-avg results using all stations (per NASA/GISS).
2) Repeat the computation using only stations that are currently reporting data. It should be obvious to most (but not necessarily all) folks that stations currently reporting data haven't been "dropped".
3) Compare the results from (1) and (2) -- you will see very little difference between the two.
--caerbannog the anonybunny
It's clear to me that Dr. Lumpy must have shares in Jurassic Park. He's happily watching temperatures go back to "GAT" (whatever that is supposed to mean) so that he can release his reconstituted dinosaurs back into the wild. They need the higher temps to thrive.
Dr. Jay, NOAA has been working very hard to increase the "there range" of their temperature stations, I'm glad you asked. They use a scanning tunneling there-ray sensor with a new design power-boost amplifier to increase the transmission range. As well, receiver technology has greatly improved over the last decade (yet another spin-off from NASA!) allowing the detection of far weaker there-ray signals. The increased transmit power, and better signal-to-noise has allowed NOAA to increase the there range of their sensors by up to a factor of 6.28 in best case geometry.
"After all, we've gone from 6,000 to less than 1500, so surely they updated the remaining stations to measure greater distances right?"
One of the things that the software I put up can do is compute global-average temperature estimates from very small subsets of the temperature station network.
For example, one option (when run in "batch" mode) is the ability to select a single station per lat/long grid-cell (the grid-cell size can be specified by the user).
This way, if you specify very large grid-cell sizes, you can compute global-average estimates from just a few dozen stations (while ensuring reasonably uniform global coverage). Do that and you will still get results that line up with NASA's -- even though the average distance between stations you've processed can be as much as 3000 km (or more).
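That kind of thinning is easy to sketch (this is not the app's actual batch-mode code; it just keeps the first station it sees in each big grid cell):

```python
def one_station_per_cell(lats, lons, cell_deg=30.0):
    """Pick one station per cell of a cell_deg x cell_deg lat/lon grid.
    `lats`/`lons` map station id -> degrees; returns the chosen station ids."""
    chosen = {}
    for sid in sorted(lats):
        cell = (int((lats[sid] + 90.0) // cell_deg),
                int((lons[sid] + 180.0) // cell_deg))
        chosen.setdefault(cell, sid)       # keep the first station seen per cell
    return list(chosen.values())
```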
Of course, the pros who really know the data have been aware of this for many years.
--caerbannog the anonybunny
"Schrödinger's cake".
Thank you, Bernard. If ever the denier circus had to be summed up in two words, there could be none better expressed than that.
"Again, because it destroys your narrative."
...says a commenter who shows no sign of understanding what that "your narrative" actually is.
But that's par for the course in them thar hills.
A challenge for Cadbury/Spookytooth -- download the WattsBuster zip file, and install the software per the detailed instructions in the README files.
If you have a Windows/Mac machine with lots of memory, install VirtualBox (virtualbox.org) and then use it to set up an Ubuntu Linux (www.ubuntu.org) virtual machine. Then install WattsBuster on your virtual machine. That's probably the easiest way to go.
Should be slam-dunk easy for a sharp phd such as yourself.
Then run the WattsBuster app per the instructions provided -- select any 50 "long record" stations at random (use the "from 1885" and "to 2010" GHCN station layers in QGIS).
Upload an image of your results to imageshack.com and post a link to it here.
--caerbannog the anonybunny
Another solution is to use a live CD/DVD - a bit less practical, but it avoids the installation of Virtualbox on a computer.
Anyway, thanks caerbannog, you actually inspired me to do something similar.
I saw several claims that "warming stopped for the last 15 years". I knew that climatologists had already debunked that nonsense, but I decided to have a go at proving it myself. So I wrote a crude Matlab script (yes, I know, I'm lazy), tested it on HadCRU4 and GISTEMP data, uploaded it to pastebin, and said "well, here is my code, here is the data, I find that the probability of a zero trend is negligible for the 1995-2010 period. If I botched my code, please tell me so".
Reactions were interesting:
- "then try 1980, 2000, try to calculate trends for ALL times". I provided the code, but they are not willing to use it; I am expected to do all the work instead. Typically McIntyresque.
- "why should the official science tell me there is warming while my EYES see there is not?" This is a phenomenon also observed by zeteticians: overconfidence in the eye's capacity not to be fooled.
- a goalpost shift: "what is the strength of this "faux plat" (false flat; sorry, the discussions were in French, and I am not fluent enough to translate the idiomatic expressions correctly) anyway?" I had, however, already given the best estimate, which was not far from the 30-year estimate ...
- and then, the silence. Or, to be more precise: a sudden concentration on someone else's comments. Needless to say, I did not code for nothing, so I won't let them off the hook.
Surprisingly, I did not have any comments on the code itself, although I know I have something wrong with my error estimates - they are too small. This does not affect the result, but it kinda bothers me.
Maybe they are dissecting this in their hideout. Maybe they do not even know how to analyse data.
I let others draw out the common elements in both experiments, although Caerbannog's code is far more advanced than my 30-line script riddled with errors ...
Side note: can anyone point me to a correct source for estimating the error bars on the trend value? I did the following (sketched in code below):
1) estimate the variance sigma² of the detrended signal, i.e. of the residuals from the best-fit linear trend
2) for several trial trends b:
- calculate the model T = b*t + a
- calculate the squared residual sum (data - model)²
- calculate the un-normalized cost function f(b) = exp(-residual(b) / (2*sigma²))
3) from the resulting cost function we can calculate the probability that the trend is negative:
P = sum(f for b <= 0) / sum(f)
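In code, that recipe looks roughly like this (numpy; the intercept a is re-fitted for each trial trend b, which is my interpretation of step 2):

```python
import numpy as np

def prob_trend_nonpositive(t, temps, trial_trends=None):
    """Scan trial trends b, score each with f(b) = exp(-residual(b) / (2*sigma^2)),
    where sigma^2 is the residual variance of the best-fit line, then return
    P(trend <= 0) = sum(f for b <= 0) / sum(f)."""
    t = np.asarray(t, dtype=float)
    temps = np.asarray(temps, dtype=float)
    b_best, a_best = np.polyfit(t, temps, 1)          # best-fit slope and intercept
    sigma2 = np.var(temps - (b_best * t + a_best))    # variance of the detrended signal
    if trial_trends is None:
        trial_trends = np.linspace(b_best - 0.1, b_best + 0.1, 2001)
    trial_trends = np.asarray(trial_trends, dtype=float)
    f = np.empty_like(trial_trends)
    for i, b in enumerate(trial_trends):
        a = np.mean(temps - b * t)                    # intercept re-fitted for this b
        residual = np.sum((temps - (b * t + a)) ** 2)
        f[i] = np.exp(-residual / (2.0 * sigma2))
    return f[trial_trends <= 0].sum() / f.sum()
```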
I think the problem lies with the sigma² estimation, but some quick reading did not give that impression ... any help, please?
bratisla