There is, quite naturally, some confusion about how modern (post-Eisenhower) measurements of CO2 have been made (and there are changes underway). Even Spencer Weart, for example, in his The Discovery of Global Warming, gets it wrong:
Keeling wanted to buy a new type of gas detector (namely, infrared spectrophotometers) that penned a precise and continuous record on a strip chart.

Eli was reminded of this issue in a comment by one Izen over at Confused* Judy's place. (What, you doubt that Judy is Confused? That would be Judith Curry, former chair and professor of atmospheric sciences over at Georgia Tech. Well, go read what Brian wrote last night first, and then what Confused Judy wrote yesterday about 2014 being the hottest year on record.) Izen:
I have no doubt that chemical and physical methods used in the past could be refined to a high accuracy. The technology of laser spectroscopic measurement was a more recent development than atom splitting. That provided Keeling with a better method. But the importance of maintaining that accuracy, and maintaining control over the conditions, time, and position at which the measurements were made, is Keeling's contribution to the science.

Now anybunny familiar with the spectrophotometers of the time, IR, Vis, UV (Beckman DU folks) would have some doubts about what kind of accuracy even one so obsessed as Charles Keeling could achieve.
To understand what and why, start with a working definition: a spectrometer disperses (or, in the case of an FT spectrometer, shuffles) light of different frequencies so that the response of the sample can be measured as a function of frequency. As to the measurement, there are in principle two types that can be made. The most common is absorption, where the intensity of the light at different frequencies/wavelengths is measured with and without the sample. The second is excitation, where the response of the sample to the light is measured, typically by fluorescence or ionization, but also by the noise the sample makes when excited. Bunnies knew all about this from the year dot; the photo-acoustic effect was first described by Alexander Graham Bell.
The advantage of an absorption measurement is that it is absolute. You only need to measure the relative amount of light with and without the sample and use the Beer-Lambert law

A = log10(Io/I)
where A is the absorbance and I and Io the intensities with and without the sample. The absorbance has a simple relationship with the cross-section σ, the length l of the sample through which the light passes (use your ruler), and the concentration N of the interesting stuff (and sometimes of interferences in the sample)

A = σ l N / ln 10

where the ln 10 sneaks in because absorbance is conventionally reported base 10 while cross-sections are defined base e.
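As a minimal sketch of the bookkeeping (the cross-section and cell length below are purely illustrative numbers, not those of any real instrument):

```python
import math

def absorbance(I0, I):
    """Base-10 absorbance from intensities without (I0) and with (I) the sample."""
    return math.log10(I0 / I)

def number_density(A, sigma, length):
    """Invert A = sigma * N * l / ln(10) for the concentration N.

    sigma: absorption cross-section in cm^2 (base-e convention),
    length: path length in cm; returns N in molecules per cm^3.
    """
    return A * math.log(10) / (sigma * length)

# Half the light gets through: A = log10(2) ~ 0.301
A = absorbance(100.0, 50.0)
# With an illustrative cross-section of 1e-18 cm^2 over a 10 cm cell:
N = number_density(A, 1e-18, 10.0)
```

Note that nothing here needs an external reference gas: the ratio of two intensity readings is enough, which is what "absolute" means above.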
The difficulty of using absorbance for accurate and precise measurements of small concentrations is that if the difference between I and Io is small, the absorbance is small, and you would need to measure the light intensity to a precision and accuracy much higher than the electronics of yesteryear would allow, and which even today would be tough.**
Excitation spectroscopies are more sensitive, because in the absence of whatever absorbs the light the baseline is zero. There is a cost. Of course there is a cost; what, did you think there is a free lunch at the hutch? The cost is that you need a calibrated sample whose concentration is accurately known to compare against. Although David Keeling had developed the chemical methods and skill to measure CO2 in atmospheric samples accurately, each measurement was painstaking, and for the kind of measurements that were needed in the pilot Mauna Loa program something better was needed.
Which is where Alexander Graham Bell, and a new instrument developed by V.N. Smith for IR measurements of various gases, come in. Smith realized that if what you want is to measure the effect of a gas on the intensity of light passing through a sample, you do not have to disperse the light and measure the effect at a single frequency; rather, you can compare the intensity passing through cells with and without the absorbing molecule:
In general, however, I is a very small fraction of Io, so that a detector which is responsive to all wavelengths will be irradiated by a large amount of energy Io in the absence of X in the absorption cell, and by only a slightly smaller amount of energy I in the presence of X therein.
This difficulty can be overcome by irradiating two detectors, one through an absorption cell containing X, and the other through an empty cell, or a cell containing a non-absorbing gas. The difference in the amount of energy received by the two detectors will be I, and a more sensitive indicating or recording instrument can be applied to the output from the two detecting elements. Although the calibration in this case will vary with the total energy Io, since the absorbed energy ex is a fraction thereof, this additional difficulty can in turn be overcome by using the null principle, that is, by stopping down the energy passing through the empty cell until the energy difference in the two detectors is zero, and then calibrating the action of the optical wedge used for this purpose in terms of concentration of the component X.

This is called the Non-Dispersive IR method, often written as NDIR, and it is the basis of a whole raft of modern NDIR meters for monitoring CO2 and other gases in many applications, medical, agricultural and such. The characteristic of such meters is that they use a thin-film filter to restrict the wavelength range of the light that reaches the detector, or they are used in a situation where there is only a single absorber.
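The null principle is easy to simulate. A sketch, with a hypothetical detector model (not Smith's actual optics): drive the wedge until the two detectors balance, and note that the balancing point depends only on the optical depth of the gas, not on the lamp intensity Io.

```python
import math

def sample_signal(I0, tau):
    """Detector behind the absorption cell: Beer-Lambert transmission
    at optical depth tau = sigma * N * l."""
    return I0 * math.exp(-tau)

def reference_signal(I0, wedge):
    """Detector behind the empty cell, stopped down by the optical wedge
    (wedge = fraction of the beam blocked)."""
    return I0 * (1.0 - wedge)

def null_balance(I0, tau, tol=1e-9):
    """Adjust the wedge by bisection until the two detectors read the same.
    The nulling wedge position depends only on tau, not on I0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reference_signal(I0, mid) > sample_signal(I0, tau):
            lo = mid  # reference still too bright, block more
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Running `null_balance` with the same gas but different lamp intensities returns the same wedge position, which is the whole point: the reading is insensitive to drift in Io.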
But the instrument is only part of it. The first thing needed is a precise and accurately known calibration sample, a gas whose concentration is known exactly (or as exactly as possible). The current state of the art is described by NOAA, where Peter Tans maintains the WMO international standard. IEHO, the lack of good calibration standards to test their results was a major failure of most of the pre-Mauna Loa CO2 measurement series. NOAA relatively recently (1995) took possession of the standard cylinders from the Keeling labs at Scripps. A calibration standard is a necessary bullshit test of any measurement. Still, anyrabbit who has ever made up calibration mixtures, especially at low concentrations, knows that this is not bunny play. There are any number of materials issues, worrying about absorption and reaction on surfaces, issues associated with ensuring that the mixture is homogeneously mixed, and don't talk about the issues with pumps and any moving part in the system. Maintaining a sample over long periods is a horror which requires constant rechecking and not a little bit of hard experience.
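How the standards get used can be sketched as a hypothetical two-standard bracketing scheme in the spirit of the NOAA procedure (the real SOP uses more standards and much more bookkeeping; the readings below are invented):

```python
def calibrate(raw_sample, raw_low, raw_high, ppm_low, ppm_high):
    """Convert a raw instrument reading to a mole fraction by linear
    interpolation between readings on two calibration gases that bracket it."""
    slope = (ppm_high - ppm_low) / (raw_high - raw_low)
    return ppm_low + slope * (raw_sample - raw_low)

# Suppose standards at 350 and 400 ppm read 0.712 and 0.804 on the
# instrument; an air sample reading 0.755 then comes out near 373 ppm.
ppm = calibrate(0.755, 0.712, 0.804, 350.0, 400.0)
```

The instrument's raw output never has to be absolutely accurate, only stable between calibration runs, which is why the quality of the standards carries the whole measurement.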
The second, not so obvious, is the flow system that brings the sample and calibration gases into the cell. Again, materials are a major issue in building a system that samples from where you want the samples to come from, monitors the meteorology, etc. Since measurements of the sample and the calibration gas are alternated (see this description of the NOAA SOP), if the measurement is to be automated this has to be done with electro(mechanical) valves. Design of the flow and sampling system is absolutely crucial.
The third is REALLY obvious: you have to freeze out the water from the flowing sample gas.
* as in the George Bush sense
** As a side note, trying to measure very high absorbances requires measuring a very small signal, down in the noise as it were. On commercial instruments (which use base-10 logs to report absorbance), unless you have paid a lot for a special instrument, don't believe a base-10 absorbance above 2. Cut the concentration or make the path length longer.
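The arithmetic behind that rule of thumb:

```python
# Transmitted fraction at base-10 absorbance A is 10**-A:
transmission = {A: 10 ** -A for A in (1, 2, 3)}
# A=1 passes 10% of the light, A=2 only 1%, A=3 just 0.1% --
# at A=2 and beyond the measurement rides on a sliver of the lamp
# intensity, so detector noise and stray light quickly swamp the signal.
```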