Thursday, January 21, 2016

Hmm. . . That's Suspicious


Eli, being a glutton for punishments and carrots, was easily led astray by the likes of Barry Bickmore describing a usual Moncktonian perambulation (aka a trip around the mulberry bush). Seeking amusement, Eli was going thru the comments at Willard Tony's when this appeared in the LCD from Roy Spencer:

"The quoted statement is incorrect as it stands. PRTs are used to measure the temperature of the onboard (warm-point) calibration targets. The cosmic background (cold point) is assumed to be 2.7 K (or something close to that..it doesn’t really matter). PRTs are laboratory standard and highly stable, each one being carefully calibrated before launch. 
Those two calibration points are used to calibrate the Earth-viewing data. And the AMSR-E calibration is a special case of poor design…the warm target was made of a material with low thermal conductivity. The instrument was designed in Japan by engineers just coming up to speed on the technology, and it should never have been approved by NASA in the first place. But, the instrument was “free” to NASA, so there was less scrutiny. I say all this as the AMSR-E U.S. Science Team leader."
Dr. Roy was explaining to the assembled WattKnots how the (A)MSU system works by interpolating the signal between deep space (2.8 K) and a hot target.  The hot target has a number of platinum resistance thermometers buried in it and a pseudo-blackbody surface (black is the most difficult color).  The targets are technically complex.  They are nowhere near as laboratory standard nor as highly stable as platinum resistance thermometers. Eli's comment, which they let through:
If the warm target is made of a material with low thermal conductivity, the implication is that it could slowly age, e.g. its thermal conductivity could change and thus the temperature distribution across the warm target could slowly change. With eight PRTs this should be observable.  If there is a temperature distribution across the warm target, the detector could be looking at a varying emissivity. Just sayin.
Now, as the regulars recognize, the Bunny has been playing with the idea that there is some sort of long-term drift in the microwave sounding units or in the analysis of data from these units, so this was a new item in play.

It turns out that there are many opportunities for changes, including changes in the emissivity of the target coatings.  One of the secret sauces in the analysis of AMSU returns is figuring out, on station, the non-linear gain of the antenna from the two calibration points.  Scott Church made a long study of the AMSU system which describes the mess, best described as TL;DR, but the bottom line has a name: the Instrumental Body Effect.  There is certainly enough room for all sorts of mischief.
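The two-point calibration Dr. Roy describes, plus a non-linear correction, can be sketched in a few lines. This is a minimal illustration, not the actual UAH/RSS processing: the constants, the function name, and the quadratic form of the nonlinearity term are assumptions for the sketch.

```python
# Hedged sketch of (A)MSU two-point calibration: interpolate Earth-view
# counts between the deep-space and warm-target calibration views.
# All numbers below are illustrative, not actual flight constants.

T_COLD = 2.73  # K, assumed cosmic microwave background temperature

def brightness_temperature(counts, counts_cold, counts_warm, t_warm, u=0.0):
    """Convert instrument counts to a brightness temperature (K).

    counts      -- counts at the Earth-viewing scan position
    counts_cold -- counts while viewing deep space
    counts_warm -- counts while viewing the warm target
    t_warm      -- warm-target temperature from the PRTs (K)
    u           -- nonlinearity parameter, nominally determined in
                   prelaunch thermal-vacuum testing (0 = purely linear)
    """
    # Linear interpolation between the two calibration points
    frac = (counts - counts_cold) / (counts_warm - counts_cold)
    t_lin = T_COLD + frac * (t_warm - T_COLD)
    # One common form of nonlinearity correction: a quadratic term
    # that vanishes at both calibration points by construction
    return t_lin + u * (t_lin - T_COLD) * (t_lin - t_warm)

# Halfway between the calibration counts, the linear estimate lands
# halfway between the two calibration temperatures
t_mid = brightness_temperature(15500, 13500, 17500, 290.0)
```

Note that with only two calibration views the nonlinear parameter cannot be solved for on orbit; it has to come from ground testing, which is one reason prelaunch thermal-vacuum calibration matters so much.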

Tell Eli about the gold standard.

22 comments:

Kevin O'Neill said...

Yes, it has been a problem from day one. From the AMSR-E Instrument Description:

"AMSR-E's calibration system has a cold mirror that provides a clear view of deep space (a known temperature of 2.7 K) and a hot reference load that acts as a blackbody emitter; its temperature is measured by eight precision thermistors. After launch, large thermal gradients due to solar heating developed within the hot load, making it difficult to determine from the thermistor readings the average effective temperature, or the temperature the radiometer sees. The hot load temperature is not uniform or constant, and empirical calibration methods must be employed."

Harry Twinotter said...

I find it astounding that some put so much faith in a temperature proxy measured from something like 100 km away.

A silly example: it would be like a weatherman measuring the temperature of a neighboring city using microwave emissions from the oxygen above that city, and using that in their weather report.

I know arguments from personal incredulity are a logical fallacy, but still!

David Sanger said...

And yet, Harry, astronomers are detecting the most minute gravitational effects, which might be due to a ninth planet up to a trillion miles away.

The issue is not whether precise measurements of atmospheric temperatures can ever be made from space. It is just whether the existing devices are as accurate as they were thought to be, or were designed to be.

Harry Twinotter said...

David Sanger.

I agree. I am still happy to accept the satellite measurements as they are somewhat consistent with other measurements, even if they do appear to have technical issues.

I thought it cunning of Dr Roy Spencer to describe the satellite sounding units on his website as if they were some sort of high-tech thermometers. I don't think he mentioned the "temperatures" they were "measuring" were over 700km away.

As for the ninth planet, it is a good example of a scientific hypothesis. I will only believe it if they manage to take a photograph of the object; I am sure they are looking.

Everett F Sargent said...

David Sanger,

You really shouldn't mix up entirely different technologies.

Take it from an old (1st life) land surveyor, humanity has been measuring distances and angles for a very long time, several millennia, in fact.

A ninth planet is inferred from the exact same types of measurements ...
International Celestial Reference Frame
https://en.wikipedia.org/wiki/International_Celestial_Reference_Frame


But wait ...
there's a bombshell ...
there's always a bombshell ...

AIRS/AMSU/HSB on the Aqua Mission: Design, Science Objectives,
Data Products and Processing Systems
http://www.atmos.berkeley.edu/~inez/MSRI-NCAR_CarbonDA/papers/barnet_refs/Aumann_IEEE8big15aug02.pdf
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1196043&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1196043

Lots of rather interesting numbers to be had in that one paper, like maybe <1K rms (under clear and partly cloudy conditions) and "1K/1km" and "Ultimately the longterm validation and monitoring of the radiometric calibration accuracy at the 0.2K level required for climate studies will exclusively use the surface marine reports [24]."

Oh look, there's the benchmark(s): SST measurements (they also mention RAOB elsewhere). So they are talking like ~1K but need 0.2K for climate studies.

The main story might have to do with something about the goose that laid the golden egg (you don't tell your funding sources that developing a TLT temperature climatology is a 'bridge too far').

Victor Venema said...

David Sanger: "It is just whether the existing devices are as accurate as they were thought to be, or were designed to be."

These satellites were designed to measure humidity for meteorology, not to estimate temperature for climatology.

Chris G said...

> Oh look, there's the benchmark(s): SST measurements (they also mention RAOB elsewhere). So they are talking like ~1K but need 0.2K for climate studies.

Atmospheric sounding is hard. Speaking as someone who worked with AIRS (and some AMSU) data to retrieve temperature and water vapor profiles, +/-0.2 K was about as good as it got. I did that work a decade ago and I don't recall for what range of altitudes/pressures the retrievals were that good, but I'm going to say it was in the stratosphere. Accuracy was significantly less at low altitudes/high pressures and really went south in the boundary layer. +/-1 K near the surface over land sounds about right as a typical error. (Again, my memory is a decade old so, not that you would, but don't take it as gospel.)

I only worked with AIRS/AMSU data for maybe 3-4 months so I'm by no means an expert on sounding, but it was enough time to get the lay of the land, understand techniques and challenges, and get a sense for who was doing good work. Clive Rodgers' "Inverse Methods for Atmospheric Sounding" was my bible.

Hank Roberts said...

> and yet astronomers are detecting
> the most minute gravitational effects

You're a photographer. You know the resolution of images from a telescope and how much better that is than these microwave instruments.

Russell Seitz said...

Could Eli nose out the operating temperature range of this gizmo in orbit?

The thermal conductivity of both metals and insulators goes up as the ambient temperature goes down, and waay up as temperatures fall below 100 K.

(There are reasons physicists prefer K, and the characteristic temperature of materials, theta Debye, which figures in phonon heat transport, is one of them.)

EliRabett said...

BTW, between 275 and 300 K; see Figure 4 here:

http://onlinelibrary.wiley.com/doi/10.1029/2011JD016205/full

Given the issues, any accuracy and precision is a testament to some clever folk.

Russell Seitz said...

Next time they should incorporate a CVD diamond heat spreader to iron out the kinks -

By the time they get to orbit, all performance enhancing materials are cheap, and anything goes for planetary probes- a major engagement ring died to give Pioneer Venus a peep at the surface.

E. Swanson said...

Eli, the AMSR-E warm target isn't the same as that used by the MSU/AMSU instruments. Spencer's reply was similar to a comment long ago in EOS (2001):

"A. Shibata (JMA/MRI) had the difficult task of explaining a problem with the warm calibration target (hotload). The AMSR and AMSR-E warm targets are manufactured from a material with a thermal conductivity of 0.13 W/m/K. (SSM/I’s target had an epoxy covering an aluminum core with a thermal conductivity of 1.37 W/m/K) The plan is to move 2 PRTs (Platinum Resistance Thermometer) from inside the pyramids to the outside surface of the warm target, and to develop a method for calibration of the data that has two independent variables: temperature of the instrument and channel frequency."

http://landval.gsfc.nasa.gov/pdf/EOS_obs_may_jun01.pdf

Those pyramid-like projections improve the emissivity of whatever surface is applied to them, since they tend to emit and absorb multiple times before the microwave energy exits the target. This is microwave energy, not visible, so color is not a good description of the required surface property.

There are also problems with the MSU/AMSU warm targets, such as the effects of moving from the sunlit side to the night side during each orbit, changes in LECT, etc. Some years back, I happened on some data collected during thermal vacuum testing of an MSU. The instrument output while seeing the cold target was quite large, IMHO, likely caused by emissions from the instrument itself. Those emissions would also impact the data while viewing the warm target, so the brightness temperature scale depends on building a calibration curve to remove the noise. Of course, that noise is still in the data, which one might think would make the real signal-to-noise value quite low. I haven't seen any other real measurement data, so my impression surely has been put to bed by now. Here's some discussion:

http://www.star.nesdis.noaa.gov/smcd/emb/mscat/algorithm.php

EliRabett said...

Thanks, Eric. It is all indeed TL;DR, but a fair amount of it appears to be on line; the problem is that without immersion it is hard to separate wheat from chaff.

As to the coating, there has been a fair amount of progress using nanopatterning, but the stuff may not be space qualified yet.

What is your opinion of changing snowfall coverage in the spring and summer having an effect? (winter and fall appear pretty constant)

Russell Seitz said...

In the back of the silver bullet magazine I found a small jar of hexagonal boron nitride, which combines serious anisotropy of thermal conduction and, though an insulator, specular reflection near the 300 K black body peak via a giant reststrahlen resonance. It looks white, by the way.

EliRabett said...

2D version of course.

Russell Seitz said...

White graphite-- BN is isoelectronic with carbon

E. Swanson said...

I hadn't seen Scott Church's long article. It was gratifying to see he took note of my 2003 paper, which might best be described as a "one hit wonder". Anyway, as I haven't been motivated to attack the snow which has buried my D/W and am thus marooned, I looked around for information on Dickie radiometers, the sort of electronic device used in passive microwave sensors.

I'm not an EE, so the reality of the "counts" produced by the instrument is still a mystery to me. The circuits appear to end with an integrator, perhaps a voltage to frequency converter and a counter, which, I presume, is reset at some predetermined time interval, thus giving one "count". The resulting total of "counts" during a stop at each scan position leads to the calculation of Tb by comparing the measured value with the number of counts at the hot and cold targets via interpolation. Then, there's the additional problem with the calibration curve(s) for each instrument and each frequency.

Spencer and Christy have repeatedly stated that the MSU provides measurements to 0.01K accuracy, which is the accuracy of the PRT elements on the hot target. But the precision of the electronics would seem to be limited to the quantization limit, that is, the Kelvin equivalent of one "count" from the electronics. One would need to look at the raw data for the deep space view and the hot target view to ascertain this value.

HERE's a report documenting the thermal vacuum chamber calibration of the first AMSUs. Looking thru that report, in Table 3, one finds that the estimated number of counts when viewing deep space is about 13,460 for Channel 6, whereas the measured value for channel 6 is about 17,500 at 280K (Figure 7). Both values depend on the temperature of the instrument. So the scale of the instrument from 2.7K to 280K is about 4,000 counts, whereas the "noise" is about 13,000. This range appears to indicate that the resolution of the instrument is about 0.07K, if I'm doing the calculation properly. There's much more information in the report, such as the corrections for the non-linear nature of the instruments. Of course, all this is stuff which happened before the instruments were put into orbit...
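The ~0.07 K figure is just the calibration temperature span divided by the count span; a quick sketch of the arithmetic, using the approximate Channel 6 numbers quoted from the report:

```python
# Back-of-the-envelope resolution estimate for AMSU Channel 6, using the
# approximate counts quoted from the thermal vacuum calibration report.

counts_cold = 13460          # ~counts while viewing deep space (estimated)
counts_warm = 17500          # ~counts while viewing the ~280 K target
t_cold, t_warm = 2.7, 280.0  # K, the two calibration temperatures

span_counts = counts_warm - counts_cold              # ~4,000 counts
kelvin_per_count = (t_warm - t_cold) / span_counts   # quantization step

print(f"{span_counts} counts over {t_warm - t_cold:.1f} K "
      f"= {kelvin_per_count:.3f} K per count")
```

The quantization step sets a floor on the precision of any single view, separate from the radiometric noise of the receiver itself.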

E. Swanson said...

After some hours googling and ducking around the 'Net, I've yet to find a detailed description of the integrator electronics in the AMSU Dicke switched radiometers. Note the correct spelling of "Dicke" (after the man). I did gain a greater awareness of the output in "counts" and also the term "NEdeltaT" (or noise equivalent delta temperature (NEDT)) presented in Table A-2 of the calibration document. That term has also been called "the minimum detectable change in brightness temperature" and in the design spec this is required to be less than 0.25K. The calibration tests produced a value of 0.15 K for that term, which would be about twice the 0.07 K I calculated as the delta T of each count.

Also, it may be of interest that there are two AMSU instruments involved in computing the various temperature time series. AMSU-A1-1 scans channels 6, 7, and 9-15, while AMSU-A1-2 scans channels 3, 4, 5, and 8. Given that the new UAH TLT v6 is a combination of AMSU channels 5, 7 and 9, we see that this calculation uses the data from completely different sources, whereas the old MSU series all used channels from single instruments. No problems there, S & C have it all worked out, I'm sure...

E. Swanson said...

I've been toying with the latest UAH v6 files. I calculated a TLT time series using Spencer's equation as applied directly to the TMT, TP and TLS monthly time series. That equation is:

TLTv6 = 1.538 × MSU2 − 0.548 × MSU3 + 0.01 × MSU4

The resulting time series were virtually identical to those from UAH, as may be seen in THIS EXCEL SPREADSHEET (warning, 632k). I suggest that the basic question remains unanswered: "How did S & C create this equation?" Of course, there's no way to validate the calculations used to produce TMT, TP, or TLS, except to re-calculate the entire UAH processing chain...
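One quick sanity check on that equation: the weights sum to exactly one, so a uniform warming of all three layers passes through unchanged, while the negative middle weight subtracts stratospheric influence. A sketch (the anomaly values are made up for illustration; the mapping of MSU2/3/4 to the TMT, TP, and TLS series follows the comment above):

```python
# UAH v6 TLT as a fixed linear combination of three channel anomaly
# series, using the weights from the equation quoted above.

def tlt_v6(tmt, tp, tls):
    """Combine mid-troposphere (TMT), tropopause (TP), and lower-
    stratosphere (TLS) anomalies (K) with the quoted weights."""
    return 1.538 * tmt - 0.548 * tp + 0.01 * tls

# The weights sum to one, so a uniform anomaly is preserved
assert abs(tlt_v6(1.0, 1.0, 1.0) - 1.0) < 1e-12

# Tropospheric warming with stratospheric cooling yields a TLT
# anomaly larger than the TMT input alone (illustrative values)
anomaly = tlt_v6(tmt=0.25, tp=0.10, tls=-0.30)  # about 0.33 K
```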

EliRabett said...

If forced to, Eli would hazard the guess that they started with some sort of balloon sonde series and forced the fit. Of course, that means they were calibrating and testing against the same record, which is a real no-no.

E. Swanson said...

My supposition is that S & C used the theoretical emissions profiles for their three UAH v6 channels to calculate some optimal combination with the intent of removing the stratospheric contamination from T2, the result being the three channel weights in their equation. In order to make this calculation, they needed some lapse rate profile, and my guess is they assumed it was OK to use the US Standard Atmosphere. They likely also used the same process to calculate their 3 channel profiles as well. I have suggested that this approach could be flawed, as the real-world weights vary with both season and latitude, particularly at high latitudes in Winter.

E. Swanson said...

To add more insight into the new UAH v6 satellite data, I used the RSS channel weighting functions and applied the UAH v6 weights to reproduce the graph shown in my last post. HERE is a graph of the results.

Sorry to say, the plotting package in Open Office would not allow a secondary X axis, so I rotated the graph 90 degrees compared to Spencer's graph; thus one would need to reverse that rotation for a more direct visual comparison. This was necessary as I wanted to plot the temperature vs. pressure height for the US Standard Atmosphere, which is included in the RSS data files. One can see the simulated tropopause region between (roughly) 300 and 60 hPa and the effect of Spencer's TLT weighting on the resulting TLT v6 profile.

I also played with the emission curves myself, using all three profiles to generate another equation, which is plotted in green as the "ES T-2" curve, which falls between the UAH TLT and the TMT curve at the tropopause pressures. When applied to the UAH v6 data, the result is a slight cooling to the north of the tropics, as shown in this table:

Region       UAH 5.6  UAH 6.0  ES T-2  ES TTT  RSS TTT  UAH TMT
Globe          0.14     0.11     0.11    0.11     0.11     0.07
NH             0.19     0.14     0.13    0.13     0.16     0.09
SH             0.09     0.09     0.09    0.09     0.07     0.05
N Polar        0.43     0.22     0.20    0.19     0.25     0.15
N Ex Tropic    0.25     0.16     0.15    0.15    -----     0.11
N Mid Lat     -----    -----    -----    0.16     0.16    -----
Tropics        0.08     0.10     0.10    0.10     0.13     0.07
S Mid Lat     -----    -----    -----    0.05     0.05    -----
S Ex Tropic    0.09     0.08     0.08    0.08    -----     0.04
S Polar       -0.02    -0.01    -0.01   -0.01    -0.06    -0.04