Saturday, July 16, 2016

Post Modern Views of Science

A couple of days ago, Eli went off and ranted about an old philosopher, James Blachowicz, and his take on science which, given the history of this thing, left Eli pretty much thinking that Prof. Blachowicz was eating magic mushrooms.

Given the placement in the New York Times, not surprisingly, there were a few other comments, and not surprisingly (Eli being always right) they pretty much agreed with Eli.

Jerry Coyne, writing under the Rommian title of "Is this the worst popular philosophy piece ever? A philosopher argues that science is no more reliable than philosophy at finding truth", pretty much concludes that it was:

If the NYT can publish tripe like this, perhaps it’s time for someone to pull a Sokal-like stunt, writing a bogus philosophical analysis of science and submitting it to The Stone. But perhaps that’s exactly what Blachowicz has done!
Ethan Siegel in Forbes discusses how Blachowicz' understanding of Kepler (and Galileo) is, to put it softly, lacking and Derek Lowe at Science also has a few words.

For a while now, Eli has been pushing the idea that science is characterized by coherence, consilience and consensus.

The Red Queen can hold any number of contradictory thoughts in her mind before breakfast but that only leads to confusion and is deadly for understanding. The Wikipedia has a useful description of consilience and why it is necessary for reaching a scientific consensus
evidence from independent, unrelated sources can "converge" to strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus.
Kuhn pointed out that the consensus was very hard to change for good reasons, because a lot of evidence (consilience) had to be gathered and linked (coherence) to establish the consensus. That the consensus could be changed on very rare occasions is irrelevant because the overthrow (e.g. Relativity, QM) extended a strong consensus into new areas where it really had never previously been tested.  New science EXTENDS old science.

The tough part of this is that one has to have a strong understanding of the various threads which establish the consensus to appreciate the consensus and its strength.  Many do not.

They interpret the ability to modify the consensus as meaning that nothing can be certain, and if it cannot be certain then all statements are equally valid.  Einstein and Kuhn are frequently invoked.

For example, some, not to be named, but including philosophers, lawyers, and liberal arts types, claim that it is equally true to say that the Sun revolves around the Earth as vice versa.

As to the Earth and the Sun, if you believe that the Sun revolves around the Earth, you basically don't believe in Newtonian gravity or the laws of motion.  There is some nonsense that should not go unchallenged, but that takes time.

33 comments:

Kevin O'Neill said...

Eli writes: "For example, some, not to be named, but including philosophers, lawyers, and liberal arts types, claim that it is equally true to say that the Sun revolves around the Earth as vice versa."

Some? 1 in 4 Americans gets the answer wrong, but before we gnash our teeth in frustration we should consider that the number is even worse in the EU -- 1 in 3.

I would chalk most of this up to simple ignorance.

See the NSF's Science and Engineering Indicators 2016, Chapter 7, Public Knowledge About Science & Technology

8c7793aa-15b2-11e5-898a-67ca934bd1df said...

It's the lead in the water and formerly the air. Dental amalgam probably has something to do with it as well. Then there was Ronnie.

You just can't make this stuff up. These people believe this crap.

Fernando Leanme said...

I find the emphasis on consensus somewhat repulsive. Maybe it's my background: I was educated in a communist system where a rigid Marxist framework, including pseudoscience, was imposed by brute force. I was aware of the weaknesses in the official story line because my dad kept giving me books which contradicted the garbage I was fed at school. So, I ran away when I was 14 and ended up in a refugee camp for minors who had escaped the workers' paradise. Thus I'm engineered to challenge authority and despise the status quo.

I like to bring up cases where an individual fought discrimination and the existing dogma to achieve a breakthrough... and because it's such a neat story, I offer you the following on Cecilia Payne-Gaposchkin:

https://www.aps.org/publications/apsnews/201501/physicshistory.cfm

metzomagic said...

Well, at least we now know one reason why Fernando denies the scientific consensus on AGW.

Bryson said...

One problem with the views criticized here is a bit subtler (tricky, even). The philosophical tradition has demanded truth as a fixed, final result and the only criterion for full epistemic success, with anything short of that being just plain false. It's clear that science doesn't work this way (hence the popularity of talk about 'approximate truth' -- an oxymoron, as Larry Laudan pointed out long ago: what's only approximately true is just not true). What's worse, if we haven't got the right language, including sortals and predicates that enable us to pick out and describe real things the way they really are, we can't even formulate sentences that are candidates for truth, strictly speaking. Of course all this brutally tramples our usual usage and understanding.

To me at least, a pragmatic approach seems the best alternative here: we have well-justified confidence in the reliability of many theories and models, applied in a wide range of circumstances. Kepler's model of planetary orbits was a big step up by this criterion, Newton's was another big step (big enough to successfully guide navigation, now that we have the tech to test that) and Einstein's yet another...

'Truth' becomes a kind of limit concept-- a true account would be perfectly reliable in all cases, which is a difficult standard to test for, and (what's more important) not the basis for our current confidence in any present-day theory or model. But our confidence is still justified: the scientific theories and models we've arrived at allow us to describe, predict, reason about and explain what happens in our world incomparably better than natural languages and common sense ever could...

JohnMashey said...

1) My favorite example of a (developing) consilience building a hypothesis into a consensus theory, except for the usual few holdouts: The Early Anthropocene Hypothesis: An Update

2) On the other hand, what's this nonsense about the Sun revolving around the Earth ... when one can have a flat-Earth?

Allan Margolin tweeted this (good) cartoon:

I retweeted with note:
And here is their favorite "flat-Earth map", via Sen Inhofe
MedievalDeception 2015: Inhofe Drags Senate Back To Dark Ages

Within 2 hours I got a tweet from FlatEarthCity:
"@JohnMashey #earth is flat , stopped being scammed by #scientism"
accompanied by video from 21-mile high balloon "proving" it.

@FlatEarthCity has 2200 followers and is a cornucopia of ... something.

Now, of course, being an old bunny here, the first name that crossed my mind was Poe ... but if so, very dedicated: 47K Tweets since April 2016

EliRabett said...


Bryson, indeed there is an urgent need for language reform to deal with such problems, although not going to happen. IEHO and not entirely to be flip, this was well described by John Bell in his essays on the speakable and unspeakable in quantum mechanics. Our language cannot handle such issues, any language.

You also appear to be describing an existential crisis for philosophy which inevitably will drive the field to denialism of everything.

Aaron said...

"Sun revolves around Earth" is the mild form. I once had a girlfriend who thought the Universe revolved around her.

The hard thing is when a lawyer (math major) with a JD cannot tell the most basic science from fiction. Then "truth" becomes whatever his client wants it to be, which is not so different from my old girlfriend.

Thus, he must rely on his advisors on a wide range of science topics where there are differing accounts. However, many of his advisors/friends have liberal arts backgrounds without much science. (Now retired, he was one of the best construction lawyers in the country, and he got rich.) The point is that understanding reality is not required for much of modern life - what is required is understanding how people think.

Somebody has to understand reality, but they are not likely the ones that get rich.

Jan Galkowski said...

I don't know why these kinds of debates roll on. Sure, I understand speculative debate about the first seconds after the Big Bang, or what preceded Moment Zero, or the kinds of things which String Theory tries to answer (although not String Theory itself, which I feel is a lost cause).

But that this physics, say, allows us collectively to do the engineering it takes to build semiconductors says something quite fundamental about its reality. Or land a spacecraft on Mars.

Sure, these are things which, at least in the last case, may not be useful to everyone, but that they can be done indicates there is much more to these abstractions than philosophy.

And, in my opinion, this may be some kind of extension of the debates Milton used to wallow in, captured in Paradise Lost and the like, and that the preacher offers in Sagan's Contact. Sure, I dig the stuff about having a "frenetic culture".

It seems that cultural history has lacked a serious thread of the pertinence of Mathematics, Science, and Technology. Maybe that happens out of ignorance, or maybe it happens because this is consistent with one world view that these are the tools of Evil White Male Europeans. I dunno.

I do know of one case in which Europe collectively gets little credit, and I think this ought someday to be rectified. No one seems to properly understand the huge advance Calculus brought to society and the world, and I think that's an invention which really should be lauded. I mean, I'm just sayin' ...

Susan Anderson said...

Personally, I like it without all the complicated language, like this from/about Richard Feynman.
https://philosophynow.org/issues/114/Richard_Feynmans_Philosophy_of_Science

"science is a lived activity and it has an inexpressible aspect"

"science is neither its content nor its form"

"Feynman describes judgement in science as the skill to “pass on the accumulated wisdom, plus the wisdom that it might not be wisdom ... to teach both to accept and reject the past with a kind of balance that takes considerable skill. Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation”

"What Feynman calls ‘wisdom’ I would call ‘tacit understanding’. Under a conservative view it’s hard to accept that science is not its methodology or the knowledge it generates. These are certainly the by-products of doing science, but for Feynman they are not science itself. Science is not merely its form, method, past exemplars, or the beliefs and knowledge it generates, for these change when great discoveries are made."

Ambulator said...

You can assume the Sun goes around the Earth and get the same answers, but the math is a lot hairier. You're usually much better off using an inertial reference frame, but it's not required.
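A minimal numerical sketch of the point, using an idealized circular 1 AU orbit rather than real ephemeris data: the Sun's geocentric position is just the negation of the Earth's heliocentric one, so either frame describes the same geometry; the "hairier math" only shows up in the dynamics of the non-inertial frame, not in the kinematics.

```python
import numpy as np

# Toy model: a circular 1 AU orbit sampled daily for one year.
t = np.linspace(0.0, 1.0, 365)       # time in years
theta = 2.0 * np.pi * t              # orbital phase
earth_helio = np.column_stack([np.cos(theta), np.sin(theta)])  # AU, Sun at origin
sun_geo = -earth_helio               # the Sun as seen from Earth

# The Earth-Sun separation is identical in either description; writing
# equations of motion in the accelerating (geocentric) frame is what
# costs you the extra fictitious-force terms.
r_helio = np.linalg.norm(earth_helio, axis=1)
r_geo = np.linalg.norm(sun_geo, axis=1)
```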

Jan Galkowski said...
This comment has been removed by the author.
Jan Galkowski said...

+Ambulator It is required if you want to do something like track all the Earth-orbit-crossing asteroids using any stack of servers you can afford. Believe it or not, parsimony actually has physically tangible benefits, and much of the field of Computer Science is devoted to exploring those.

Bernard J. said...

What Ambulator said, especially about inertia.

Speaking of, and tautologically, denialism must be especially dense, because it displays a very significant inertia...

Bernard J. said...

And now I refresh and see that Jan also preempted...

That's the thing about RR - you stand in front of a burrow long enough and there's sure to be a wise rabbit or two who point to the elephants.


Fernando, talk to someone about your childhood experiences. They're burdening you with confabulations of thinking, which are in turn interfering with your potential capacity for objective and impartial analysis.

Bryson said...

Eli, I'm happy to say that I don't see philosophy going down that particular rabbit-hole. Not to say that some won't-- exploring all the space of ideas (even silly ones) is part of the business model-- but there is a substantial core (a majority, at least in my circles) of philosophers who recognize that science has provided a much better account of the world we live in than any we can express in a natural language. There are tricky bits involved in fitting this together into a coherent position (but that's catnip for us; despair and denial have few supporters). Looking seriously at science, how it's done and where it could be going is a big part of what's underway in our business. For me, reliable observation (where instruments reach far beyond the limitations of our senses) and reliable inference are the central themes. The hardest challenge is to bring normativity, in the form of rules and values, to the party, science being a purely descriptive enterprise.

jb said...

Non-inertial reference frames are not necessarily a Bad Thing. All climate, weather, and ocean models are posed in the non-inertial reference frame of a rotating Earth. Carrying a Coriolis force is much easier than working in a frame fixed in space with a rotating planet. Truth, such as it is, lies not in whether the Earth orbits the Sun or vice versa, but in the ideas of reference frames and the transformation properties of the laws of physics.

Victor Venema said...

Eli: "For a while now, Eli has been pushing the idea that science is characterized by coherence, consilience and consensus. The Red Queen can hold any number of contradictory thoughts in her mind before breakfast but that only leads to confusion and is deadly for understanding."

Susan Anderson quoted: "Feynman describes judgement in science as the skill to “pass on the accumulated wisdom, plus the wisdom that it might not be wisdom ... to teach both to accept and reject the past with a kind of balance that takes considerable skill. Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation.

There is nothing wrong with having contradictory thoughts if you do it right. You can (and better do) understand the current consensus and also have a feeling for what would have to happen for certain aspects of it to be wrong.

I would hesitate to formulate this in terms of probabilities. The probability of unknown unknowns is unknown. But everyone does make their own subjective assessment of how likely it is for unknown unknowns to turn up and topple a part of the current understanding.

Jan Galkowski said...

@Victor Venema,

Regarding "unknown unknowns": These are unobserved out-of-sample phenomena. Contrary to the popular imagination, it's possible to get a handle on these using the sample in hand via resampling techniques (e.g., bootstrapping) or cross-validation. That's because if the present sample is worth anything at all, it should have observations near to, if not from, the space of the "unknown unknowns". It is just implausible to think that the space of the "unknown unknowns" is so qualitatively different from the sample in hand that the present sample can say nothing about it.
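As a concrete sketch of what the bootstrap buys you (synthetic data, not any particular climate series): resampling the data in hand with replacement gives an uncertainty estimate for a statistic without distributional assumptions, though the estimate is driven entirely by the sample itself.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=200)   # the sample "in hand"

# Nonparametric bootstrap: resample with replacement, recompute the
# statistic each time, and read the uncertainty off the replicates.
n_boot = 2000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
```

Whether such an interval says anything about genuinely out-of-sample structure is, of course, the contested question.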

Victor Venema said...

Jan Galkowski, that is just noise.

How large is your bootstrapped probability that the tropospheric temperature trend is twice as large as currently thought because the retrievals do not take into account unknown physical effects X, Y, and Z?

How large was your bootstrapped probability that the climate sensitivity from energy balance models is twice as large as people estimated at the time before you knew that they were using a biased sample of the observations, that not all forcings are created equally and that the climate sensitivity is not constant?

Jan Galkowski said...

@Victor Venema

There are no such things as outliers. Extreme observations can be informative about these regions. It's not like you would calculate a "sufficient statistic" on these and estimate what the "true" value of climate sensitivity was.

In short, one could take the posterior densities from an estimate like that by Schmittner, Urban, et al and take the tails very seriously.

snarkrates said...

Jan and Victor,
Whether or not a sample contains sufficient information to extrapolate depends on the parent distribution. If the sample is unimodal, it is possible to infer characteristics of the parent distribution. If it is multimodal, it depends on whether there are samples from all modes. However, even for a unimodal distribution, there may be many different forms that fit the data almost equally well but result in very different risk calculus.

Moreover, contrary to what Jan says, there are outliers that can actually mislead. Bootstrapping is a powerful tool, but it works a lot better for interpolation than for extrapolation.

Victor Venema said...

What you think your distribution is depends on your understanding of the problem. The world is complicated, especially outside of the lab. A new understanding can lead to a new distribution.

There was a time when the UAH tropospheric temperatures were still cooling because they had not taken the drift of the satellite into account. Their error bars naturally did not take the uncertainty due to the drift into account, because they had not realised that this was a problem. In such a case the new data with drift correction can easily be outside of the original erroneous uncertainties. Bootstrapping the original dataset does not help you understand the uncertainty due to the drift.

Jan Galkowski said...

snarkrates, Victor,

I mentioned bootstrapping as a suggestion, but at least one form of bootstrapping is provably equivalent to cross-validation. (The 0.632 bootstrap.) Accordingly, as is standard practice in machine learning, both CV and this bootstrapping can be used to "train" for out-of-sample results.
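A sketch of the .632 idea on toy data (ordinary least squares on a synthetic linear relationship, not any specific dataset): the estimator blends the optimistic resubstitution error with the pessimistic out-of-bag error.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 80)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 80)     # noisy linear data

def fit_predict(xtr, ytr, xte):
    # The "model" here is just an ordinary least-squares line.
    slope, intercept = np.polyfit(xtr, ytr, 1)
    return slope * xte + intercept

# Resubstitution (training) error: optimistic.
err_train = np.mean((y - fit_predict(x, y, x)) ** 2)

# Out-of-bag error: average test error on the points left out of each
# bootstrap resample; pessimistic.
oob_errs = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)        # bootstrap indices
    oob = np.setdiff1d(np.arange(x.size), idx)   # points not drawn
    if oob.size:
        pred = fit_predict(x[idx], y[idx], x[oob])
        oob_errs.append(np.mean((y[oob] - pred) ** 2))
err_oob = np.mean(oob_errs)

# The .632 estimator of out-of-sample error blends the two.
err_632 = 0.368 * err_train + 0.632 * err_oob
```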

As far as outliers go, it depends upon your point of view. To a frequentist, the data are random, and are judged as being consistent or inconsistent with a model. The viewpoint is captured by the title of one statistical textbook called Random Data. But to a Bayesian, like myself, the data are given and forever fixed. A datum might have a low likelihood with respect to a particular model, but, from my perspective a datum can never be "wrong", merely atypical, or may be from a measurement process which imperfectly samples the phenomenon under consideration. That could mean bad luck, or it could mean the models under consideration don't explain it. Or it could be that the likelihood does not capture the sampling process correctly, as all likelihoods should.

The possibility of multimodal distributions is one of the primary reasons I am a Bayesian: actual datasets are commonly multimodal, like it or not, and realistic models of physical and other processes often have multimodal likelihood functions as well.

Kevin O'Neill said...

Jan writes re: unknown unknowns:"These are unobserved out-of-sample phenomena."

This is only a partial answer. Unknown unknowns can also be observed data that is ascribed to the wrong cause.

Jan Galkowski said...

@Kevin O'Neill,

I'm not sure what the term cause means in this situation, and causal analysis, while no doubt having its fans, also has its limitations.

In this case, how do "causes" produce data? Data are only produced if there is a measurement of some phenomenon. And in order to have some expectation (not in the statistical sense) or explanation of the data such a measurement would produce, there is a need of a (physical) model of what would have happened.

If, by "observed data that is ascribed to the wrong cause", along these lines, you mean model specification error, okay, but that's a thorny thicket to wade into. Not only are such errors the apparent ones, like improper functional forms, but they can happen because there are too many features and phenomena being used to explain something which has a simpler explanation plus random variation (overfitting), and, even if they occur, they may not be detectable. In the latter case, if an "erroneous model", presumably derived from your supposition of there being "erroneous causes", produces predictions which, given a set of good measurements, are in every way consistent in their accuracy-to-predict with another model, potentially derived from "proper causes", then the two are indistinguishable. In that case, the model which has the lowest out-of-sample error (judged, say, by cross-validation) and AICc across such tests wins (see Burnham and Anderson, Model Selection and Multimodel Inference). At least it wins until more good data are in hand.
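The AICc tiebreak can be sketched with hypothetical numbers (not from any real analysis): two models of the same data fit almost equally well, but the small-sample penalty decides against the one with more parameters.

```python
import math

def aicc(n, rss, k):
    # AICc for a Gaussian error model: n observations, residual sum of
    # squares rss, k estimated parameters; the final term is the
    # small-sample correction.
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical comparison: the complex model fits slightly better
# (rss 9.5 vs 10.0) but spends five extra parameters doing it.
score_simple = aicc(n=40, rss=10.0, k=3)
score_complex = aicc(n=40, rss=9.5, k=8)
# Lower AICc wins, so here the simpler model is preferred.
```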

But I somehow think this isn't what you meant, and you might be barking up the kind of "cause" which has been displaced by devices like Granger causality (see also) or convergent cross-mapping.

Incidentally, it does not appear to be a well-known result, but the old denier rubbish about warming causing CO2 increases rather than the other way 'round was dispatched by the (same) team of van Nes, Scheffer, Brovkin, Lenton, Ye, Deyle, and Sugihara.

You may be interested in Sugihara, Perretti, and Munch commenting on the "true model myth" in another context.

Jan Galkowski said...

Coincidentally, I ran across a recent paper which addresses some of this. It is an example of a long-held belief of mine that while sometimes scientific fields explicitly borrow from one another, often they independently discover related things. That's unfortunate, because people could save time and effort. I have linked the paper here, putting links to the PNAS ones below it.

The significance of Berner, et al is that it relies on purely empirical perturbation constructs for its innovative approach. This is, to my mind, the latest in a series of proposals for dealing with complex systems in equation-free or mechanistically-free manners, even if Berner, et al only do this partly.

I've mentioned the Ye, Deyle, Sugihara, and Perretti work above, but I did not mention approximate Bayesian computation which, to first order, is about calculating posterior densities or their modes without using an explicit likelihood function.

Kevin O'Neill said...

Jan G. writes: "In the latter case, if an "erroneous model", presumably derived from your supposition of there being "erroneous causes", produces predictions which, given a set of good measurements, are in every way consistent in their accuracy-to-predict with another model, potentially derived from "proper causes", then the two are indistinguishable."

They are indistinguishable to a statistician and/or a machine learning program crunching equations, but this is just 'curve fitting' and does nothing to further our understanding. There are an infinite number of spurious correlations to any given set of data. Just as there are an infinite number of mathematical functions from which to choose.

Machine learning *can* help further our understanding of complex processes *if* it is used by those that understand what is physically plausible and what is not.

I use 'cause' in the very basic sense; i.e., raising a metal bar's temperature causes it to expand (assuming the CTE is positive). We can measure its length over the course of a working day and perhaps find that its change in length is almost perfectly correlated with the passage of time. Of course this would only be possible if the temperature of the bar is linearly increasing during the working day, but someone unfamiliar with thermodynamics could mistake the spurious correlation for the true causal effect.
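That thought experiment is easy to simulate (hypothetical numbers: a steel-like bar with an assumed CTE of 12e-6 per kelvin and a linear temperature ramp over a working day): length tracks clock time perfectly, but only because temperature does.

```python
import numpy as np

alpha = 12e-6                    # assumed CTE, per kelvin (steel-like)
L0 = 1.0                         # length at the start of the day, metres

hours = np.linspace(0.0, 8.0, 9)             # one working day
temp = 20.0 + 1.5 * hours                    # temperature ramps linearly
length = L0 * (1.0 + alpha * (temp - 20.0))

# Length correlates perfectly with clock time AND with temperature,
# because temperature itself is a linear function of time here.
r_time = np.corrcoef(hours, length)[0, 1]
r_temp = np.corrcoef(temp, length)[0, 1]

# Hold temperature constant and the bar stops "expanding with time":
# the time correlation was spurious, the thermal one causal.
temp_flat = np.full_like(hours, 20.0)
length_flat = L0 * (1.0 + alpha * (temp_flat - 20.0))
```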

Jan Galkowski said...

@Kevin O'Neill writes, in part: "Of course this would only be possible if the temperature of the bar is linearly increasing during the working day, but someone unfamiliar with thermodynamics could mistake the spurious correlation for the true causal effect."

They would not make that mistake if they were using convergent cross-mapping on the time series: See https://www.youtube.com/watch?v=NrFdIz-D2yM and an adaptation for short series here. Sugihara and colleagues refer to this as dynamic causation, qualifying the meaning of ``causation'' similar to how it was qualified in the case of Granger causality.

Jan Galkowski said...

It's an open question whether or not "time" can ever be a cause of anything, no matter how the term cause is interpreted.

EliRabett said...

The second law of thermodynamics disagrees. Of course there is always Loschmidt's paradox.

Jan Galkowski said...

@EliRabett,

Yeah, but does The Second really make time a cause? I thought that The Second, even if very much a physical law, was a consequence of random walks of physical observables, and the consequential increase in the number of realized microstates.

EliRabett said...

Jan,

The second law is often called time's arrow because without it all physical laws are reversible. There is a considerable philosophical literature on this. Thus any process which increases entropy has time's arrow as a basic cause; otherwise, as somebunny put it, it would appear natural for an egg to unsplatter.

Willard and Bryson might have something more to say on this but Google ain't bad.