Now Eli has on occasion been a betting bunny, and he has a long-term wager going with one Blaise Pascal, one that, sooner than the Bunny would enjoy, is going to pay off or not. Still, it has been a good time so far, but it does pay to consult the experts, and Rabett Run has called in a philosopher, a well-known racehorse and an investment company, all going by the name of Kelso, to help cook the argument. And so it goes
Pascal’s wager makes a famous argument for motivated cognition. While there is a good reply that doesn’t address this aspect of the wager, a response that goes straight for the jugular is more illuminating, providing a wider lesson about the relation between beliefs, preferences and rational choice. The wager is also a great example of how short, vivid and easily taught arguments in philosophy can have much broader implications.
Pascal asks non-believers to consider a choice between two options: believing in God and disbelieving. He assumes that belief has a modest net cost if God does not exist, since belief requires at least some sacrifice of time and effort. But disbelief has an infinite net cost if God does exist: the loss of a blissful eternity in heaven. The upshot is clear: at any odds, the expected value of believing will exceed the expected value of disbelief.
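The wager’s arithmetic can be sketched in a few lines (a sketch only: the probability p and the finite cost c are illustrative placeholders, not Pascal’s numbers, and Python’s float("inf") stands in for the infinite reward):

```python
INF = float("inf")  # stand-in for the infinite reward of heaven

def ev_believe(p, c=10):
    # infinite reward if God exists (probability p), modest net cost c otherwise
    return p * INF + (1 - p) * (-c)

def ev_disbelieve(p):
    # infinite loss if God exists, nothing gained or lost otherwise
    return p * (-INF) + (1 - p) * 0

# "At any odds": even a tiny p makes believing come out ahead
for p in (0.5, 0.01, 1e-9):
    assert ev_believe(p) > ev_disbelieve(p)
```

Any nonzero probability, multiplied by an infinite payoff, swamps every finite term, which is exactly why the odds drop out of the argument.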
Pascal understands that we can’t just choose to believe—so the real choice of the wager is between trying to become a believer and not trying. Pascal recommended going through the motions of religious belief, making religious ritual and engagement a regular part of life, in the hope that belief will follow. The first reply takes advantage of the other side of this challenge: we can’t choose once and for all not to believe, either. No matter how committed you may be to your agnosticism or atheism, you just might have a sudden conversion. But adding this possibility to the calculation balances the scales—both alternatives now provide a finite chance of an infinite return: the expected values are equal after all, so Pascal’s argument fails.
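This reply can be put in expected-value terms: once not trying still carries some nonzero chance of a sudden conversion, both options offer a finite chance of the infinite reward (a sketch; the probabilities below are invented placeholders, with Python’s float("inf") standing in for the infinite payoff):

```python
INF = float("inf")

def ev(chance_of_ending_a_believer, p_god=0.01):
    # any nonzero chance of the infinite reward swamps all finite terms
    return p_god * chance_of_ending_a_believer * INF

ev_try = ev(0.9)     # going through the motions makes belief likely
ev_dont = ev(0.001)  # but a sudden conversion remains possible
assert ev_try == ev_dont == INF  # equal expected values: the wager fails
```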
But the argument’s focus on belief in God is a distraction, bringing in a lot of background noise. Many do believe in a God who rewards believers and punishes unbelievers. Others reject the suggestion that a good God, if she exists, would be so intent on primping in the mirror of believers’ faith as to enact such a policy. But this back-and-forth misses the real point. There’s a general puzzle here that has nothing to do with theism. Suppose the required belief were any belief we have no evidence for, such that, if it were true, holding it at the time of your death would be infinitely rewarded. What belief might that be? Here’s a template:
B: If B is true and I believe B at the time of my death, then I will be infinitely rewarded.
Two questions come up for any belief like this:
- Should you try to acquire the belief?
- If you decide to try, how do you go about it?
The reply given above points out that even if you don’t try, there’s no guarantee you won’t come to have the belief anyway. This equalizes the expected values, so there’s no reason to try.
But I prefer a second reply: if beliefs aren’t based on evidence, Pascal’s method for deciding what it’s rational to do collapses, and the argument fails in a more general and illuminating way. This reply has more heft: it targets the legitimacy of motivated beliefs, drawing on Pascal’s own model of rational choice to argue that choosing beliefs using wager-type arguments undermines the rationality of the appeal to expected values.
Rational choice in gambling combines beliefs about the probability of various outcomes given each alternative action with valuations of those outcomes to determine which action to choose. Like other early probability theorists, Pascal could calculate probabilities in games of chance when most professional gamblers couldn’t. But if we choose our beliefs based on whether we think having those beliefs will lead to better outcomes, Pascal’s method becomes an ouroboros: choices like that break the link between the beliefs we adopt and the actual probabilities those beliefs are supposed to track. If our choices aren’t based on reliable judgements of probability, they can’t do the job Pascal’s account of rational choice needs them to do.
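A toy gamble (invented numbers, not anything from Pascal) shows how the machinery works, and how motivated beliefs break it:

```python
# Bet $10 on a fair die showing a six, payout $40 on a win.
def expected_value(p_win, payout=40, stake=10):
    # Pascal-style calculation: beliefs (p_win) combined with valuations
    return p_win * payout - (1 - p_win) * stake

# Evidence-based belief: the calculation correctly flags a losing bet.
assert expected_value(1 / 6) < 0   # about -$1.67 per round

# Motivated belief ("I feel lucky, call it even odds") flips the verdict.
assert expected_value(1 / 2) > 0   # +$15 per round, on paper
```

The calculation runs either way; what fails in the second case is the link between its inputs and the actual ratios of outcomes in like cases.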
It’s the job of beliefs to be true and of probability assignments to reliably reflect ratios of outcomes in like cases. Dodging philosophical worries about ‘truth’ and focusing exclusively on the pragmatics, we can say that to be useful, beliefs need to be a reliable basis for expectations about the consequences of our choices. (Similarly, it’s the business of evaluations of outcomes to reflect their real value to us.) When these conditions are met and we apply Pascal’s method, our choices will be good ones, though of course we can still be unlucky. When the conditions aren’t met (think of professional gamblers fleeced by early probability theorists (probably frequentists - ER), or of someone actively seeking an outcome they later regret) our choices are bad even if we’re lucky and things turn out well. So like motivated cognition in general, denialism is a recipe for bad outcomes, well-earned: flying on a wing and a prayer may sound like fun, but it’s not likely to pay off.
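The fleecing itself can be simulated (a sketch with invented stakes): a gambler who has talked himself into even odds on a fair die keeps taking a $10 bet that pays $40 on a six, a bet whose true expected value is about -$1.67 per round:

```python
import random

random.seed(42)  # make the run repeatable

bankroll = 0.0
rounds = 100_000
for _ in range(rounds):
    # win $40 on a six, lose the $10 stake otherwise
    bankroll += 40 if random.randrange(6) == 5 else -10

# However lucky he felt, the long-run average tracks the true odds
assert bankroll / rounds < 0
```

The law of large numbers does the fleecing: over enough rounds the average return converges on the real expected value, not the motivated one.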
This raises an obvious question: if the rationality (reliability) of beliefs is essential to making good choices, why do so many people reason poorly and have irrational beliefs? It doesn’t require a trip into contentious real-world issues to show this (though those issues are what we’re really after here): as Daniel Kahneman and Amos Tversky showed long ago, people make similar mistakes in very simple cases (a story told elegantly in Kahneman’s Thinking, Fast and Slow). Their suggestion is that our psychology combines a capacity for careful, reliable reasoning with a quicker, more reflexive system that ‘cuts to the chase’ but often gets things wrong. This makes a lot of sense: rationality is a lot of work (consider Kepler’s travails in calculating by hand how well Brahe’s observations fit with the hypothesis of elliptical orbits obeying his three laws). Sense perception (a more basic evolutionary heritage) is quicker and easier—and so is guessing. Even though it can be misleading, sometimes making quick calls is more important than consistently making the right call.
The point is that rationality is not something nature built into us, but a difficult, piecemeal, always-incomplete accomplishment. It demands that we think hard, apply critical reflection and evaluate our reasoning carefully rather than leap to conclusions. These habits don’t come without effort, and even once we’ve learned them, it can be hard to resist jumping to conclusions. Science is built on conclusions that have been tested carefully to ensure they provide a reliable basis for evaluating both new information and the possible consequences of our actions. There’s no guarantee that it’s always right. But any conclusion that survives scientific examination has shown itself to be reliable in a range of applications and circumstances. And (at least for a pragmatist) that’s about the best we can expect.