Subjective probabilities: point/counterpoint

• Adam Elga:  “Subjective Probabilities Should Be Sharp” — at least for rational agents, who are vulnerable to a kind of Dutch Book attack if they insist that there are observable hypotheses whose probability cannot be specified as a real number.
• Cosma Shalizi:  “On the certainty of the Bayesian Fortune-Teller” — People shouldn’t call themselves Bayesians unless they’re committed to the view that all observable hypotheses have sharp probabilities.  Even if they present their views in some hierarchical way (“the probability that the probability is p is f(p)”), you can obtain a sharp expected value by integrating over the distribution.  If, on the other hand, you reject this view, you are not really a Bayesian, and you are probably vulnerable to Dutch Book as in Elga; Shalizi is at ease with both of these outcomes.

The Challenger disaster was not caused by Russian roulette

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize- winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

If this is really what Feynman said, wasn’t he wrong?  In Russian roulette, you know there’s one bullet in the gun.  The chance of a catastrophe is just one in six the first time you put the gun to your head; but if you survive the first try, you know the round is in one of the remaining five chambers and the chance of death next time you pull the trigger climbs to 20%.  The longer you play, the more likely disaster becomes.

But what if you don’t know how many chambers are loaded?  Suppose you play “Bayes Roulette,” in which the number of bullets is equally likely to be anywhere from 1 to 6.  Then the chance of survival on the first try is (5/6) if there’s 1 bullet in the cylinder, (4/6) if 2 bullets, and so on, for a total of

(1/6)(5/6) + (1/6)(4/6) + … + (1/6)(0/6) = 5/12

which is about 41%.  Pretty bad.  But let’s say you pull the trigger once and live.  Now by Bayes’ theorem, the chance that there’s 1 bullet in the cylinder is

Pr(1 bullet in cylinder and I survived the first try) / Pr(I survived the first try)

or

(5/36)/(5/12) = 1/3.

Similarly, the chance that there are 5 bullets in the cylinder is

(1/36)/(5/12) = 1/15.

And the chance that there were 6 bullets in the cylinder is 0, because if there had been, well, you would be a former Bayesian.

All in all, your chance of surviving the next shot is

(5/15)*(4/5) + (4/15)*(3/5) + (3/15)*(2/5) + (2/15)*(1/5) + (1/15)*0 = 8/15.

In other words, once you survive the first try, you’re more likely, not less, to survive the next one, because you’ve increased the odds that the gun is mostly empty.
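The arithmetic above can be checked exactly; here is a quick sketch using Python’s fractions module, keeping the post’s assumption that the cylinder is not re-spun between pulls:

```python
from fractions import Fraction

# Prior: the number of bullets k is equally likely to be 1 through 6.
prior = {k: Fraction(1, 6) for k in range(1, 7)}

# P(survive first pull | k bullets) = (6 - k)/6.
p_first = sum(prior[k] * Fraction(6 - k, 6) for k in prior)

# Bayes' theorem: posterior over k given that you survived.
posterior = {k: prior[k] * Fraction(6 - k, 6) / p_first for k in prior}

# No re-spin: the k bullets now sit among 5 chambers, so
# P(survive second pull | k bullets, survived first) = (5 - k)/5.
p_second = sum(posterior[k] * Fraction(max(5 - k, 0), 5) for k in posterior)

print(p_first, posterior[1], posterior[5], p_second)
# 5/12 1/3 1/15 8/15
```

Exact rational arithmetic reproduces every number in the post: 5/12 to survive the first pull, posterior 1/3 on one bullet, 1/15 on five, and 8/15 to survive the second.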

Or suppose the gun is either fully loaded or empty, but you don’t know which.  The first time you pull the trigger, you have no idea what your odds of death are.  But the second time, you know you’re completely safe.

I think the space shuttle is a lot more like Bayes Roulette than Russian roulette.  You don’t know how likely an O-ring failure is to cause a crash, just as you don’t know how many bullets are in the gun.  And if the O-rings fail now and then, with no adverse consequences, you are in principle perfectly justified in worrying less about O-rings.  If you shoot yourself four times and no bullet comes out, you ought to be getting more confident the gun is empty.
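The “four shots and still alive” intuition can be sketched the same way. The assumptions here are mine, not the post’s: a prior that allows an empty gun (0 through 6 bullets, equally likely) and a re-spin before every pull, so each shot is independent:

```python
from fractions import Fraction

# Assumed prior: 0 through 6 bullets, all equally likely.  With a re-spin
# before every pull, each shot independently survives with probability
# (6 - k)/6 when k of the 6 chambers are loaded.
posterior = {k: Fraction(1, 7) for k in range(7)}

for _ in range(4):  # survive four pulls in a row
    total = sum(p * Fraction(6 - k, 6) for k, p in posterior.items())
    posterior = {k: p * Fraction(6 - k, 6) / total for k, p in posterior.items()}

print(posterior[0])  # P(gun is empty) rises from 1/7 to 1296/2275, about 0.57
```

Four harmless trigger pulls push the probability of an empty gun from about 14% to about 57%, which is the shuttle moral in miniature.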

Prosecutor’s fallacy — now with less fallaciousness!

I gave a talk at East HS yesterday about “not getting fooled by probability,” in which I talked about the Diana Sylvester murder case, discussed previously on the blog as an example of the prosecutor’s fallacy.  While getting ready for the talk I came across this post about the case by Penn State law professor David Kaye, which explains how, in this case, the calculation proposed by the defense was wrong in its own way.

Here’s the setup.  You’ve got DNA found at the scene of the crime, with six intact markers, enough that the chance of a random person matching the DNA is 1 in 1.1 million.  You’ve got a database of 300,000 convicted violent criminals whose DNA you have on file.  Out of these 300,000, you find one guy — otherwise unconnected with the crime — who matches on all six markers.  This guy gets arrested and goes to trial.  What should you tell the jury about the probability that the defendant is guilty?
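For scale, here’s the back-of-the-envelope number that makes the question hard (my calculation, not the court’s, and Kaye’s post explains why even this needs care): a trawl through 300,000 profiles will hit somebody by coincidence surprisingly often.

```python
# Chance a random, unrelated person matches the crime-scene profile.
p_match = 1 / 1.1e6

# Size of the offender database that was searched.
database = 300_000

# If nobody in the database is the true source, the chance the search
# still turns up at least one coincidental match:
p_coincidental = 1 - (1 - p_match) ** database
print(round(p_coincidental, 3))  # about 0.24
```

So even if the database contained only innocent people, searches like this one would produce a match roughly one time in four, which is why “1 in 1.1 million” is not the number to hand the jury.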

Bayes, Sober, dice, intelligent design

I’m teaching Bayes’ theorem this week in my discrete math course, and that reminds me of an interesting puzzle related to the “argument by design” for God’s existence. The argument goes something like this: the probability that the universe would, by pure chance, have the physical constants “fine-tuned” in such a way as to allow intelligent life is spectacularly small. The probability that God would create the universe in this way, though, seems pretty high. So, according to Bayes, whatever prior degree of belief we might have in the existence of God should be much amplified by the fact that the universe is so hospitable to human life.

Objection to this argument: if the physical constants of the universe weren’t fine-tuned to permit our existence, we wouldn’t be here to notice! So the observation that the constants are fine-tuned carries no information, and shouldn’t be allowed to affect our beliefs.

Objection to the objection: Then suppose you were blindfolded in front of a firing squad, you hear twenty shots ring out, and you find yourself alive and unharmed. Quite naturally, you’re drawn to the conclusion that the firing squad must have missed you on purpose. Now a philosopher wanders by and objects: “But if you’d been killed, you wouldn’t be here to make that observation, so the fact that you survived carries no information and shouldn’t affect your beliefs about the intentions of the firing squad!”

At this point your confidence in philosophers would be shaken.

Elliot Sober handles this version of the argument by design, along with many others, and their corresponding objections and counter-objections, in a very thorough and clearly-written paper (.pdf file). So rather than try to unravel this knot in a blog post, I’ll give you one more puzzle.
Suppose you roll a die 20 times and get

6-4-1-5-1-2-1-3-3-1-6-2-4-1-5-1-3-2-4-5

A person sitting next to you now pipes up and says, “Well, there you have it, very strong evidence of the existence of God.”

You: “How so?”

Person: “Any God I can conceive of would certainly have arranged for those dice to fall 6-4-1-5-1-2-1-3-3-1-6-2-4-1-5-1-3-2-4-5. So the probability of that outcome, conditional on God’s existence, is 1, while the probability conditional on God’s nonexistence is 6^(-20). So you and I both have to drastically increase our degree of belief that God exists.”

How similar is this to the argument by design for God’s existence? To the firing squad argument that the shooters must have missed on purpose? Which of the three arguments are right and which are wrong?

Human(itie)s, aliens, and autism: Ian Hacking and Elliot Sober at Fluno Center tomorrow

Humanities at Wisconsin are said to be underfunded and demoralized, but you’d never know it from the excellent “What is Human?” symposium the Center for Humanities is holding tomorrow at Fluno Center. At 1:45, Ian Hacking will speak on “Humans, aliens, and autism” — perhaps he’ll expand on some of the material in this 2006 essay from the LRB. Hacking’s two books on the development of probability theory, The Emergence of Probability and The Taming of Chance, are probably the best I’ve read on the history of mathematics; to stay bound to the theme of this post, he is one of the few people writing really humanely about mathematical practice. (The late Thomas Tymoczko was another.)

Speaking at 11:15 is our own Elliot Sober, who is that most powerful of creatures, a philosopher who knows Bayes’ Theorem. (See also: Adam Elga, K. Anthony Appiah.) Sober’s title is TBA, but he may well talk about (or, more likely, against) the “design argument” against Darwinism. (He’s definitely giving a talk on that subject at 7:30 this Thursday night, in 1315 Chemistry.) A very vulgar version of the design argument looks like this. The probability that intelligent life would arise, if there were no divine guidance, is nonzero but spectacularly small. The probability that intelligent life would arise, if a divine being created it, is 1. Now Bayes says you should think that divine origin of human life is very likely, even if it was very unlikely in your prior. Sober’s new book, Evidence and Evolution, takes on the design argument and its many more sophisticated variants, and more generally tries to work out what we mean by “evidence” about the origins of life. Bayes flies everywhere.

Sober is also credited with the following joke:

A boy is about to go on his first date, and is nervous about what to talk about. He asks his father for advice. The father replies: “My son, there are three subjects that always work. These are food, family, and philosophy.”

The boy picks up his date and they go to a soda fountain. Ice cream sodas in front of them, they stare at each other for a long time, as the boy’s nervousness builds. He remembers his father’s advice, and chooses the first topic. He asks the girl: “Do you like potato pancakes?” She says “No,” and the silence returns.

After a few more uncomfortable minutes, the boy thinks of his father’s suggestion and turns to the second item on the list. He asks, “Do you have a brother?” Again, the girl says “No” and there is silence once again.

The boy then plays his last card. He thinks of his father’s advice and asks the girl the following question: “If you had a brother, would he like potato pancakes?”

(This philosophy joke, along with many others, appears here.)

A better Bayesian puzzle

Actually, the example of Bayesian reasoning in the post below doesn’t really give the flavor of Tenenbaum’s work. Here’s something a little closer. Abe and Becky each roll a die 12 times. Abe’s rolls come up

1,2,3,4,5,6,1,2,3,4,5,6.

Becky gets

2,3,3,1,6,2,5,2,4,4,1,6.

Both of these results occur with probability (1/6)^12, or about one in two billion. So why is it that Abe finds his result startling, while Becky doesn’t, when the two outcomes are equally unlikely?

Extra credit: what does this have to do with arguments about intelligent design?

(If you like the extra credit question, you might want to read my colleague Elliot Sober’s papers on the topic, or even buy his book!)
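One way to make Abe’s surprise precise, under a toy alternative hypothesis of my own devising: compare how likely each sequence is under a fair die versus under a “six rolls, then repeat them” model.

```python
from fractions import Fraction

abe = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
becky = [2, 3, 3, 1, 6, 2, 5, 2, 4, 4, 1, 6]

# Fair-die hypothesis: every 12-roll sequence has probability (1/6)^12.
p_fair = Fraction(1, 6) ** 12

# Toy "pattern" hypothesis: some 6-roll block, repeated twice, with all
# 6^6 possible blocks equally likely.
def p_pattern(rolls):
    return Fraction(1, 6 ** 6) if rolls[:6] == rolls[6:] else Fraction(0)

# Likelihood ratios, pattern versus fair:
print(p_pattern(abe) / p_fair)    # 46656: Abe's rolls favor "pattern" hugely
print(p_pattern(becky) / p_fair)  # 0: Becky's rolls don't support it at all
```

The two sequences have identical probability under the fair-die model; what differs is that Abe’s sequence is also predicted by a simple rival hypothesis, and Becky’s isn’t. Whether the design argument gets to help itself to such a rival hypothesis is exactly what’s at issue.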


Get Bayesian with Josh Tenenbaum, May 8

Our friend Josh Tenenbaum, a psychologist at M.I.T., is in town to plenarize this Thursday at 3pm at the First Annual UW Cognitive Science Conference. He hasn’t posted a title, but he’ll be talking about his research on Bayesian models of cognition. What makes a model “Bayesian” is a close attention to a priori probabilities, usually called priors.

For instance: suppose you’re an infant, trying to figure out how language works. You notice that when you wake up, your father points out the window and says “Hello, sun!” Pretty soon you figure out that the bright light he’s pointing at is called “sun.” But the evidence presented to you is just as consistent with the theory that, when you’ve just woken up, the word “sun” refers to the bright light out the window — but before bedtime it means “the stuffed monkey on the dresser.” How, infant, do you pick the former theory over the latter? Because you have some set of priors which tells you that words are very unlikely to refer to different things at different times of day. You don’t learn this principle from experience — you start with it, which is what makes it a “prior.” According to Chomsky-style linguistics, you are born with lots of priors about language — you know, for instance, that there’s a fixed order in which the subject, object, and verb of a sentence are supposed to come. If you didn’t have all these priors, you wouldn’t have a chance of learning to talk; there’s an infinitude of theories of language, all consistent with the evidence you encounter. Your priors are what allows you to narrow that down to the one that’s vastly more likely than the sometimes-it’s-a-monkey alternatives.
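In miniature, with numbers made up purely for illustration: both theories of “sun” fit the morning data perfectly, so the likelihoods are identical and the posterior is just the prior, which is why the prior has to do all the work.

```python
# Made-up prior: the "one word, one meaning" theory is vastly more
# plausible before any data arrives.
prior = {"sun-always": 0.999, "sun-by-day, monkey-at-night": 0.001}

# Both theories predict the observed data ("sun" said while pointing at
# the sun, in the morning) with certainty, so the likelihoods are equal.
likelihood = {h: 1.0 for h in prior}

# Bayes' theorem: posterior is proportional to prior times likelihood.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior["sun-always"])  # 0.999: the data didn't move the needle
```

No amount of morning evidence distinguishes the two theories; only the infant’s built-in prior rules out the stuffed monkey.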

I don’t think Josh is going to talk about language acquisition, but he is going to talk about the ways that lots of interesting cognitive processes can be described in Bayesian terms, and how to check empirically that these Bayesian descriptions are accurate. Recommended to anyone who likes mathy thinking about thinking.