Actually, the example of Bayesian reasoning in the post below doesn’t really give the flavor of Tenenbaum’s work. Here’s something a little closer. Abe and Becky each roll a die 12 times. Abe’s rolls come up 123456123456, while Becky’s come up 233162524416.
Both of these results occur with probability (1/6)^12, or about one in two billion. So why does Abe find his result startling, while Becky doesn’t, when the two outcomes are equally unlikely?
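A quick sanity check of that arithmetic, using exact rational arithmetic just to avoid floating-point quibbles:

```python
from fractions import Fraction

# Each specific sequence of 12 fair-die rolls has probability (1/6)^12.
p = Fraction(1, 6) ** 12
print(p)      # 1/2176782336
print(1 / p)  # 2176782336, i.e. about one in two billion
```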
Extra credit: what does this have to do with arguments about intelligent design?
(If you like the extra credit question, you might want to read my colleague Elliot Sober’s papers on the topic, or even buy his book!)
Reminds me of DnD:
Player 1: Whoa! I just rolled two 20s in a row! That’s a 1 in 400 chance!
Player 2: Whoa! I just rolled a 7, and then a 12! That’s also a 1 in 400 chance…
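Both players have the arithmetic right: any *specified* ordered pair of d20 results is equally rare. A one-line check:

```python
from fractions import Fraction

# Any specified ordered pair of d20 rolls has probability (1/20)^2,
# whether the pair is "20 then 20" or "7 then 12".
p = Fraction(1, 20) ** 2
print(p)  # 1/400
```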
In Mega Millions, the MegaBall has come up 26 in each of the last three drawings. The odds of the same number occurring three times in a row are less than 1 in 5,000. So naturally, everyone needed to know just how unlikely this was… from me… today.
What is the probability of this event? I just wrote an entry in my blog about Bayesian probability and the temperature of a single configuration in statistical mechanics. And then, following some links from the physicsworld newswire, I ended up here… :)
Why Abe ought to be startled: Abe’s result is actually far more likely than Becky’s, because it is attributable to cheating.
There’s another reasonable sense in which the first pattern is more surprising: the _expected_ number of throws one must wait before the occurrence of Abe’s pattern is 6^12 + 6^6 = 2176828992, where the extra 6^6 term comes from the pattern overlapping itself at offset 6, while it is 6^12 = 2176782336 for a pattern with no self-overlap, such as Becky’s (i.e., on average one waits slightly longer for Abe’s pattern to appear consecutively in an infinite sequence of throws); see
“A Martingale Approach to the Study of Occurrence of Sequence Patterns in Repeated Experiments”, Shuo-Yen Robert Li, The Annals of Probability, Vol. 8, No. 6 (Dec., 1980), pp. 1171-1176
and more specifically Lemma 2.4, which explains how to make this type of computation (in much more general situations).
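As a sketch of that computation (assuming a fair six-sided die, so alphabet size m = 6; the function name is mine), Li’s correlation sum can be coded directly:

```python
# Expected waiting time for a pattern in i.i.d. fair-die rolls, via the
# self-overlap (correlation) sum from Li's martingale argument:
#   E[T] = sum of m^k over all k such that the pattern's length-k prefix
#          equals its length-k suffix, with m the alphabet size.
def expected_wait(pattern, m=6):
    n = len(pattern)
    return sum(m ** k for k in range(1, n + 1)
               if pattern[:k] == pattern[n - k:])

# Abe's pattern overlaps itself at offset 6, so its expected wait is 6^12 + 6^6.
print(expected_wait("123456123456"))  # 2176828992
# Becky's pattern has no self-overlap, so its expected wait is just 6^12.
print(expected_wait("233162524416"))  # 2176782336
```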
Read about Kolmogorov complexity and you’ll get great answers to your questions!
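Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper-bound proxy: highly patterned strings compress well, typical random ones don’t. (The repetition factor and the seed below are arbitrary choices of mine, there only so the compressor’s fixed overhead doesn’t dominate such short strings.)

```python
import random
import zlib

patterned = "123456" * 200  # Abe-style: a short description generates it
random.seed(0)
typical = "".join(random.choice("123456") for _ in range(1200))  # Becky-style

# Compressed length as a rough stand-in for descriptive complexity:
print(len(zlib.compress(patterned.encode())))  # small
print(len(zlib.compress(typical.encode())))    # much larger
```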
Of all the possible outcomes, there is a small set of “significant” ones for most numerically literate humans. Abe got such a “significant” outcome, which is highly unlikely precisely because that set is small. Of course, “significance” is an arbitrary label, but what matters is that it picks out a small set. No idea about ID.
Greg’s answer is of course correct, but I think it wants a little generalizing and expanding on. Abe’s result is more explicable by cheating, supernatural intervention, hallucination, and probably a whole host of other highly improbable but *interesting* causes. And, precisely because 123456123456 (like 233162524416) is highly improbable as a purely random outcome, the posterior probability of one of those unlikely-but-interesting causes might actually be high enough to start taking notice. Which is why we’re startled, and why we ought to be: Abe’s result is much better evidence of something-interesting than Becky’s, because of its relatively high probability given something-interesting, not because of any difference in probability given nothing-interesting.
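A toy version of that posterior calculation, where every prior and likelihood below is a made-up illustration number of mine, not anything from the post:

```python
from fractions import Fraction

# Made-up numbers: one roller in a million cheats, and a cheat produces
# a "neat" pattern like 123456123456 with probability 1/2.
p_cheat = Fraction(1, 10**6)
p_data_given_cheat = Fraction(1, 2)
p_data_given_fair = Fraction(1, 6) ** 12  # same for Abe as for Becky

# Bayes' rule: P(cheat | data).
posterior = (p_cheat * p_data_given_cheat) / (
    p_cheat * p_data_given_cheat + (1 - p_cheat) * p_data_given_fair
)
print(float(posterior))  # about 0.999: the neat pattern makes cheating the best bet
```

For Becky’s sequence the likelihood given cheating would be tiny (cheats don’t aim for 233162524416), so the same calculation leaves the posterior near the prior, which is the asymmetry the comment describes.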
(Connection with intelligent design: this is the kind of thing Dembski et al. are trying to get at when they talk about “specified complexity”. S.C. means high probability given an intelligent cause and low probability otherwise. The trouble is that it isn’t only intelligent agents that tend to produce long patterns of low Kolmogorov complexity, so the “low probability otherwise” clause fails: the ID people are, perhaps deliberately, adopting oversimplified models of what “otherwise” means.)