The coin game

Here is a puzzling example due to Roger White.

There are two coins.  Coin 1 you know is fair.  Coin 2 you know nothing about; it falls heads with some probability p, but you have no information about what p is.

Both coins are flipped by an experimenter in another room, who tells you that the two coins agreed (i.e., both were heads or both were tails).

What do you now know about Pr(Coin 1 landed heads) and Pr(Coin 2 landed heads)?

(Note:  as is usual in analytic philosophy, whether or not this is puzzling is itself somewhat controversial, but I think it’s puzzling!)

Update: Lots of people seem not to find this at all puzzling, so let me add this. If your answer is “I know nothing about the probability that coin 1 landed heads; it’s some unknown quantity p that agrees with the unknown parameter governing coin 2,” you should ask yourself: is it strange that someone flipped a fair coin in another room and you don’t know what the probability is that it landed heads?

Relevant readings: section 3.1 of the Stanford Encyclopedia of Philosophy article on imprecise probabilities and Joyce’s paper on imprecise credences, pp.13-14.
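If you want to poke at the setup numerically, here is a minimal simulation sketch. It is mine, not part of the original post, and the function name coin_game and the trial count are arbitrary choices. It fixes a bias p for coin 2 that you, the player, are imagined not to know, and tallies how often coin 1 shows heads among the trials where the two coins agree.

import random

def coin_game(p, trials=200_000):
    """Flip a fair coin and a coin of bias p many times; condition on agreement."""
    agree = 0
    heads1_given_agree = 0
    for _ in range(trials):
        c1 = random.random() < 0.5   # coin 1: fair
        c2 = random.random() < p     # coin 2: heads with probability p
        if c1 == c2:
            agree += 1
            heads1_given_agree += c1
    return agree / trials, heads1_given_agree / agree

for p in (0.1, 0.5, 0.9):
    pr_agree, pr_heads1 = coin_game(p)
    print(f"p={p}: Pr(agree) ~ {pr_agree:.3f}, Pr(coin 1 heads | agree) ~ {pr_heads1:.3f}")

For every choice of p the agreement rate comes out near 1/2, while the conditional frequency of heads for coin 1 tracks p; that tension is what the update paragraph above is pointing at.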


10 thoughts on “The coin game”

  1. JSE says:

    Though I ascribe this to White, I think it’s actually quite a bit older, arising in papers from the 1990s by Walley, Seidenfeld, and Wasserman.

  2. alolodudu says:

    I didn’t really get it. Could you maybe explain in more detail what is going on? It seems to me that both probabilities are p and that this is very natural, so I deduced that I have probably misunderstood or miscalculated something.

    Also, I couldn’t find the problem in the link you gave, but I didn’t read the whole document carefully, so maybe I missed it. Can you tell us (me) on which page to look?

  3. Since the coins will agree 50% of the time, why should a single toss change whatever prior we have on $p$?

  4. Consider the limit p → 1 (the unfair coin ALWAYS lands heads). In this case, it is obvious that being told the coins agree tells us WITH CERTAINTY that the fair coin landed heads.

    Conversely, in the limit p → 0, we can be CERTAIN that agreement means the fair coin landed tails.

    If p = 1/2, it’s not hard to see that agreement tells us nothing, so our estimate that the fair coin landed heads is unchanged from its prior value of 1/2.

    So our posterior probability that the fair coin landed heads is a function f(p) which interpolates between f(0)=0 and f(1)=1, with f(1/2)=1/2. A simple calculation using Bayes’ theorem (written out after the comments below) yields f(p)=p. What would be SURPRISING would be if f(p) turned out to be something more complicated than a simple linear interpolation.

  5. Qiaochu Yuan says:

    Re: the update, no, I don’t think that’s strange. You gave me some weird information and I conditioned on it. Conditioning on things changes my subjective probabilities, and conditioning on weird things changes my subjective probabilities in weird ways.

  6. There are two interpretations of the question. One simply asks us to compute the probability of heads given agreement; the answer is then p, which doesn’t seem surprising. The other asks what the experiment tells us about p.

  7. Re: update. Why should it be surprising that the conditional probability of an event differs from its a priori probability?

  8. […] answers to the last question! I think I perhaps put my thumb on the scale too much by naming a variable […]

  9. JSE says:

    Another comment sent to me by a friend over email:

    Guess I am with Laplace. I think the probability of heads for coin 1 is 1/2 in the flip you told us about and in any other flip; this is the only objective probability we’re given by the setup. Before the toss I would assign equal credence to coin 2 coming up heads or tails, since you stipulated I didn’t know anything about p. Even if you had said “coin two falls heads with probability p^2” I would still give equal credence to heads or tails, because you said I truly did not know anything about p. It seems to me that with no information on p it doesn’t even matter that it is a coin you can flip more than once! And of course after learning the result I would still give equal credence to heads or tails for coin 2. No update necessary.

    If coin 1 instead were a coin with a given known probability 1/4 of coming up heads, then I would update: after learning that the tosses agreed, my credence in heads for coin 2 in that particular toss would have decreased.

    In both cases I am not making a mathematical statement; it is just something about how I think, and you should feel free to disagree with me. If you do disagree I hope we can meet and have a betting game based on this sometime (although mysterious unknown real numbers p about which we know we don’t know anything are hard to come by in practice).

    Sure, it seems very easy to misuse POI (the principle of indifference), as is shown by the example of the square in the thing you linked to (in fact in the above example we would run into exactly this problem if we tried to partition the space [0, 1] that the p’s lie in), but maybe this is one of those cases where it works fine.
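For reference, here is the Bayes computation that comments 3 and 4 gesture at, written out; this is my reconstruction of the standard calculation, not the commenters’ own wording. Write $C_1$ for the fair coin and $C_2$ for the coin of unknown bias $p$:

$$\Pr(\mathrm{agree}) = \tfrac{1}{2}\,p + \tfrac{1}{2}\,(1-p) = \tfrac{1}{2}$$

$$f(p) = \Pr(C_1 = H \mid \mathrm{agree}) = \frac{\Pr(C_1 = H,\ C_2 = H)}{\Pr(\mathrm{agree})} = \frac{\tfrac{1}{2}\,p}{\tfrac{1}{2}} = p$$

So the agreement event has probability 1/2 no matter what $p$ is (comment 3’s point), while the conditional probability that the fair coin landed heads is exactly $p$ (comment 4’s $f(p)=p$); by symmetry the same holds for coin 2.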
