- Adam Elga: “Subjective Probabilities Should Be Sharp” — at least for rational agents, who are vulnerable to a kind of Dutch Book attack if they insist that there are observable hypotheses whose probability cannot be specified as a real number.
- Cosma Shalizi: “On the Certainty of the Bayesian Fortune-Teller” — people shouldn’t call themselves Bayesians unless they’re committed to the view that all observable hypotheses have sharp probabilities. Even if they present their views hierarchically (“the probability that the probability is p is f(p)”), a single sharp value can be recovered by integrating over the distribution. On the other hand, if you reject this view, you are not really a Bayesian, and you are probably vulnerable to a Dutch Book as in Elga; Shalizi is at ease with both of these outcomes.

## Subjective probabilities: point/counterpoint

**Tagged:** bayes, decisions, elga, probability, shalizi, statistics

I’m not sure I understand the argument in the left column of page 5 of Elga’s paper. While it’s clear to me that bets A and B are individually rationally optional, I don’t see how that fact allows me to conclude that a perfectly rational agent would ever reject both bets. After all, it’s rational to take the law of excluded middle for granted in these scenarios, so concluding that I would never reject both bets doesn’t depend on my P(H) at all.

Like JC, I am not convinced by Elga’s paper: a rational subject who knows in advance that both bets will be offered, and who is unable to assign a definite probability to whether it will rain tomorrow, will accept both bets, because the pair is a winner regardless of what happens and regardless of what she believes.

Furthermore, even if Elga’s argument is correct (or if there is some other argument that does the job) then he has not refuted the informal arguments for vague degrees of belief with which he introduces his paper. His argument attempts to show that “perfect rationality requires one to have sharp degrees of belief.” I am prepared to accept this: I have no difficulty with the notion that strict rationality is only valid where all relevant propositions have definite probabilities, and is therefore of limited use in the real world.

That introduction is something of a come-on, because when Elga introduces ‘perfect rationality’ in his ‘cautious’ version of the question, he changes the issue, and it would be a mistake to think his alleged conclusion necessarily has any relevance to the issue of uncertainty in real-world estimating.

My reading of the paper is that (section 4) there are two bets, A and B, such that one might reasonably reject an offer of A and then, being offered B, might reasonably reject that too, whereas if one had been offered them as a pair one should rationally accept them.

It is perfectly true that many formulations of imprecise probability yield irrational results in this case, and this is an important observation. But then imprecise probabilities are … imprecise! Boole and Keynes, for example (who seem to have invented imprecise probabilities) deal with this appropriately.

See http://djmarsay.wordpress.com/bibliography/booles-laws-of-thought/ and http://djmarsay.wordpress.com/bibliography/keynes-treatise-on-probability/ . Cheers.

I haven’t had the chance to read Adam’s paper in detail, but I’d be very surprised if he wasn’t right. If you accept Savage’s axioms about what counts as a “rational agent”, then rational agents must be terrified of Dutch Books, and the only way to avoid them is to be a Bayesian agent, which implies a real-numbered value for any observable hypothesis. Those are standard decision-theoretic results. (Though there are wrinkles, as in the Kyburg et al. paper.) At this point, however, the question becomes why one would want to accept those axioms, especially when they lead to such ridiculous conclusions. But I am just going to start repeating myself if I continue in this direction, so I’ll stop.

Shorter me: I don’t think Adam and I disagree about the logic, just what conclusions to draw from it.

(ObSmallWorld: Adam was my brother’s roommate at a neuroscience/cognitive science training program here in Pittsburgh in 1995 or 1996 — at least I presume it’s the same philosophically-inclined Adam Elga.)

I’m on the side of those who find Adam Elga’s argument trivially wrong. Why can’t one have unsharp views about the probability of A and B but an entirely sharp view about the probability of A or B? An unsharpist would say that that holds in his case: we don’t have a precise view about the probability that it will rain in 36 days’ time, but we certainly have a very sharp perception of the probability that either it will rain or it won’t.

If his argument is a counterexample to various rules that have been proposed for how rational people behave in the face of unsharp probabilities, then so much the worse for those rules. To that extent, his paper may be interesting.

I am inclined to believe that the narrow claim Elga calls SHARP is correct within the limits of his axioms, but that would not show that his argument for it is valid.

Elga goes to some lengths to state that the subject knows all about both bets before they are offered and is in no doubt that the second will be offered once the first is: e.g. ‘For rejecting both bets is worse for you, no matter what, than accepting both bets. *And you can see that in advance*. So *no matter what you think about H*, it doesn’t make sense to reject both bets.’ (my emphasis.) Following that, however (starting with the last paragraph on page 4) he analyzes the situation as if the subject decides on the first bet without any knowledge of the terms of the second. It is clear that, if this were the case, a rational agent with any precise estimate of P(H) will accept at least one bet, while an agent attempting to be rational but having no opinion on P(H) may reject both. It is this last case that Elga calls absurd, but it is only absurd if the subject knows the terms of both bets before deciding on either, which is presumably why Elga insists (everywhere else) on this being known by the subject beforehand.
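The arithmetic behind that last point is easy to check. Here is a minimal sketch in Python, assuming the payoffs quoted elsewhere in this thread (each bet wins $15 on one outcome and loses $10 on the other): for every sharp value of P(H), at least one of the two bets has non-negative expected value, so a sharp agent never rejects both.

```python
# Bet A pays $15 if H is false and costs $10 if H is true.
# Bet B pays $15 if H is true and costs $10 if H is false.
# (Payoff figures assumed from the discussion in this thread.)

def ev_a(p):
    """Expected value of bet A when P(H) = p: 15(1-p) - 10p = 15 - 25p."""
    return 15 * (1 - p) - 10 * p

def ev_b(p):
    """Expected value of bet B when P(H) = p: 15p - 10(1-p) = 25p - 10."""
    return 15 * p - 10 * (1 - p)

# A is acceptable whenever p <= 0.6 and B whenever p >= 0.4, so the two
# acceptance regions cover [0, 1]: no sharp p rejects both bets.
assert all(max(ev_a(p / 100), ev_b(p / 100)) >= 0 for p in range(101))
```

An agent with no opinion on P(H), by contrast, has no expected value to compute, which is exactly the loophole that pairing the bets is meant to close.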

By the time we get to the last paragraph of section 5, Elga is once again stating that the subject knows in advance and with certainty the terms of the bets: ‘Keep in mind that this agent cares only about money (her utility scale is linear), that she is certain in advance what bets will be offered, and that she is informed in advance that her state of opinion on the bet proposition will remain absolutely unchanged throughout the process.’ I think he owes us an explanation of why a rational but completely uncertain agent will not accept both bets on the grounds given in the first quote above.

I don’t see in the section you cite any indication that Elga has dropped the hypothesis that the subject knows the terms of both bets in advance. My understanding is that the hypothesis is in force throughout Elga’s paper.

Also see http://djmarsay.wordpress.com/bibliography/binmores-rational-decisions/ . This notes Savage’s notion that Bayesianism is only appropriate in a ‘small’ world, and develops an imprecise solution for muddled worlds. Are we muddled?

Yes. As I note above, Savage’s axioms are for ‘small worlds’. As the label suggests, these seem quite rare. He challenges us to develop an approach for larger worlds.

Elga considers sharp and very imprecise probabilities, but not Boole and Keynes style under-determined algebraic probabilities. If we regard these as imprecise then his argument has a huge hole in it. It is in any case very misleading.

What about Boole and Keynes’ approach?

Let P(H)=p, unconstrained. This is not sharp, and is normally called imprecise.

The expected return from A is $15(1-p) - $10p; from B it is $15p - $10(1-p). If you need all your cash for the bus home you might reasonably reject both. But the combined return from taking both is $5 regardless of the outcome (and hence $5 in expectation for any p), so you ‘should’ accept the pair.
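Those numbers can be checked mechanically. A small Python sketch, with the payoffs assumed as in this thread (A wins $15 if not-H and loses $10 if H; B wins $15 if H and loses $10 if not-H), showing that the pair returns $5 in every outcome and hence has expected value $5 for every p:

```python
# Per-outcome payoffs of the two bets (figures assumed from this thread).
bet_a = {"H": -10, "not_H": 15}
bet_b = {"H": 15, "not_H": -10}

# Accepting both bets yields $5 whichever way H turns out.
payoff_both = {o: bet_a[o] + bet_b[o] for o in ("H", "not_H")}
assert payoff_both == {"H": 5, "not_H": 5}

# So the expected value of the pair is $5 for any probability p = P(H):
def ev_pair(p):
    return p * payoff_both["H"] + (1 - p) * payoff_both["not_H"]

assert all(abs(ev_pair(p / 10) - 5) < 1e-9 for p in range(11))
```

Since the $5 is guaranteed outcome by outcome, no probability judgement, sharp or imprecise, is needed to see that rejecting both bets is dominated by accepting both.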

Elga says “Finally, it is natural to conclude that a perfectly rational agent may reject both bets.” Is this an argument?

Taken as a whole, the hypothesis is in force, but at crucial points it is ignored. In any case his argument is that ‘it is natural to conclude that …’, in which case it seems natural for us to suppose that the huge gap this leaves in his logic may be fatal to his conclusion.

Dave Marsay makes a succinct case for moving on, but here’s the point: in the paragraph on page five beginning ‘But what about you?’ Elga discusses what a rational but unsharp agent would decide to do about the first bet. That discussion, however, is conducted without regard to the fact that the agent knows about the second bet. Given that knowledge, a rational agent having no opinion on P(H) will take the bet and plan to take the second too, for the reason Elga gave in the first of his sentences that I quoted above. That argument shows that in any case where the agent doesn’t have a reason to take just one of the bets in preference to both, she will take both.

I happened to stumble upon my copy of this paper again and found via a Google search that Elga has retracted some of the claims he made in this paper: http://www.princeton.edu/~adame/papers/sharp/sharp-errata.pdf