## Holden Karnofsky on the perils of expected utility

I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities.  This kind of computation makes me anxious.  Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianness not to place too much weight on such computations, especially when the probability estimate is one that can't really be quantitatively well-grounded.

Should you give fifty bucks to an NGO that does malaria prevention in Africa?  Or should you donate it to a group that's working on ways to deflect asteroids on a collision course with the Earth?  The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility).  The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the Earth in the near term, and the chance that the people asking for your money actually have some prospect of success.  You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction; and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution.

Karnofsky says you should still go with malaria.  I'm inclined to think he's right.  One reason:  a strong commitment to expected utility makes you vulnerable to Pascal's Mugging.

## Reader survey: how seriously do you take expected utility?

Slate reposted an old piece of mine about the lottery, on the occasion of tonight’s big Mega Millions drawing.  This prompted an interesting question on Math Overflow:

> I have often seen discussions of what actions to take in the context of rare events in terms of expected value. For example, if a lottery has a 1 in 100 million chance of winning, and delivers a positive expected profit, then one “should” buy that lottery ticket. Or, if an asteroid has a 1 in 1 billion chance of hitting the Earth and thereby extinguishing all human life, then one “should” take the trouble to destroy that asteroid.
>
> This type of reasoning troubles me.
>
> Typically, the justification for considering expected value is based on the Law of Large Numbers, namely, if one repeatedly experiences events of this type, then with high probability the average profit will be close to the expected profit. Hence expected profit would be a good criterion for decisions about common events. However, for rare events, this type of reasoning is not valid. For example, the number of lottery tickets I will buy in my lifetime is far below the asymptotic regime of the law of large numbers.
>
> Is there any justification for using expected value alone as a criterion in these types of rare events?
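The questioner's point about the asymptotic regime is easy to see in a quick simulation.  The lottery parameters below are hypothetical (a 1-in-100-million jackpot chosen to give a positive expected profit), and the lifetime ticket count is a guess; the sketch just shows that at realistic sample sizes the average outcome is nowhere near the expectation:

```python
import random

# Hypothetical lottery with positive expected profit: $1 tickets, a 1-in-100-
# million chance at a $250 million jackpot, so each ticket is "worth" $1.50.
P_WIN = 1 / 100_000_000
JACKPOT = 250_000_000
TICKET_PRICE = 1
LIFETIME_TICKETS = 10_000  # roughly a ticket a day for 27 years

def lifetime_profit(rng):
    """Net profit from buying LIFETIME_TICKETS tickets, simulated draw by draw."""
    wins = sum(rng.random() < P_WIN for _ in range(LIFETIME_TICKETS))
    return wins * JACKPOT - LIFETIME_TICKETS * TICKET_PRICE

rng = random.Random(0)  # fixed seed so the simulation is reproducible
profits = [lifetime_profit(rng) for _ in range(200)]

# The law-of-large-numbers prediction: about +$15,000 per lifetime of play.
expected = LIFETIME_TICKETS * (P_WIN * JACKPOT - TICKET_PRICE)

# But almost every simulated lifetime simply loses $10,000: the sample size
# is far too small for the average to approach the expectation.
losers = sum(p < 0 for p in profits) / len(profits)
print(f"expected lifetime profit: ${expected:,.0f}")
print(f"fraction of simulated lifetimes that lose money: {losers:.2f}")
```

With only ~10^4 tickets against a 10^-8 win probability, the chance of even one win per lifetime is about 10^-4, so nearly every simulated player ends up exactly $10,000 in the hole while the "expected" player is $15,000 ahead.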

This, to me, is a hard question.  Should one always, as the rationality gang at Less Wrong likes to say, “shut up and multiply?” Or does multiplying very small probabilities by very large values inevitably yield confused and arbitrary results?

**Update:** Cosma Shalizi’s take on lotteries and utilities, winningly skeptical as usual.