## Holden Karnofsky on the perils of expected utility

I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities.  This kind of computation makes me anxious.  Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianness not to place too much weight on such computations, especially when the probability estimate is one that can't really be quantitatively well-grounded.

Should you give fifty bucks to an NGO that does malaria prevention in Africa?  Or should you donate it to a group that's working on ways to deflect asteroids on a collision course with the Earth?  The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility).  The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the Earth in the near term and the chance that the people asking for your money actually have some prospect of success.

You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction, and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution.  Karnofsky says you should still go with malaria.  I'm inclined to think he's right.  One reason:  a strong commitment to expected utility makes you vulnerable to Pascal's Mugging.
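To make the tension concrete, here's a toy version of the comparison in Python. Every number below is invented for illustration; none comes from GiveWell or from any real cost-effectiveness estimate.

```python
# Toy expected-utility comparison. All figures are invented for
# illustration; none come from GiveWell or any real estimate.

# Malaria donation: medium probability of a medium-sized benefit.
p_malaria = 0.3      # assumed chance $50 seriously helps one family
u_malaria = 100.0    # assumed utility of that help, in arbitrary units

# Asteroid donation: tiny probability of an enormous benefit.
p_asteroid = 1e-10   # assumed chance $50 helps avert extinction
u_asteroid = 1e12    # assumed utility of saving humanity, same units

ev_malaria = p_malaria * u_malaria      # = 30 utility units
ev_asteroid = p_asteroid * u_asteroid   # ≈ 100 utility units

# Naive expected utility says take the asteroid bet...
print(ev_asteroid > ev_malaria)         # True

# ...but shrink the guessed probability a hundredfold -- well within
# the uncertainty of such an estimate -- and the ranking flips.
print(1e-12 * u_asteroid > ev_malaria)  # False
```

The point of the sketch is that the whole conclusion is hostage to a probability nobody can actually pin down to within two orders of magnitude.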

## Reader survey: how seriously do you take expected utility?

Slate reposted an old piece of mine about the lottery, on the occasion of tonight’s big Mega Millions drawing.  This prompted an interesting question on Math Overflow:

> I have often seen discussions of what actions to take in the context of rare events in terms of expected value. For example, if a lottery has a 1 in 100 million chance of winning, and delivers a positive expected profit, then one “should” buy that lottery ticket. Or, if an asteroid has a 1 in 1 billion chance of hitting the Earth and thereby extinguishing all human life, then one “should” take the trouble to destroy that asteroid.
>
> This type of reasoning troubles me.
>
> Typically, the justification for considering expected value is based on the Law of Large Numbers, namely, if one repeatedly experiences events of this type, then with high probability the average profit will be close to the expected profit. Hence expected profit would be a good criterion for decisions about common events. However, for rare events, this type of reasoning is not valid. For example, the number of lottery tickets I will buy in my lifetime is far below the asymptotic regime of the law of large numbers.
>
> Is there any justification for using expected value alone as a criterion in these types of rare events?

This, to me, is a hard question.  Should one always, as the rationality gang at Less Wrong likes to say, “shut up and multiply?” Or does multiplying very small probabilities by very large values inevitably yield confused and arbitrary results?
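One way to feel the force of the questioner's objection is to simulate a "lifetime" of positive-expected-value lottery tickets. The odds match the question; the jackpot and lifetime ticket count are my own hypothetical choices, set so each \$1 ticket has an expected profit of +\$2:

```python
import random

random.seed(0)

P_WIN = 1e-8       # 1-in-100-million odds, as in the question
JACKPOT = 3e8      # hypothetical $300M prize: EV of a $1 ticket is +$2
TICKETS = 5_000    # an assumed (generous) lifetime of ticket-buying
LIFETIMES = 500    # number of simulated lifetimes

losing = 0
for _ in range(LIFETIMES):
    winnings = sum(JACKPOT for _ in range(TICKETS) if random.random() < P_WIN)
    if winnings - TICKETS < 0:  # spent $1 per ticket
        losing += 1

# Expected profit per lifetime is +$10,000, yet essentially every
# simulated lifetime ends in the red: far too few trials for the
# law of large numbers to bite.
print(losing / LIFETIMES)
```

With these odds you would need millions of lifetimes, not one, before the average profit per ticket settled anywhere near its expectation.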

Update: Cosma Shalizi’s take on lotteries and utilities, winningly skeptical as usual.

## Slate piece on martingales, expected value, and the bailout

I have a piece in Slate today about the “martingale strategy” and how it relates to the financial crisis:

> Here’s how to make money flipping a coin. Bet 100 bucks on heads. If you win, you walk away \$100 richer. If you lose, no problem; on the next flip, bet \$200 on heads, and if you win this time, take your \$100 profit and quit. If you lose, you’re down \$300 on the day; so you double down again and bet \$400. The coin can’t come up tails forever! Eventually, you’ve got to win your \$100 back.

Or not. If you want to see how well this strategy works in practice (answer: not very), try a few runs of the martingale applet at UIUC.
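If you'd rather not click through, here is a minimal sketch of a single martingale session with a finite bankroll. The \$1,000 bankroll and 100,000-session setup are my assumptions, not the applet's settings:

```python
import random

random.seed(1)

def martingale_session(bankroll, base_bet=100):
    """Double the bet after each loss; stop on a win, or when the
    next bet can't be covered. Returns the final bankroll."""
    bet = base_bet
    while bet <= bankroll:
        if random.random() < 0.5:  # heads: win the current bet,
            return bankroll + bet  # recovering all losses plus base_bet
        bankroll -= bet            # tails: lose the bet and double down
        bet *= 2
    return bankroll                # busted out of the doubling scheme

results = [martingale_session(1_000) for _ in range(100_000)]

# Most sessions end +$100, but the roughly 1-in-8 sessions that open
# with three straight tails lose $700 -- and on a fair coin the average
# final bankroll stays right at the $1,000 you started with.
print(sum(results) / len(results))
```

That average is no accident: each individual bet is fair, so no betting schedule can make the expected final bankroll exceed the starting one.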

My colleague Timo Seppalainen explained to me a nice way of seeing the long-term failure of the martingale.  Normalize the base bet to one dollar, and let X_j be the length of the jth run of tails. Then X_j is 0 with probability 1/2, 1 with probability 1/4, 2 with probability 1/8, and so on. The chance that X_j >= n is 1/2^n. In particular, the probability that X_j is at least (log_2 j + 1) is about 1/(2j).
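That tail probability is easy to sanity-check by simulation (a quick Monte Carlo of my own, not part of Timo's argument):

```python
import random

random.seed(2)

def run_of_tails():
    """Length of the run of tails before the next head (the X_j above)."""
    n = 0
    while random.random() < 0.5:  # tails with probability 1/2
        n += 1
    return n

samples = [run_of_tails() for _ in range(200_000)]
for n in range(1, 6):
    empirical = sum(x >= n for x in samples) / len(samples)
    # empirical P(X >= n) sits close to the claimed 1/2^n
    print(n, round(empirical, 4), 1 / 2**n)
```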

But the amount of money you lose on a run of n tails is about 2^n, while the amount of money you’ve won prior to the start of the jth run is about j. In particular, if X_j >= (log_2 j + 1) then you’re at least j dollars down after the jth run of tails. Since the sum of 1/(2j) as j goes to infinity diverges, you almost always have infinitely many occurrences of X_j >= (log_2 j + 1); as I learned from Timo, this follows from the second Borel-Cantelli Lemma, which applies here because the X_j are independent.

So there are infinitely many j such that, after the jth run of tails, you’re at least j dollars down. Even if you start with a million dollars, that means you’re eventually going broke.
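You can watch the eventual ruin happen directly. In the sketch below (my setup: \$1 million bankroll, \$100 base bet, play repeated doubling cycles until a run of tails you can't cover), ruin typically arrives within a few thousand winning cycles:

```python
import random

random.seed(3)

def cycles_until_ruin(bankroll, base_bet=100, max_cycles=1_000_000):
    """Play repeated martingale cycles until a run of tails exhausts
    the bankroll; return the number of completed cycles."""
    for cycle in range(max_cycles):
        bet = base_bet
        while bet <= bankroll:
            if random.random() < 0.5:  # heads: recover losses + base_bet
                bankroll += bet
                break
            bankroll -= bet            # tails: lose the bet and double
            bet *= 2
        else:                          # couldn't cover the next bet: ruin
            return cycle
    return max_cycles

# Each winning cycle nets only $100; the inevitable long run of tails
# eventually wipes out the whole million.
print(cycles_until_ruin(1_000_000))
```

Each cycle has expected value zero, but Borel-Cantelli guarantees the catastrophic run of tails shows up sooner or later.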