Tag Archives: utility

Holden Karnofsky on the perils of expected utility

I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities.  This kind of computation makes me anxious.  Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianism not to place too much weight on such computations, especially when the probability estimate is one that can’t really be quantitatively well-grounded.  Should you give fifty bucks to an NGO that does malaria prevention in Africa?  Or should you donate it to a group that’s working on ways to deflect asteroids on a collision course with the Earth?  The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility).  The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the Earth in the near term and the chance that the people asking for your money actually have some prospect of success.  You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction; and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution.  Karnofsky says you should still go with malaria.  I’m inclined to think he’s right.  One reason:  a strong commitment to expected utility makes you vulnerable to Pascal’s Mugging.
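
For concreteness, here’s a toy version of the comparison in Python.  Every number below is invented for illustration (the probabilities and utilities are my assumptions, not anyone’s real estimates), but it shows how a tiny probability attached to an enormous utility can swamp a sensible-looking donation:

    # Toy expected-utility comparison; all numbers are made up for illustration.

    # Malaria donation: moderate chance of a moderate benefit.
    p_malaria_helps = 0.3          # assumed chance that $50 meaningfully helps one family
    u_malaria = 100.0              # assumed utility of that help (arbitrary units)

    # Asteroid donation: tiny chance of an astronomically large benefit.
    p_extinction_averted = 1e-15   # assumed chance that $50 tips the balance
    u_extinction_averted = 1e18    # assumed utility of saving humanity (same units)

    ev_malaria = p_malaria_helps * u_malaria
    ev_asteroid = p_extinction_averted * u_extinction_averted

    print("Expected utility, malaria donation: ", ev_malaria)    # 30.0
    print("Expected utility, asteroid donation:", ev_asteroid)   # 1000.0

With these made-up numbers the asteroid donation “wins” by a factor of about thirty, and by fiddling with the exponents you can make it win by any factor you like; that arbitrariness is exactly what Karnofsky’s informal-Bayesian objection is aimed at.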

 

 


Reader survey: how seriously do you take expected utility?

Slate reposted an old piece of mine about the lottery, on the occasion of tonight’s big Mega Millions drawing.  This prompted an interesting question on Math Overflow:

I have often seen discussions of what actions to take in the context of rare events in terms of expected value. For example, if a lottery has a 1 in 100 million chance of winning, and delivers a positive expected profit, then one “should” buy that lottery ticket. Or, if an asteroid has a 1 in 1 billion chance of hitting the Earth and thereby extinguishing all human life, then one “should” take the trouble to destroy that asteroid.

This type of reasoning troubles me.

Typically, the justification for considering expected value is based on the Law of Large Numbers, namely, if one repeatedly experiences events of this type, then with high probability the average profit will be close to the expected profit. Hence expected profit would be a good criterion for decisions about common events. However, for rare events, this type of reasoning is not valid. For example, the number of lottery tickets I will buy in my lifetime is far below the asymptotic regime of the law of large numbers.

Is there any justification for using expected value alone as a criterion in these types of rare events?

This, to me, is a hard question.  Should one always, as the rationality gang at Less Wrong likes to say, “shut up and multiply?” Or does multiplying very small probabilities by very large values inevitably yield confused and arbitrary results?
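
To see the asymptotic worry concretely, here’s a quick simulation sketch in Python with an invented lottery (a $1 ticket and a 1-in-100-million chance at a $350 million jackpot, so the expected profit per ticket is a healthy $2.50).  Even an absurdly diligent lifetime of ticket-buying never gets close to the regime where the average payoff tracks that expectation:

    import random

    # Hypothetical lottery chosen so the expected value is positive:
    # a $1 ticket with a 1-in-100-million chance of a $350 million jackpot.
    p_win = 1e-8
    jackpot = 350e6
    ticket_price = 1.0
    expected_profit = p_win * jackpot - ticket_price   # $2.50 per ticket

    lifetime_tickets = 50_000   # roughly a ticket a day for 137 years
    profit = sum(
        (jackpot if random.random() < p_win else 0.0) - ticket_price
        for _ in range(lifetime_tickets)
    )

    print("Expected profit per ticket:  $%.2f" % expected_profit)
    print("Simulated lifetime average: $%.2f" % (profit / lifetime_tickets))

Almost every run of this prints -$1.00 for the lifetime average: with overwhelming probability you simply lose the ticket price every single time, which is the questioner’s point about the law of large numbers never kicking in.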

Update:  Cosma Shalizi’s take on lotteries and utilities, winningly skeptical as usual.


I overrate expensive wine and I will not apologize

A recent study shows that most people rate wine as tastier when the label says it’s expensive. Jonah Lehrer, from his forthcoming book How We Decide, writes:

Twenty people sampled five Cabernet Sauvignons that were distinguished solely by their retail price, with bottles ranging from $5 to $90. Although the people were told that all five wines were different, the scientists weren’t telling the truth: there were only three different wines. This meant that the same wines would often reappear, but with different price labels. For example, the first wine offered during the tasting – it was a cheap bottle of Californian Cabernet – was labeled both as a $5 wine (its actual retail price) and as a $45 wine, a 900 percent markup…. Not surprisingly, the subjects consistently reported that the more expensive wines tasted better. They preferred the $90 bottle to the $10 bottle, and thought the $45 Cabernet was far superior to the $5 plonk.

Of course, the wine preferences of the subjects were clearly nonsensical. Instead of acting like rational agents – getting the most utility for the lowest possible price – they were choosing to spend more money for an identical product.

I think the wine preferences of the subjects were clearly not nonsensical. Maybe an unlabelled $45 bottle of wine tastes no better than a $5 unlabelled bottle of wine. But that’s why people don’t buy unlabelled bottles of wine! The utility of the wine you drink isn’t contained in the molecules striking your tongue and your nose; you’re enjoying the possession of something people have agreed to value. When you travel three hours to eat the best barbecue in Texas, the long drive and the long wait are part of what you’re paying for. If you think that’s nonsensical, you’ve got problems with people’s behavior that go way past their selections from the wine list.

Note also: subjects with an expertise in wine did recognize, and prefer, the pricier wines. So consider the following experiment: give a heterogeneous group of readers a selection of novels by Tom Clancy and F. Scott Fitzgerald, with the covers torn off. You might find that the 14-year-olds in the group rated the two groups of novels equally, while those with an expertise in literature preferred the Fitzgerald, even without the identification. Now suppose one of the 14-year-olds, with knowledge of these results, was offered the choice of a book by TC or a book by FSF for twice the price. And let’s say this 14-year-old reasons, “The experiment suggests I’ll like these books equally; but my teachers and my parents say that Fitzgerald is great literature and Tom Clancy is trash, so maybe I’d better take their word for it and try the Fitzgerald.” Is the teenager’s behavior clearly nonsensical?

Or maybe the example of JT Leroy is a little less scale-thumby. People are less interested in his books now that we know the author isn’t who he claimed to be — isn’t even, in fact, a he. Same books, same sentences. Is that nonsensical?

By the way, I’m not really imputing to Lehrer the view asserted in the excerpt from his book: in an earlier blog post on a similar study, he writes:

What these experiments neatly demonstrate is that the taste of a wine, like the taste of everything, is not merely the sum of our inputs, and cannot be solved in a bottom-up fashion. It cannot be deduced by beginning with our simplest sensations and extrapolating upwards. When we taste a wine, we aren’t simply tasting the wine. This is because what we experience is not what we sense.

which seems to me much more correct.

The wine experiment reminded me of GMU economist Robin Hanson‘s blog, Overcoming Bias. I think I’ll write a bit more about this in a later post, but I’ll close with this question: do Robin Hanson and like-minded economists think it’s rational to believe wine tastes better if you know it’s expensive?
