Tag Archives: rationality

Michael Harris on Elster on Montaigne on Diagoras on Abraham Wald

Michael Harris — who is now blogging! — points out that Montaigne very crisply got to the point I make in How Not To Be Wrong about survivorship bias, Abraham Wald, and the missing bullet holes:

Here, for example, is how Montaigne explains the errors in reasoning that lead people to believe in the accuracy of divinations: “That explains the reply made by Diagoras, surnamed the Atheist, when he was in Samothrace: he was shown many vows and votive portraits from those who have survived shipwrecks and was then asked, ‘You, there, who think that the gods are indifferent to human affairs, what have you to say about so many men saved by their grace?’— ‘It is like this’, he replied, ‘there are no portraits here of those who stayed and drowned—and they are more numerous!’ ”

The quote is from Jon Elster, Reason and Rationality, p.26.


What is it like to be a vampire and/or parent?

Andrew Gelman contemplates a blog post of L.A. Paul and Kieran Healy (based on a preprint of Paul’s) which asks:  is it possible to make rational decisions about whether to have children?

Paul and Healy’s argument is that, given the widely accepted claim that childbearing is a transformational event whose nature it’s impossible to convey to those who haven’t done it, it may be impossible for people to use the usual “what would it be like to do X?” method of deciding whether to have a kid.

Gelman says:

…even though you can’t know how it will feel after you have the baby, you can generalize from others’ experiences. People are similar to each other in many ways, and you can learn a lot about future outcomes by observing older people (or by reading research such as that popularized by Kahneman, regarding predicted vs. actual future happiness). Thus, I think it’s perfectly rational to aim to have (or not have) a child, with the decision a more-or-less rational calculation based on extrapolation from the experiences of older people, similar to oneself, who’ve faced the same decision earlier in their lives.

Here’s how I’d defend Paul and Healy from this objection.

Suppose you had a lot of friends who’d been bitten by vampires and transformed into immortal soulless monsters.  And when you meet up with these guys they’re always going on and on about how awesome it is being a vampire:  “I’m totally glad I became undead, I’d never go back to being human, are you kidding me?  Now I’m superstrong, I’m immortal, I have this great group of vampires I run with, I feel like I really know what it’s all about now in a way I didn’t get before.  Life has meaning, life has purpose.  I can’t really explain it, you just gotta do it.”  And you know, you sort of wish they’d be a little less rah-rah about it, like, do you have to post a picture on Facebook of every person you kill and eat?  You’re a vampire, that’s what you do, I get it!  But at the same time you can’t help starting to wonder whether they’re on to something.

AND YET:

I don’t think it’s actually good decision-making to say:  people similar to me became vampires and prefer that to their former lives as humans, so I should become a vampire too.  Because the vampire is not the same being as the human who used to occupy that body.  Who cares whether vampires like being vampires better than they like being human?  What matters is what I prefer, not what the vampiric version of me would prefer.  And I, a human, prefer not to be a vampire.

As for me, I’m a parent, and I don’t think that my identity underwent a radical transformation.  I’m the same person I was, but with two kids.   So when I tell friends it’s my experience that having kids is pretty worthwhile, I’m not saying that from across an unbridgeable perceptual divide — I’m saying that I am still similar to you, and I like having kids, so you might too.  Paul and Healy’s argument doesn’t refer to my case at all:  they’re just saying that if parents are about as different from non-parents as vampires are from humans, then there’s a real difficulty in deciding whether to have children based on parents’ testimonies, however sincere.

(Remark:  Invasion of the Body Snatchers is sort of about the question Paul and Healy raise.  Many have understood the original movie as referring to Communism, but it might be interesting to go back and watch it as a movie about childbearing.  It is, after all, about gross slimy little creatures that grow in the dark and sustain themselves on your body.  And then the new being known as “you” goes around trying to convince others that the experience is really worth it!)

Update:  Kieran points out that the reference to “body-snatching” is already present in their original post — I must have read this, forgotten it, then thought I’d come up with it as an apposite example myself….


More on the end of history: what is a rational prediction?

It’s scrolled off the bottom of the page now, but there’s an amazing comment thread going on under my post on “The End of History Illusion,” the Science paper that got its feet caught in a subtle but critical statistical error.

Commenter Deinst has been especially good, digging into the paper’s dataset (kudos to the authors for making it public!) and finding further reasons to question its conclusions.  In this comment, he makes the following observation:  Quoidbach et al. believe there’s a general trend to underestimate future changes in “favorites,” testing this by studying people’s predictions about their favorite movies, food, music, vacations, hobbies, and their best friends, averaging, and finding a slightly negative bias.  What Deinst noticed is that the negative bias is almost entirely driven by people’s unwillingness to predict that they might change their best friend.  On four of the six dimensions, respondents predicted more change than actually occurred.  That sounds much more like “people assign positive moral value to loyalty to friends” than “people have a tendency across domains to underestimate change.”

But here I want to complicate a bit what I wrote in the post.  Neither Quoidbach’s paper nor my post directly addresses the question:  what do we mean by a “rational prediction?”  Precisely:  if there is an outcome which, given the knowledge I have, is a random variable Y, what do I do when asked to “predict” the value of Y?  In my post I took the “rational” answer to be E(Y).  But this is not the only option.  You might think of a rational person as one who makes the prediction most likely to be correct, i.e. the modal value of Y.  Or you might, as Deinst suggests, think that rational people “run a simulation,” taking a random draw from Y and reporting that as the prediction.

Now suppose people do that last thing, exactly on the nose.  Say X is my level of extraversion now, Y is my level of extraversion in 10 years, and Z is my prediction for the value of Y.  In the model described in the first post, the value of Z depends only on the value of X; if X=a, it is E(Y|X=a).  But in the “run a simulation” model, the joint distribution of X and Z is exactly the same as the joint distribution of X and Y; in particular, E(|Z-X|) and E(|Y-X|) agree.
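Here’s a minimal simulation sketch of the difference between the two rules.  (The Gaussian model and the correlation 0.7 are my own invented illustration, not anything from the paper.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model (my invention): current trait X and future trait Y are
# jointly Gaussian with correlation rho.
rho = 0.7
X = rng.standard_normal(n)
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Rule 1: report the conditional expectation, Z = E(Y | X) = rho * X.
Z_expect = rho * X

# Rule 2: "run a simulation" -- report one fresh draw from the
# conditional law of Y given X, so (X, Z) is distributed like (X, Y).
Z_sample = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print("E|Y - X| (actual change):   ", np.abs(Y - X).mean())
print("E|Z - X| (expectation rule):", np.abs(Z_expect - X).mean())
print("E|Z - X| (one-sample rule): ", np.abs(Z_sample - X).mean())
# The expectation rule reports less change than actually occurs;
# the one-sample rule matches E|Y - X|, just as claimed above.
```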

I hasten to emphasize that there’s no evidence Quoidbach et al. have this model of prediction in mind, but it would give some backing to the idea that, absent an “end of history bias,” you could imagine the absolute difference in their predictor condition matching the absolute difference in the reporter condition.

There’s some evidence that people actually do use small samples, or even just one sample, to predict variables with unknown distributions, and moreover that doing so can actually maximize utility, under some hypotheses on the cognitive cost of carrying out a more fully Bayesian estimate.

Does that mean I think Quoidbach’s inference is OK?  Nope — unfortunately, it stays wrong.

It seems very doubtful that we can count on people hewing exactly to the one-sample model.

Example:  suppose one in twenty people radically changes their level of extraversion in a 10-year interval.  What happens if you ask people to predict whether they themselves are going to experience such a change in the next 10 years?  Under the one-sample model, 5% of people would say “yes.”  Is this what would actually happen?  I don’t know.  Is it rational?  Certainly it fails to maximize the likelihood of being right.  In a population of fully rational Bayesians, everyone would recognize shifts like this as events with probability less than 50%, and everyone would say “no” to this question.  Quoidbach et al. would categorize this result as evidence for an “end of history illusion.”  I would not.
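A sketch of that hypothetical (all modeling choices mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
p_change = 0.05  # one in twenty people radically changes in 10 years

actually_changed = rng.random(n) < p_change

# One-sample predictors: each person reports a single draw from their
# own distribution, so about 5% predict a radical change.
one_sample = rng.random(n) < p_change

# Likelihood-maximizing Bayesians: a radical change has probability
# below 50%, so every single person predicts "no."
bayesian = np.zeros(n, dtype=bool)

print("actual rate of change:    ", actually_changed.mean())  # ~0.05
print("one-sample predicted rate:", one_sample.mean())        # ~0.05
print("Bayesian predicted rate:  ", bayesian.mean())          # 0.0
# The fully rational Bayesians "underpredict" change, which the
# paper's test would read as an end of history illusion.
```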

Now we’re going to hear from my inner Andrew Gelman.  (Don’t you have one?  They’re great!)  I think the real problem with Quoidbach et al’s analysis is that they think their job is to falsify the null hypothesis.  This makes sense in a classical situation like a randomized clinical trial.  Your null hypothesis is that the drug has no effect.  And your operationalization of the null hypothesis — the thing you literally measure — is that the probability distribution on “outcome for patients who get the drug” is the same as the one on “outcome for patients who don’t get the drug.”  That’s reasonable!  If the drug isn’t doing anything, and if we did our job randomizing, it seems pretty safe to assume those distributions are the same.

What’s the null hypothesis in the “end of history” paper?   It’s that people predict the extent of personality change in an unbiased way, neither underpredicting nor overpredicting it.

But the operationalization is that the absolute difference of predictions, |Z-X|, is drawn from the same distribution as the difference of actual outcomes, |Y-X|, or at least that these distributions have the same means.  As we’ve seen, even without any “end of history illusion”, there’s no good reason for this version of the null hypothesis to be true.  Indeed, we have pretty good reason to believe it’s not true.  A rejection of this null hypothesis tells us nothing about whether there’s an end of history illusion.  It’s not clear to me it tells you anything at all.


Holden Karnofsky on the perils of expected utility

I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities.  This kind of computation makes me anxious.  Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianness not to place too much weight on such computations, especially when the probability computation is one that can’t really be quantitatively well-grounded.

Should you give fifty bucks to an NGO that does malaria prevention in Africa?  Or should you donate it to a group that’s working on ways to deflect asteroids on a collision course with the Earth?  The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility.)  The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the earth in the near term, and the chance that the people asking for your money actually have some prospect of success.  You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction; and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution.

Karnofsky says you should still go with malaria.  I’m inclined to think he’s right.  One reason:  a strong commitment to expected utility makes you vulnerable to Pascal’s Mugging.
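For concreteness, here’s the kind of toy computation at issue; every number below is invented for illustration.

```python
# Back-of-envelope expected-utility comparison; all numbers invented.
malaria_prob, malaria_utility = 0.3, 100.0     # helps one family, decent odds
asteroid_prob, asteroid_utility = 1e-12, 1e15  # averts extinction, tiny odds

print("malaria EV: ", malaria_prob * malaria_utility)    # 30.0
print("asteroid EV:", asteroid_prob * asteroid_utility)  # 1000.0
# The naive computation favors the asteroid donation -- but the 1e-12
# is exactly the kind of probability that can't be quantitatively
# grounded, which is Karnofsky's point.
```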


Cosma Shalizi 1, game theory 0

From Cosma’s review of a new book on social network analysis:

What game theorists somewhat disturbingly call rationality is assumed throughout—in other words, game players are assumed to be hedonistic yet infinitely calculating sociopaths endowed with supernatural computing abilities.


Reader survey: how seriously do you take expected utility?

Slate reposted an old piece of mine about the lottery, on the occasion of tonight’s big Mega Millions drawing.  This prompted an interesting question on Math Overflow:

I have often seen discussions of what actions to take in the context of rare events in terms of expected value. For example, if a lottery has a 1 in 100 million chance of winning, and delivers a positive expected profit, then one “should” buy that lottery ticket. Or, if an asteroid has a 1 in 1 billion chance of hitting the Earth and thereby extinguishing all human life, then one “should” take the trouble to destroy that asteroid.

This type of reasoning troubles me.

Typically, the justification for considering expected value is based on the Law of Large Numbers, namely, if one repeatedly experiences events of this type, then with high probability the average profit will be close to the expected profit. Hence expected profit would be a good criterion for decisions about common events. However, for rare events, this type of reasoning is not valid. For example, the number of lottery tickets I will buy in my lifetime is far below the asymptotic regime of the law of large numbers.

Is there any justification for using expected value alone as a criterion in these types of rare events?

This, to me, is a hard question.  Should one always, as the rationality gang at Less Wrong likes to say, “shut up and multiply?” Or does multiplying very small probabilities by very large values inevitably yield confused and arbitrary results?
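The questioner’s point about the asymptotic regime is easy to see in a quick simulation.  (The odds and jackpot here are invented, rigged so the ticket has positive expected value.)

```python
import numpy as np

rng = np.random.default_rng(2)

p_win = 1 / 100_000_000  # 1-in-100-million jackpot odds
jackpot = 300_000_000    # chosen so each $1 ticket has expected profit +$2
tickets = 5_000          # a (generous) lifetime of ticket-buying
n_people = 1_000_000

wins = rng.binomial(tickets, p_win, size=n_people)
profit = wins * jackpot - tickets  # $1 per ticket

print("expected profit per ticket: ", p_win * jackpot - 1)  # 2.0
print("fraction who come out ahead:", (profit > 0).mean())  # ~0.00005
# Positive expected value, yet essentially everyone who plays a
# lifetime's worth of tickets just loses their stake -- the law of
# large numbers never kicks in at this scale.
```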

Update:  Cosma Shalizi’s take on lotteries and utilities, winningly skeptical as usual.


Irrational liking

Deane Yang asks in comments:  “What athletes do you especially like?”  That’s actually what I was going to post about today anyway.  A short list, excluding people who play for teams I follow:  Rickey Henderson.  Manny Ramirez.  Barry Bonds.  Jim Thome.  Nomar Garciaparra.  Edgar Martinez.  Randall Cunningham.  Ricky Williams.  Jake Plummer.  Gus Frerotte.  Surya Bonaly.  Arantxa Sanchez.


Irrational hatred and the Super Bowl

I had never seen Peyton Manning play football until the last five minutes of tonight’s Super Bowl.  But I always rooted against him.  Just didn’t like the guy, while not knowing anything about him.  I have the same sour feeling about some other athletes — Tiger Woods, Derek Jeter, Jim McMahon, Nancy Kerrigan, Michael Phelps — but these are all people I’ve seen play.

I found the last five minutes of the Super Bowl extremely satisfying, justifiably or not.


Laza and Kuznetsov on cubic fourfolds

By chance, Wisconsin had two seminars about cubic fourfolds on consecutive days last week, by means of which I learned much more about cubic fourfolds than I had in my life up to now.  Summary follows — based on my understanding of what was said, so all mistakes belong to me and not the speakers.

Radu Laza talked about his work on the moduli space of cubic fourfolds. One way you can study families of varieties is via the period map, which sends a variety X to the Hodge structure on H^i(X), for some degree i. It follows from a 1985 theorem of Claire Voisin that the period map from the space of cubic fourfolds X to Hodge structures on H^4(X) is injective; so the moduli space of cubic fourfolds is a subvariety of the moduli space of Hodge structures of the right shape, which is a nice 20-dimensional ball quotient. Laza goes further, computing the precise image of the period map, and thus giving a very clean description of the moduli space.
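For the record (standard facts, not something from the talk): a smooth cubic fourfold X has middle Hodge numbers

```latex
h^{4,0} = 0, \qquad h^{3,1} = 1, \qquad h^{2,2} = 21, \qquad h^{1,3} = 1, \qquad h^{0,4} = 0,
```

and the primitive part of H^{2,2} is 20-dimensional, which is where the 20 in Laza’s period domain comes from.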

Alexander Kuznetsov gave a talk the following day on the rationality of cubic fourfolds.  Some cubic fourfolds are rational; for instance, if X is a cubic fourfold containing two skew planes P and Q, then you get a map from P x Q to X sending (p,q) to the third point of intersection of the line pq with X, and this map is birational.  The generic cubic fourfold, on the other hand, is conjectured to be non-rational; but I believe that no example of a provably non-rational cubic fourfold is known.

Kuznetsov’s idea is to approach this problem from the viewpoint of the derived category.  The derived category of a smooth cubic fourfold has a “semi-orthogonal decomposition” into a bunch of simple pieces, which don’t depend on X, and one interesting piece, a subcategory we call A_X.  The category A_X isn’t a birational invariant, but it does behave nicely under basic birational operations — when you blow up a smooth subvariety Z of X (necessarily of dimension 0, 1, or 2) you find that A_X simply “picks up a copy of A_Z.”  In particular, if X is birational to P^4, A_X must be “made out of” pieces coming from derived categories of varieties of dimension at most 2.  Kuznetsov believes this criterion can be used in practice to obstruct the rationality of X.
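In symbols, the decomposition (as I understand it; any transcription errors are mine) is

```latex
D^b(X) = \langle A_X,\ \mathcal{O}_X,\ \mathcal{O}_X(1),\ \mathcal{O}_X(2) \rangle,
\qquad
A_X = \{\, F : \operatorname{Ext}^\bullet(\mathcal{O}_X(i), F) = 0,\ i = 0, 1, 2 \,\}.
```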

If this sounds familiar, it’s because it’s explicitly modeled on the Clemens-Griffiths obstruction to rationality of a cubic threefold Y.  There, the role of A_X is played by the intermediate Jacobian J(Y); and Clemens and Griffiths prove that if J is not isomorphic to the Jacobian of a curve, Y can’t be rational.  A critical role is played here by the semistability of the category of principally polarized abelian varieties; this doesn’t hold for triangulated categories, but Kuznetsov believes that a suitable version of semistability should apply to some class of categories including A_X.

If X is a cubic fourfold, the Fano variety F(X) parametrizing lines in X is again a fourfold, and is deformation equivalent to the Hilbert scheme parametrizing pairs of points on a K3.  This fact is omnipresent in Laza’s work, and it shows up for Kuznetsov too:  it turns out that in the cases where X is known to be rational (for instance, the infinite families produced by Brendan Hassett in his thesis) the Fano variety is not just a deformation of, but actually is, the Hilbert scheme Hilb^2 S for some K3 surface S.  And in this case, Kuznetsov’s A_X is nothing but the derived category D^b(S).

This might lead one to ask whether one could make a conjecture that dispensed with derived categories entirely (though I feel guilty and antique for suggesting that this might be a good thing!) and guess that X is rational exactly when F(X) is Hilb^2 of a K3 surface S.  I think this would be pretty close to a kind of Hodge-theoretic criterion for rationality, as in the cubic threefold case.

But if I understand correctly, Kuznetsov doesn’t think it can be that simple.  There is a divisor in the moduli space of cubic fourfolds which is naturally identified with the moduli space of K3 surfaces of some given degree.  Hassett shows that on this 19-dimensional variety there is a countable union of 18-dimensional families of rational cubic fourfolds.  Kuznetsov says that if X is a point on this divisor, and S the corresponding K3, that A_X is not the derived category D^b(S), but a twisted derived category D^b(S,alpha) for some Brauer class alpha on S which is generically nonvanishing (but vanishes on Hassett’s locus.)   And this twist, he believes, obstructs rationality — though without a good enough “semistability” property for the categories involved, nothing can yet be proved.


I overrate expensive wine and I will not apologize

A recent study shows that most people rate wine as tastier if it has a fancy label. Jonah Lehrer, from his forthcoming book How We Decide, writes:

Twenty people sampled five Cabernet Sauvignons that were distinguished solely by their retail price, with bottles ranging from $5 to $90. Although the people were told that all five wines were different, the scientists weren’t telling the truth: there were only three different wines. This meant that the same wines would often reappear, but with different price labels. For example, the first wine offered during the tasting – it was a cheap bottle of Californian Cabernet – was labeled both as a $5 wine (its actual retail price) and as a $45 dollar wine, a 900 percent markup…. Not surprisingly, the subjects consistently reported that the more expensive wines tasted better. They preferred the $90 bottle to the $10 bottle, and thought the $45 Cabernet was far superior to the $5 plonk.

Of course, the wine preferences of the subjects were clearly nonsensical. Instead of acting like rational agents – getting the most utility for the lowest possible price – they were choosing to spend more money for an identical product.

I think the wine preferences of the subjects were clearly not nonsensical. Maybe an unlabelled $40 bottle of wine tastes no better than a $5 unlabelled bottle of wine. But that’s why people don’t buy unlabelled bottles of wine! The utility of the wine you drink isn’t contained in the molecules striking your tongue and your nose; you’re enjoying the possession of something people have agreed to value. When you travel three hours to eat the best barbecue in Texas, the long drive and the long wait are part of what you’re paying for. If you think that’s nonsensical, you’ve got problems with people’s behavior that go way past their selections from the wine list.

Note also: subjects with an expertise in wine did recognize, and prefer, the pricier wines. So consider the following experiment: give a heterogeneous group of readers a selection of novels by Tom Clancy and F. Scott Fitzgerald, with the covers torn off. You might find that the 14-year-olds in the group rated the two groups of novels equally, while those with an expertise in literature preferred the Fitzgerald, even without the identification. Now suppose one of the 14-year-olds, with knowledge of these results, was offered the choice of a book by TC or a book by FSF for twice the price. And let’s say this 14-year-old reasons, “The experiment suggests I’ll like these books equally; but my teachers and my parents say that Fitzgerald is great literature and Tom Clancy is trash, so maybe I’d better take their word for it and try the Fitzgerald.” Is the teenager’s behavior clearly nonsensical?

Or maybe the example of JT Leroy is a little less scale-thumby. People are less interested in his books now that we know the author isn’t who he claimed to be — isn’t even, in fact, a he. Same books, same sentences. Is that nonsensical?

By the way, I’m not really imputing to Lehrer the view he asserts in his book: in an earlier blog post on a similar study, he writes

What these experiments neatly demonstrate is that the taste of a wine, like the taste of everything, is not merely the sum of our inputs, and cannot be solved in a bottom-up fashion. It cannot be deduced by beginning with our simplest sensations and extrapolating upwards. When we taste a wine, we aren’t simply tasting the wine. This is because what we experience is not what we sense.

which seems to me much more correct.

The wine experiment reminded me of GMU economist Robin Hanson‘s blog, Overcoming Bias. I think I’ll write a bit more about this in a later post, but I’ll close with this question: do Robin Hanson and like-thinking economists think it’s rational to believe wine tastes better if you know it’s expensive?
