Tag Archives: andrew gelman

Should Andrew Gelman have stayed a math major?

Andrew writes:

As I’ve written before, I was a math and physics major in college but I switched to statistics because math seemed pointless if you weren’t the best (and I knew there were people better than me), and I just didn’t feel like I had a good physical understanding.

But every single mathematician, except one, is not the best (and even that person probably has to concede that there are still greater mathematicians who happen to be dead).  Surely that doesn’t make our work pointless.

This myth — that the only people who matter in math are people at the very top of a fixed mental pyramid, people who are identified near birth and increase their lead over time, that math is for them and not for us — is what I write about in today’s Wall Street Journal, in a piece that’s mostly drawn from How Not To Be Wrong.  I quote both Mark Twain and Terry Tao — how’s that for appeal to authority?  The corresponding book section also has stuff about Hilbert and Minkowski (guess which one was the prodigy!), Ramanujan, and an extended football metaphor which I really like but which was too much of a digression for a newspaper piece.

There’s also a short video interview on WSJ Live where I talk a bit about the idea of the genius.

In other launch-related publicity, I was on Slate’s podcast, The Gist, talking to Mike Pesca about the Laffer curve and the dangers of mindless linear regression.

More book-related stuff coming next week; stay tuned!

Update:  Seems like I misread Andrew’s post; I thought when he said “switched” he meant “switched majors,” but actually he meant he kept studying math and then moved into a (slightly!) different career, statistics, where he used the math he learned: exactly what I say in the WSJ piece I want more people to do!


What is it like to be a vampire and/or parent?

Andrew Gelman contemplates a blog post by L.A. Paul and Kieran Healy (based on a preprint of Paul’s), which asks:  is it possible to make rational decisions about whether to have children?

Paul and Healy’s argument is that, given the widely accepted claim that childbearing is a transformational event whose nature it’s impossible to convey to those who haven’t done it, it may be impossible for people to use the usual “what would it be like to do X?” method of deciding whether to have a kid.

Gelman says:

…even though you can’t know how it will feel after you have the baby, you can generalize from others’ experiences. People are similar to each other in many ways, and you can learn a lot about future outcomes by observing older people (or by reading research such as that popularized by Kahneman, regarding predicted vs. actual future happiness). Thus, I think it’s perfectly rational to aim to have (or not have) a child, with the decision a more-or-less rational calculation based on extrapolation from the experiences of older people, similar to oneself, who’ve faced the same decision earlier in their lives.

Here’s how I’d defend Paul and Healy from this objection.

Suppose you had a lot of friends who’d been bitten by vampires and transformed into immortal soulless monsters.  And when you meet up with these guys they’re always going on and on about how awesome it is being a vampire:  “I’m totally glad I became undead, I’d never go back to being human, are you kidding me?  Now I’m superstrong, I’m immortal, I have this great group of vampires I run with, I feel like I really know what it’s all about now in a way I didn’t get before.  Life has meaning, life has purpose.  I can’t really explain it, you just gotta do it.”  And you know, you sort of wish they’d be a little less rah-rah about it, like, do you have to post a picture on Facebook of every person you kill and eat?  You’re a vampire, that’s what you do, I get it!  But at the same time you can’t help starting to wonder whether they’re on to something.

AND YET:

I don’t think it’s actually good decision-making to say:  people similar to me became vampires and prefer that to their former lives as humans, so I should become a vampire too.  Because the vampire is not the same being as the human who used to occupy that body.  Who cares whether vampires like being vampires better than they like being human?  What matters is what I prefer, not what the vampiric version of me would prefer.  And I, a human, prefer not to be a vampire.

As for me, I’m a parent, and I don’t think that my identity underwent a radical transformation.  I’m the same person I was, but with two kids.  So when I tell friends it’s my experience that having kids is pretty worthwhile, I’m not saying that from across an unbridgeable perceptual divide — I’m saying that I am still similar to you, and I like having kids, so you might too.  Paul and Healy’s argument doesn’t refer to my case at all:  they’re just saying that if parents are about as different from non-parents as vampires are from humans, then there’s a real difficulty in deciding whether to have children based on parents’ testimonies, however sincere.

(Remark:  Invasion of the Body Snatchers is sort of about the question Paul and Healy raise.  Many have understood the original movie as referring to Communism, but it might be interesting to go back and watch it as a movie about childbearing.  It is, after all, about gross slimy little creatures that grow in the dark and sustain themselves on your body.  And then the new being known as “you” goes around trying to convince others that the experience is really worth it!)

Update:  Kieran points out that the reference to “body-snatching” is already present in their original post — I must have read this, forgotten it, then thought I’d come up with it as an apposite example myself….


More on the end of history: what is a rational prediction?

It’s scrolled off the bottom of the page now, but there’s an amazing comment thread going on under my post on “The End of History Illusion,” the Science paper that got its feet caught in a subtle but critical statistical error.

Commenter Deinst has been especially good, digging into the paper’s dataset (kudos to the authors for making it public!) and finding further reasons to question its conclusions.  In this comment, he makes the following observation:  Quoidbach et al. believe there’s a general trend to underestimate future changes in “favorites”; they test this by studying people’s predictions about their favorite movies, foods, music, vacations, hobbies, and best friends, averaging over the six domains, and finding a slightly negative bias.  What Deinst noticed is that the negative bias is almost entirely driven by people’s unwillingness to predict that they might change their best friend.  On four of the six dimensions, respondents predicted more change than actually occurred.  That sounds much more like “people assign positive moral value to loyalty to friends” than “people have a tendency across domains to underestimate change.”
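
To see concretely how an average can be driven by a single domain, here’s a toy illustration with made-up per-domain biases mimicking the pattern Deinst describes (these are not the paper’s numbers):

```python
# Hypothetical biases (mean predicted change minus mean actual change) for
# each of the six domains; the numbers are invented for illustration only.
biases = {
    "movies":   +0.03,
    "food":     +0.02,
    "music":    +0.04,
    "vacation": +0.01,
    "hobbies":  -0.01,
    "friends":  -0.15,  # big reluctance to predict a new best friend
}

average_bias = sum(biases.values()) / len(biases)
print(f"average bias: {average_bias:+.3f}")
# -0.010: negative overall, even though four of the six domains show the
# opposite sign, all because of the single best-friend domain
```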

But here I want to complicate a bit what I wrote in the post.  Neither Quoidbach’s paper nor my post directly addresses the question:  what do we mean by a “rational prediction?”  Precisely:  if there is an outcome which, given the knowledge I have, is a random variable Y, what do I do when asked to “predict” the value of Y?  In my post I took the “rational” answer to be E(Y).  But this is not the only option.  You might think of a rational person as one who makes the prediction most likely to be correct, i.e. the modal value of Y.  Or you might, as Deinst suggests, think that rational people “run a simulation,” taking a random draw from Y and reporting that as the prediction.
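
To make the three options concrete, here’s a minimal sketch (mine, not the paper’s; the skewed two-point distribution is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical skewed outcome: Y = 0 with probability 0.9, Y = 10 otherwise.
values = np.array([0.0, 10.0])
probs = np.array([0.9, 0.1])

mean_prediction = values @ probs                  # E(Y) = 1.0
modal_prediction = values[np.argmax(probs)]       # most likely value: 0.0
sampled_prediction = rng.choice(values, p=probs)  # one simulated draw: 0 or 10

print(mean_prediction, modal_prediction, sampled_prediction)
```

The three answers can be wildly different: the mean-predictor says 1, the mode-predictor says 0, and the simulator says 0 most of the time but occasionally 10.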

Now suppose people do that last thing, exactly on the nose.  Say X is my level of extraversion now, Y is my level of extraversion in 10 years, and Z is my prediction for the value of Y.  In the model described in the first post, the value of Z depends only on the value of X; if X=a, it is E(Y|X=a).  But in the “run a simulation” model, the joint distribution of X and Z is exactly the same as the joint distribution of X and Y; in particular, E(|Z-X|) and E(|Y-X|) agree.
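
The paper doesn’t specify a model, but here’s a quick numerical check of that claim under an assumed bivariate Gaussian model, with a made-up 10-year stability rho = 0.8:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 500_000, 0.8

# X = extraversion now; Y = extraversion in 10 years, correlated with X.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Conditional-mean predictor: Z = E(Y | X) = rho * X.
z_cond = rho * x

# "Run a simulation" predictor: a fresh draw from the distribution of Y
# given X, so (X, Z) has the same joint distribution as (X, Y).
z_sim = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

print(np.mean(np.abs(y - x)))       # actual change, about 0.50
print(np.mean(np.abs(z_sim - x)))   # one-sample prediction: also about 0.50
print(np.mean(np.abs(z_cond - x)))  # conditional-mean prediction: about 0.16
```

The one-sample predictor reproduces the actual amount of change on the nose, while the conditional-mean predictor drastically underreports it; neither population harbors any illusion about its future.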

I hasten to emphasize that there’s no evidence Quoidbach et al. have this model of prediction in mind, but it would give some backing to the idea that, absent an “end of history bias,” you could imagine the absolute difference in their predictor condition matching the absolute difference in the reporter condition.

There’s some evidence that people actually do use small samples, or even just one sample, to predict variables with unknown distributions, and moreover that doing so can actually maximize utility, under some hypotheses on the cognitive cost of carrying out a more fully Bayesian estimate.

Does that mean I think Quoidbach’s inference is OK?  Nope — unfortunately, it stays wrong.

It seems very doubtful that we can count on people hewing exactly to the one-sample model.

Example:  suppose one in twenty people radically changes their level of extraversion in a 10-year interval.  What happens if you ask people to predict whether they themselves are going to experience such a change in the next 10 years?  Under the one-sample model, 5% of people would say “yes.”  Is this what would actually happen?  I don’t know.  Is it rational?  Certainly it fails to maximize the likelihood of being right.  In a population of fully rational Bayesians, everyone would recognize shifts like this as events with probability less than 50%, and everyone would say “no” to this question.  Quoidbach et al. would categorize this result as evidence for an “end of history illusion.”  I would not.
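
Here’s that scenario as a simulation (the one-in-twenty rate and the yes/no framing are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_change = 100_000, 0.05

# 5% of people really will radically change in the next 10 years.
actually_change = rng.random(n) < p_change

# One-sample predictors: each person reports one draw from the (correct!)
# predictive distribution, so about 5% answer "yes".
one_sample_yes = rng.random(n) < p_change

# Likelihood-maximizing Bayesians: a radical change has probability under
# 50%, so every single person answers "no".
modal_yes = np.zeros(n, dtype=bool)

print(actually_change.mean())  # about 0.05
print(one_sample_yes.mean())   # about 0.05: matches, despite no foresight
print(modal_yes.mean())        # 0.0: reads as an "end of history illusion"
```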

Now we’re going to hear from my inner Andrew Gelman.  (Don’t you have one?  They’re great!)  I think the real problem with Quoidbach et al’s analysis is that they think their job is to falsify the null hypothesis.  This makes sense in a classical situation like a randomized clinical trial.  Your null hypothesis is that the drug has no effect.  And your operationalization of the null hypothesis — the thing you literally measure — is that the probability distribution on “outcome for patients who get the drug” is the same as the one on “outcome for patients who don’t get the drug.”  That’s reasonable!  If the drug isn’t doing anything, and if we did our job randomizing, it seems pretty safe to assume those distributions are the same.

What’s the null hypothesis in the “end of history” paper?  It’s that people predict the extent of personality change in an unbiased way, neither underpredicting nor overpredicting it.

But the operationalization is that the absolute difference of predictions, |Z-X|, is drawn from the same distribution as the difference of actual outcomes, |Y-X|, or at least that these distributions have the same means.  As we’ve seen, even without any “end of history illusion”, there’s no good reason for this version of the null hypothesis to be true.  Indeed, we have pretty good reason to believe it’s not true.  A rejection of this null hypothesis tells us nothing about whether there’s an end of history illusion.  It’s not clear to me it tells you anything at all.


Was Russian election turnout too non-Gaussian to be real?

We’ve talked before about attempts to prove election fraud by mathematical means.  This time the election in question is in Russia, where angry protesters marched in the streets with placards displaying the normal distribution.  Why?  Because the turnout figures look really weird.  The higher the proportion of the vote Vladimir Putin’s party received in a district, the higher the turnout; almost as if a more ordinary-looking distribution were being overlaid with a thick coating of Putin votes…  Mikhail Simkin, writing in Significance (a pop-stats magazine extremely worth reading), argues there’s no statistical reason to doubt that the election results are legit.  Andrew Gelman is not reassured.
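
For what it’s worth, here’s a toy model of the mechanism the placards are gesturing at (mine, not Simkin’s or Gelman’s, with invented parameters): stuffing ballots for one party in some districts raises those districts’ turnout and that party’s share simultaneously, inducing exactly the suspicious correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_districts = 5_000

# Honest districts: turnout and the favored party's share are independent.
honest_turnout = np.clip(rng.normal(0.55, 0.08, n_districts), 0.0, 1.0)
honest_share = np.clip(rng.normal(0.45, 0.10, n_districts), 0.0, 1.0)

# In 30% of districts, some fraction of the non-voters is converted into
# fake ballots, all of them for the favored party.
stuffed = rng.random(n_districts) < 0.3
stuff_frac = np.where(stuffed, rng.uniform(0.2, 0.8, n_districts), 0.0)
fake = stuff_frac * (1.0 - honest_turnout)

turnout = honest_turnout + fake
share = (honest_share * honest_turnout + fake) / turnout

print(np.corrcoef(turnout, share)[0, 1])  # strongly positive, as observed
```

In the honest districts the scatter of share against turnout is a shapeless blob; the stuffed districts form the telltale streak toward the top-right corner.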


Split-screen blackboard

From Andrew Gelman, an interesting pedagogical suggestion:

The split screen. One of the instructors was using the board in a clean and organized way, and this got me thinking of a new idea (not really new, but new to me) of using the blackboard as a split screen. Divide the board in half with a vertical line. 2 sticks of chalk: the instructor works on the left side of the board, the student on the right. On the top of each half of the split screen is a problem to work out. The two problems are similar but not identical. The instructor works out the solution on the left side while the student uses this as a template to solve the problem on the right.

Has anyone tried anything like this?  It sounds rather elegant to me.
