A thought experiment touched off by Cathy’s latest post on value-added modeling.
Suppose I’m in charge of a big financial firm and I made every trader who worked for me fill out an NCAA tournament bracket. Then, every year, I fired the people whose brackets ended up in the lowest quintile.
This makes sense, right? Successful prediction of college basketball games involves a lot of the same skills you want traders to have: an ability to aggregate information about uncertain outcomes, fluency in quantitative reasoning, a certain degree of strategic thinking (what choices do you make if your objective is to minimize the probability that your bracket is in the bottom 20%? what if your fellow traders are also following the same strategy…?) You might even do a study that finds that firms whose traders did better at bracket prediction actually ended up with better returns 5 years later. Even if the effect is small, that might add up to a lot of money. Yes, the measure isn’t perfect, but why wouldn’t I want to fire the people who, on average, are likely to make less money for my firm?
And yet we wouldn’t do this, right? Just because we think it would be obnoxious to fire people based on a measure predominantly not under their control. At least we think this when it comes to high-paid financial professionals. Somehow, when it comes to schoolteachers, we think about it differently.
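To see what "predominantly not under their control" means in practice, here is a minimal simulation sketch. All the numbers are made up for illustration: it assumes the bracket score is 10% skill and 90% luck, and asks how many of the traders fired from the bottom quintile were actually above-median at their real jobs.

```python
import random
import statistics

random.seed(0)

N = 10_000       # traders in the firm (hypothetical)
SIGNAL = 0.1     # assumed: bracket score is 10% skill, 90% luck

traders = []
for _ in range(N):
    skill = random.gauss(0, 1)                     # true trading ability
    bracket = SIGNAL * skill + random.gauss(0, 1)  # noisy bracket score
    traders.append((bracket, skill))

traders.sort()                  # ascending by bracket score
cutoff = N // 5                 # bottom quintile gets fired
fired = [s for _, s in traders[:cutoff]]
kept = [s for _, s in traders[cutoff:]]

# The fired group really is slightly worse on average --
# the measure is not pure noise -- but only slightly.
print(f"mean skill, fired: {statistics.mean(fired):+.3f}")
print(f"mean skill, kept:  {statistics.mean(kept):+.3f}")

# Fraction of fired traders whose true skill was above the median:
median_skill = statistics.median(s for _, s in traders)
wrongly_fired = sum(s > median_skill for s in fired) / cutoff
print(f"fired despite above-median skill: {wrongly_fired:.1%}")
```

With a signal this weak, nearly half the traders you fire are above-median at the thing you actually care about. The firing policy does improve average skill a little, which is exactly why a study could detect a small positive effect, and exactly why the individual firings are close to a coin flip.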