I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities. This kind of computation makes me anxious. Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianness not to place too much weight on such computations, especially when the probability estimate is one that can’t really be quantitatively well-grounded.

Should you give fifty bucks to an NGO that does malaria prevention in Africa? Or should you donate it to a group that’s working on ways to deflect asteroids on a collision course with the Earth? The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility). The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the Earth in the near term and the chance that the people asking for your money actually have some prospect of success. You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction; and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution.

Karnofsky says you should still go with malaria. I’m inclined to think he’s right. One reason: a strong commitment to expected utility makes you vulnerable to Pascal’s Mugging.
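The arithmetic behind the comparison is easy to sketch. Every number below is invented purely for illustration (none comes from GiveWell or any real asteroid-risk estimate); the point is only that a small enough probability times a large enough utility can dominate on paper:

```python
# Toy expected-utility comparison. All probabilities and utilities here
# are made-up placeholders, chosen only to show the shape of the argument.

def expected_utility(probability: float, utility: float) -> float:
    """Expected utility of a single outcome: probability times payoff."""
    return probability * utility

# Malaria donation: medium probability of a medium-sized benefit.
malaria_ev = expected_utility(0.5, 100)  # hypothetical: helps one family

# Asteroid donation: tiny probability of an astronomically large benefit.
# The probability is a product of (chance a killer asteroid is coming)
# and (chance the group succeeds and your $50 tips the balance).
asteroid_ev = expected_utility(1e-12, 1e16)  # hypothetical magnitudes

print(malaria_ev)   # 50.0
print(asteroid_ev)  # 10000.0 -- the speculative branch wins on paper
```

With these (fabricated) inputs the asteroid donation looks two hundred times better, which is exactly the kind of conclusion the informal-Bayesian constraint is meant to resist.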