Holden Karnofsky on the perils of expected utility

I asked a while back how seriously we should take expected utility computations that rely on multiplying very large utilities by very small probabilities. This kind of computation makes me anxious. Holden Karnofsky of GiveWell agrees, arguing that we are constrained by some kind of informal Bayesianness not to place too much weight on such computations, especially when the probability estimate is one that can’t really be quantitatively well-grounded. Should you give fifty bucks to an NGO that does malaria prevention in Africa? Or should you donate it to a group that’s working on ways to deflect asteroids on a collision course with the Earth? The former donation has a substantial probability of helping a single person or family in a reasonably serious way (medium probability of medium utility). The latter donation is attached to the very, very large utility of saving the human race from being wiped out; on the other hand, the probability of achieving this utility is some combination of the chance that a humanity-killing asteroid will be on course to strike the Earth in the near term and the chance that the people asking for your money actually have some prospect of success. You can make your best guess as to the extent to which your fifty dollars decreases the chance of global extinction; and you might find, on this ground, that the expected value of the asteroid contribution is greater than that of the malaria contribution. Karnofsky says you should still go with malaria. I’m inclined to think he’s right. One reason: a strong commitment to expected utility makes you vulnerable to Pascal’s Mugging.
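To make the comparison concrete, here is a toy expected-value computation; every number in it is invented for illustration and is not an estimate of any real probability.

```python
# Toy expected-utility comparison (all numbers invented for illustration).
malaria_p = 0.30    # chance the $50 helps one family in a serious way
malaria_u = 100.0   # utility of that outcome, in arbitrary units

asteroid_p = 1e-10  # chance the $50 tips the odds against extinction
asteroid_u = 1e12   # utility of saving the human race, same units

ev_malaria = malaria_p * malaria_u     # medium probability * medium utility
ev_asteroid = asteroid_p * asteroid_u  # tiny probability * enormous utility

# Naive expected value favors the asteroid donation...
print(ev_asteroid > ev_malaria)  # True
```

The point of the post is precisely that this arithmetic, taken at face value, can mislead when a number like `asteroid_p` has no quantitative grounding.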




5 thoughts on “Holden Karnofsky on the perils of expected utility”

  1. This is completely standard.

    The expected return on an investment is only one consideration that goes into its pricing. The other is the variance (“risk”) of that return. Two investments with the same expected return will not be priced the same if one carries a higher risk than the other: investors will demand a “risk premium” for the riskier one.

    Mutatis mutandis for your charitable-giving example. If the expected benefit is the same, your $50 should go to the charitable cause with the lower variance in the expected outcome.
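    A minimal numerical sketch of the risk-premium point, assuming log utility as the risk-averse utility function (that choice, and the payoffs, are the editor's illustration, not the commenter's):

```python
import math

# Two investments with the same expected payout of 100 (invented numbers).
safe_payout = 100.0
risky = [(0.5, 50.0), (0.5, 150.0)]  # (probability, payout) pairs

# Expected utility of the risky asset under log utility.
eu_risky = sum(p * math.log(x) for p, x in risky)

# Certainty equivalent: the sure payout with the same expected utility.
certainty_equiv = math.exp(eu_risky)  # = sqrt(50 * 150), about 86.6

# The gap is the risk premium a risk-averse investor demands.
risk_premium = safe_payout - certainty_equiv  # about 13.4
print(certainty_equiv, risk_premium)
```

    With the same expected payout, the risky asset is worth roughly 13% less to a log-utility investor; that discount is the risk premium.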

  2. Terence Tao says:

    I think an epsilon of paranoia is useful to regularise these sorts of analyses. Namely, one supposes that there is an adversary out there who is actively trying to lower your expected utility through disinformation (in order to goad you into making poor decisions), but is only able to affect all your available information by an epsilon. One should then adjust one’s computations of expected utility accordingly. In particular, the contribution of any event that you expect to occur with probability less than epsilon should probably be discarded completely.
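    Tao's suggestion can be sketched as a simple truncation rule; the epsilon value and the (probability, utility) encoding here are the editor's framing, not his:

```python
# "Epsilon of paranoia": discard any outcome whose stated probability is
# below epsilon, on the theory that probability estimates that small could
# be entirely the product of adversarial disinformation.
def robust_expected_utility(outcomes, epsilon=1e-6):
    """outcomes: iterable of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes if p >= epsilon)

# A Pascal's-mugging-style gamble: astronomical payoff, negligible probability.
mugging = [(1e-12, 1e15), (1.0 - 1e-12, 0.0)]

print(robust_expected_utility(mugging))  # 0.0 -- the huge payoff is discarded
print(sum(p * u for p, u in mugging))    # about 1000 under naive expected value
```

    The truncated computation is immune to the mugging: any payoff riding on a sub-epsilon probability contributes nothing.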

  3. Just for context, the name “Holden Karnofsky” makes many MetaFilter members’ jaws twitch: http://mssv.net/wiki/index.php/Givewell

    I see that he’s once again (co-)Executive Director, after having been demoted following the astroturfing foofaraw.

  4. Jonah Sinick says:

    Jordan: Holden’s post doesn’t explicitly argue in favor of donating to groups working on malaria prevention in Africa over groups working to deflect asteroids. As you allude to, the uncertainty attached to an expected value estimate plays a crucial role in whether or not it can be trusted.

    The asteroid risk is well modeled; all uncertainty as to the value of an asteroid strike prevention program comes from uncertainty as to whether it would be successfully implemented and uncertainty as to the prospects for human survival & thriving in the absence of an asteroid strike. The uncertainty as to the “true” expected value of the average dollar spent on asteroid strike prevention is greater than the uncertainty attached to the “true” expected value of the average dollar spent on malaria prevention efforts. Still, one needs to weigh this uncertainty against the stakes involved.

    Jacques: Holden’s point is different from your own; he’s not arguing against risk aversion in principle. The usual grounds for risk aversion (say, in the domain of personal investment) is diminishing marginal utility at the individual level. But for philanthropic efforts it’s often the case that diminishing marginal utility isn’t relevant. See Alan Dawrst’s essay titled “The Case for Risky Investments”: http://www.utilitarian-essays.com/risky-investments.html .

    Terry: Certainly there’s something to your point. Further supporting your position is the existence of incentive effects: allowing oneself to be influenced by hypothetical adversaries incentivizes them to do more of the same, and incentivizes others to do likewise. But I think that the applicability of the principle you suggest is context-dependent. For example, in the case of asteroid strike risk there seems to be a sufficiently broad consensus among a sufficiently heterogeneous population that the probability of the risk looks (to me) substantially greater than the probability of the risk being contrived by a hypothetical adversary.

    Graham: I don’t see how your remark adds relevant context. As for the incident you mention, Holden and GiveWell regret it, and Holden apologized. See http://www.givewell.org/about/shortcomings#overaggressiveandinappropriatemarketing . (Disclosure: I’ve volunteered for GiveWell and have discussed the possibility of working for them at some point in the future.)

  5. “The usual grounds for risk aversion (say, in the domain of personal investment) is diminishing marginal utility at the individual level.”

    That is a good point. The “standard” argument for risk-aversion makes two assumptions:

    1. Diminishing marginal utility: du/dr > 0, d^2u/dr^2 < 0.
    2. The probability distribution, p(r), is something like a Gaussian.

    The second assumption clearly doesn't hold in your example(s). As to the first assumption, it's true that the utility function we are seeking to maximize in charitable giving is not the same (individual) utility function that we are seeking to maximize in personal investments.

    I am unconvinced, though, by the argument that you link to, which says that the utility function we are seeking to maximize is strictly linear (du/dr > 0, d^2u/dr^2 = 0).

    I still think that (at least in some circumstances) a case can be made for risk-aversion in charitable giving. But I concede that it’s not a slam-dunk.
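    Both assumptions can be probed with a toy computation (all numbers invented by the editor). A one-in-a-million shot at a huge payoff and a sure thing have the same expected value, so a strictly linear utility ranks them as a tie, while even a mildly concave utility like the square root crushes the longshot. Note also that the two-point longshot distribution is about as far from Gaussian as one can get.

```python
import math

# (probability, payout) pairs, both with expected value 100.
sure = [(1.0, 100.0)]                        # $100 for certain
longshot = [(1e-6, 1e8), (1.0 - 1e-6, 0.0)]  # one-in-a-million at $100M

def expected_utility(dist, u):
    return sum(p * u(x) for p, x in dist)

linear = lambda x: x

# Linear utility: a tie, since only the expected value matters.
print(expected_utility(sure, linear), expected_utility(longshot, linear))

# Concave (sqrt) utility: 10.0 for the sure thing vs about 0.01 for the
# longshot -- the longshot collapses.
print(expected_utility(sure, math.sqrt), expected_utility(longshot, math.sqrt))
```

    Whether the sqrt curvature is the right one for charitable giving is exactly the question under debate; the sketch only shows how much the answer depends on it.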
