Expected Utility and the Case Against Strong Longtermism [Technical]

For my readers who are particularly interested in effective altruism and longtermism, Vaden Masrani makes "A Case Against Strong Longtermism":

Mathematicians tend to think of expected values the way they think of the Pythagorean theorem - i.e. as a mathematical identity which can be useful in some circumstances. But within the EA community, expected values are taken very seriously indeed. One reason for this is the link between expected values and decision making, namely that “under some assumptions about rational decision making, people should always pick the project with the highest expected value”. Now, if my assumptions about rational decision making lead to fanaticism, paradoxes, and cluelessness, I might revisit the assumptions.

and

Near the end of Conjectures and Refutations, Popper criticizes the Utopianist attitude of those who claim to be able to see far into the future, who claim to see distant, far-away evils and ideals, and who claim to have knowledge that can only ever come from “our dreams and from the dreams of our poets and prophets”.

I'm glad this conversation is happening. Longtermism is a persuasive and highly demanding view, and for that reason it merits a high level of scrutiny.

Ultimately, I don't find the arguments persuasive. The utopian worry is a good one, and we should absolutely avoid utopianism. The point about expected utility is off base, though. Here's what I had to say:

I'll start with the expected value argument, specifically the note that probabilities here are uncertain and therefore random variables, whereas in traditional EU they're constant. To me, a charitable version of Greaves and MacAskill's argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of probabilities to sink longtermism is for the probabilities to correlate inversely and strongly with the size of the future.) I don't think they'd claim the probabilities are certain.
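To spell that move out (my notation, not Greaves and MacAskill's): write p for the uncertain probability of a vast future and V for its value, and treat both as random. Then

$$\mathbb{E}[pV] = \mathbb{E}[p]\,\mathbb{E}[V] + \operatorname{Cov}(p, V),$$

so making p random rather than fixed leaves the expected value essentially unchanged unless p and V co-vary strongly and negatively, which is exactly the correlation condition in the parenthetical above.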
 
Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not "take expectations" over probabilities in the way I've described. The problem with this is that (a) alternatives to taking expectations over probabilities have been explored in the literature, and they have a lot of undesirable features; and (b) alternatives to taking expectations over probabilities do not necessarily reject longtermism. I'll discuss (b) first, since it introduces the example I'll use for (a).
 
(b) In economics at least, Gilboa and Schmeidler (1989) propose what's probably the best-known alternative to EU when the probabilities are uncertain: maximize expected utility under the prior according to which utility is lowest, a sort of meta-level risk aversion. They show that this decision rule follows from a remarkably weak set of axioms. If you take this approach, it's far from clear you'll reject longtermism: more likely, you end up with a sort of longtermism focused on averting long-term suffering, i.e. on maximizing expected value according to the most pessimistic probabilities (see the toy sketch below). There are a bunch of other approaches, but they tend to have similar flavors. So alternatives to EU may agree on longtermism and just disagree on its flavor.
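As a toy illustration of that flavor (all numbers and option names here are hypothetical, my own construction rather than anything in Gilboa and Schmeidler): suppose we know only bounds on the probability p that a flourishing-focused intervention pays off and on the probability q of a long-term suffering catastrophe. A maxmin chooser evaluates each option under its worst-case prior:

```python
# A minimal sketch of the Gilboa-Schmeidler maxmin rule on a toy longtermist
# choice. All numbers and option names are hypothetical.
V = 1e9                   # scale of long-run value at stake
p_bounds = (1e-6, 1e-3)   # ambiguity: only bounds on P(flourishing bet pays off)
q_bounds = (1e-6, 1e-3)   # ambiguity: only bounds on P(suffering catastrophe)

def expected_utility(action, p, q):
    if action == "do_nothing":
        return -q * V            # exposed to the catastrophe
    if action == "fund_flourishing":
        return p * V - q * V     # upside bet, still exposed to the catastrophe
    if action == "prevent_suffering":
        return 0.0               # averts the catastrophe, forgoes the upside
    raise ValueError(action)

def maxmin_value(action):
    # Worst-case expected utility over the prior set; utility is monotone in
    # p and q, so checking the corner priors suffices.
    return min(expected_utility(action, p, q)
               for p in p_bounds for q in q_bounds)

for action in ("do_nothing", "fund_flourishing", "prevent_suffering"):
    print(action, maxmin_value(action))
# prevent_suffering has the highest worst-case value (0.0, vs. -1000000.0 for
# do_nothing and -999000.0 for fund_flourishing): maxmin picks the
# suffering-focused flavor of longtermism.
```

Under optimistic priors fund_flourishing has by far the highest expected value, but the maxmin rule scores each option at its worst case, so the suffering-prevention option wins. That's the pessimistic longtermism described above.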
 
(a) Moving away from EU leads to a lot of problems. As I'm sure you know given your technical background, EU derives from a really nice set of axioms (the Savage axioms), and things go awry when you leave it. Al-Najjar and Weinstein (2009) offer a persuasive discussion of this (H/T Phil Trammell). For example, non-EU models imply information aversion. Now, a certain sort of information aversion might make sense in the context of longtermism: in line with your Popper quote, it might make sense to avoid information about the feasibility of highly specific future scenarios. But that's not the sort of information non-EU models are averse to. Instead, they imply aversion to information that might shift you toward an option you currently dislike precisely because of its ambiguity.
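To see concretely what that kind of information aversion looks like, here's a minimal numerical sketch using the classic three-color Ellsberg urn with prior-by-prior updating; the specific numbers are my illustration, not Al-Najjar and Weinstein's exact construction. The urn's red proportion is known, the black/yellow split is ambiguous, and a maxmin agent would turn down a free signal about whether the ball is yellow:

```python
import numpy as np

# States: Red, Black, Yellow. P(R) = 1/3 is known; P(B) = b is ambiguous,
# with b in [0, 2/3] and P(Y) = 2/3 - b. (Hypothetical Ellsberg-style setup.)
bs = np.linspace(0.0, 2/3, 1001)
priors = np.column_stack([np.full_like(bs, 1/3), bs, 2/3 - bs])

bet_RY = np.array([1.0, 0.0, 1.0])  # pays 1 on Red or Yellow (ambiguous: worth 1 - b)
bet_BY = np.array([0.0, 1.0, 1.0])  # pays 1 on Black or Yellow (worth 2/3 for sure)

def maxmin(act):
    """Worst-case expected payoff over the prior set (Gilboa-Schmeidler value)."""
    return (priors @ act).min()

print(maxmin(bet_RY), maxmin(bet_BY))  # ~0.333 vs ~0.667: uninformed, take bet_BY

# Free signal: learn whether the ball is Yellow, update every prior, re-optimize.
# If Yellow, both bets pay 1 and the choice is moot. If not Yellow:
post_R = (1/3) / (1/3 + bs)  # P(Red | not Yellow) under each prior: in [1/3, 1]
post_B = bs / (1/3 + bs)     # P(Black | not Yellow) under each prior: in [0, 2/3]
print(post_R.min(), post_B.min())  # 1/3 vs 0: informed, the agent switches to bet_RY

# But "choose bet_RY after the signal" pays exactly like bet_RY state by state,
# so its ex-ante maxmin value is ~0.333 < ~0.667: by its own ex-ante standard,
# the agent is better off refusing the free information.
```

The signal would push the agent toward the ambiguous bet it currently shuns, so the agent prefers not to look. That's a very different, and much less attractive, kind of information avoidance than Popper-style modesty about detailed future scenarios.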

Permalink to my comment here.
