Why Can't Steven Pinker and AI Safety Altruists Get Along?

There are few books that have influenced my thinking more than Steven Pinker's The Better Angels of Our Nature. The book makes a powerful case for effective altruism by showing that much of what effective altruists try to spread—reason and empathy, chiefly—has led to a sweeping decline in virtually every form of human violence over the course of history. At the same time, I think Pinker's thesis and evidence in that book are compatible with the view that catastrophic tail risks to human civilization may have increased, and that animal suffering has clearly increased in recent history. (Humans' moral views on the latter do clearly seem to be improving, though.)

I've found it puzzling, then, that to coincide with the publication of his book Enlightenment Now, Pinker has published multiple articles criticizing altruists who focus on addressing long-term risks, primarily from artificial general intelligence. Pinker rejects the view at the heart of the AI alignment problem: that there is a significant risk that artificial intelligence will produce catastrophic harm. I've found the criticisms surprising in part because I don't see how a small number of people focusing on the alignment problem can be a serious problem. It's been dispiriting, in turn, to see AI safety advocates turning against Pinker's work, which I think has threads that support any effective altruist's efforts.

In an op-ed last week, Pinker laid out his case for why focusing on the problem–or "moaning about doom," in his literary flourish–is harmful. I think a closer examination suggests that his views and those of AI safety researchers need not be so far apart.

First, Pinker warns, "But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic." He cites the Cold War nuclear arms race, the Iraq war, and the maintenance of nuclear weapons as a deterrent to biological weapons and cyberattacks. If we talk too much about catastrophe, we risk creating it.

The first two examples here are ill-fitting because Pinker himself thinks that nuclear weapons are a significant problem, and presumably he would not discourage people from talking about nuclear weapons simply because doing so risks adverse consequences. Clearly, talk about nuclear weapons can be done in a way that reduces rather than exacerbates risk. Fear-mongering about a specific nuclear actor may lead to an arms race; talk about incremental disarmament should not. Why can't a similar rhetorical distinction apply to AI?

The second caution about doom-mongering is that "humanity has a finite budget of resources, brainpower and anxiety... Cognitive psychologists have shown that people are poor at assessing probabilities, especially small ones, and instead play out scenarios in their mind's eye. If two scenarios are equally imaginable, they may be considered equally probable, and people will worry about the genuine hazard no more than about the science-fiction plot line."

Here the worry is about the conjunction fallacy, wherein people judge a highly specific, and therefore unlikely, scenario to be more probable than it actually is. Catastrophic risk scholars are keenly aware of this fallacy and make systematic efforts to avoid it and other cognitive biases. From what I have seen, the AI safety community is investing serious effort in following the science of prediction, including Philip Tetlock's Good Judgment Project and Robin Hanson's prediction markets. That's not to say anyone is immune to cognitive fallacies, but it takes more work than this to make the case against AI safety advocacy. Most importantly, there is a careful and tempered case for AI safety that does not, in my view, rely on cognitive biases (see the Open Philanthropy Project's write-up, for instance).

I agree with the worry about resources, but it ultimately begs the question. Of course we should spend resources on AI risk if and only if it is a serious risk. The mere fact that there have been many mistaken predictions of the future in the past can't lead us to write off all such worries, and there is a case for worrying about AI that recognizes both AI's potential for humanity and the small probability of the risk. That case is strong enough that a reasonable person acquainted with it would probably want at least some resources, even if modest, going to the problem.

Pinker's third argument involves the "cumulative psychological effects of the drumbeat of doom," which will lead people to conclude that we should, "Eat, drink and be merry, for tomorrow we die!" Humanity will neglect near-term problems while obsessing over risks so small it is impossible to know how large they are.

I share Pinker's worry here to some degree, having seen some in the effective altruist community neglect near-term goods, such as not harming animals or observing common manners, purportedly in order to maximize long-term productivity. For the most part that behavior is rare, or not as motivated by long-term worries as people claim, and I would avoid tarring too many with the same brush. Still, I do think AI safety advocates could be a bit more conscious of this risk.

Ultimately, I think Pinker misses that most AI safety researchers—aside from Elon Musk—increasingly avoid hyperbole ("moaning"). A few years ago, the common argument for AI safety invoked the massive negative consequences misaligned AI could have in order to justify concern about a vanishingly small risk, a move sometimes described with philosopher Nick Bostrom's term "Pascal's mugging." Now, though, that sort of argument is much rarer. Instead, books like Superintelligence argue not only that the consequences would be large, but also that the chance of an AI disaster is not that small. Advocates emphasize that AI will likely be a very good thing for humanity (see 80,000 Hours' profile, for example), but that we need to make sure it is that and not a bad thing.

These sorts of attitudes, I think, are less likely to lead to most of the bad consequences Pinker worries about. (Though I do think AI researchers and advocates could do a better job of making that clear—see my note above about respecting near-term norms.) Tellingly for me, when I suggested this past summer that AI safety researchers should spread more awareness of the risk, I received significant blowback. The AI safety community was clear that hyperbole about AI could be very, very bad and that doom-mongering was the last thing it wanted.

There is an argument to be had about the magnitude of the AI risk, but AI safety researchers and advocates are not, in my view, "moaning about doom." Their worldview is instead largely compatible with Pinker's: humanity has made tremendous progress and will likely continue to, thanks in part to AI, so let's minimize the—small—chance that we screw up.

Comments

  1. I think this coincides with Dunja's article: effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy

    1. I'm not sure what the connection is. It seems like the topic there is pretty different, no?

  2. You focused on his arguments from the article in 'The Globe and Mail', but reading Pinker's op-ed in Pop Sci, his arguments for why not to expect advanced AI to be catastrophic aren't as rudimentary as the objections to AI safety concerns from other public intellectuals. It would be interesting, then, to see Pinker's perspective reconciled with that of AI safety advocates, because I think we could learn a lot about how to flesh out the way we communicate and develop ideas in the AI safety field.

  3. I'm surprised you've seen AI safety advocates turning on Pinker's work. Is this just his recent op-eds and 'Enlightenment Now', or are they criticizing Pinker's work more generally? I ask because it's my impression that members of the EA and rationality communities are typically big fans of Pinker's evidence-based, humanistic approach to reflecting on society.

    1. It's mainly his recent stuff, but I've seen it extended to criticisms of his work more generally. It's mostly in Facebook statuses and the like, so it's hard to compile. I would say that there are rationalists who are less optimistic than Pinker, and I think some AI safety advocates hold to a less optimistic view and think such a view is more fitting for someone concerned with AI safety.

