Does AI Safety (and the Effective Altruist Technocracy) Need More of a Grassroots?

The Future of Life Institute released a letter today to the UN's Convention on Certain Conventional Weapons (CCW) conveying concerns about lethal autonomous weapons (signed by Elon Musk and covered in The Washington Post and elsewhere). The concerns are grave:
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.
I used to be unduly dismissive of far future concerns (as did many other EAs), but I've been persuaded by books like Superintelligence that getting artificial general intelligence right is one of the most pressing global problems. Given the danger an AI smarter than humans poses, preventing an AI from having lethal weapons at its disposal seems like a really, really big deal.

If that's the case, then I have a knot in my stomach about the circumstances surrounding the UN's decision-making on the matter. A ban on "killer robots" (not explicitly called for in the letter but something the WaPo and others took as implied) is not an easy policy for a government to stomach. What happens if the UN rejects the proposal and adopts an overly weak one that leaves the world as unprepared for killer AI as it was for nuclear weapons?

Since many of the countries signing the convention are at least partially democratic, I wonder if public pressure is part of the answer. Even non-democratic governments can be moved by public pressure. I've previously made the case, and I believe it still stands for pressing causes, that collective action is an effective way to create change. Is there a need for a grassroots movement on this issue?

I know many people who have studied this more than I have disagree, but there needs to be a way to translate expert opinion and knowledge into policy. How do we do this, and what is the case for or against a public movement being part of the answer?

Comments

  1. Potential disadvantages of focusing on a public movement, rather than other activities such as targeted outreach or research, include:
    - The risk of an antagonistic relationship with the broader AI field (http://effective-altruism.com/ea/129/two_strange_things_about_ai_safety_policy/8im, http://gcrinstitute.org/papers/16-1.pdf).
    - The fact that, relative to arguments for other causes, the AI alignment problem is harder to convey accurately to someone without existing knowledge of the area.
    - The relative youth of the field, which means there is more room to reduce uncertainty through research.

    Replies
    1. Thanks for the reply!

      On (1), I don't find any of the answers in that thread persuasive when it comes to more outreach on AI. There seem to be good reasons not to argue for slowing down AI development, but I don't buy that gathering more public and official pressure for AI safety is going to have net negative effects even if it angers some AI researchers.
      On (2), I get that, but I think we may have no way around it. We have to have conversations about this, and if it's really as big an issue as it seems, those conversations may at least somewhat involve the public. The public is getting more intelligent, particularly in its ability to understand mathematical and logical claims, and I think over time it may be possible to win that one. If anything, we may have to invest more in outreach if this is a difficult issue to convey.
      (3) strikes me as plausible. I would guess that on issues where there's less uncertainty, it might make sense to start building up a movement, or at least to support groups that seem to do good work, like the Campaign to Stop Killer Robots, but it could make sense to hold off on a mass movement for friendly AGI in general.



