Thursday, September 21, 2017

Locals Thwart New Kansas Tyson Plant—Why Doesn't This Happen More Often?

http://www.kansascity.com/news/politics-government/article171385947.html
In the annals of social movements, one that most clearly achieved its objectives was the wave of U.S. anti-nuclear protests in the 1970s. Across the country, those who lived near nuclear power plants picketed, blockaded, and disrupted construction, including taking strategic advantage of the Three Mile Island incident to effectively end new nuclear power across the U.S. Nuclear power in the U.S. now carries a monumental stigma quite unlike in other developed countries; nuclear power is even a primary source of France's energy. (U.S. policy is likely mistaken here, as nuclear power is relatively safe.)

This week, Kansas offers some inspiration for animal advocates on a model we should consider. Tyson is being forced to back out of a huge new chicken facility after 2,000 out of 5,000 residents of neighboring town Tonganoxie protested last Friday over environmental concerns.

Tyson is scrambling and will likely find somewhere to build not too far from there. But if it had to contend with massive local protests everywhere it went (and everywhere it already operates), that would start to impose a serious cost of doing business. Beyond that, it would dramatize the issue in a public way, much as the controversial and indirect environmentalist tactic of targeting pipelines has. NIMBYism is hardly admirable on its own, but why not try to steer it in a productive direction? Let's start gathering our friends and family to make more Tonganoxies.

Tuesday, September 19, 2017

How Sharp a Turn Did Humans Take in the Industrial Revolution?

http://lukemuehlhauser.com/three-wild-speculations-from-amateur-quantitative-macrohistory/
Luke Muehlhauser at the Open Philanthropy Project has a thought-provoking post arguing that most of human history was roughly the same plodding along in boring conditions until the Industrial Revolution, in which productivity exploded, countering the narrative most people have absorbed from history. You can see a visual illustration of this in the graph on the right.

I think this gets a whole lot right about the way history has gone. It irrefutably gets a lot right in terms of the sheer magnitude of living conditions: the amount of good things multiplied many times over in a short period of time.

https://mathbitsnotebook.com/Algebra1/FunctionGraphs/expgaphtrans3.jpg
As one of the commenters pointed out, there could still be importance to earlier eras of history–potentially as much importance as in post-Industrial Revolution history. The reason is this: if our interest is in relative progress, then one man's flat line is another man's explosion. If human progress followed an exponential function, then when an "explosion" happens depends on the scale. There's an illustration of this on the left.
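To see the point concretely, here is a toy illustration (my own numbers, not from the post): under a pure exponential, every era grows by the same percentage, so each era looks like an explosion on its own scale; only in absolute terms does the recent past dominate.

```python
import math

def progress(t, k=0.01):
    """Toy exponential 'human progress' curve (illustrative only)."""
    return math.exp(k * t)

# Relative growth per year is identical in every era:
early_ratio = progress(101) / progress(100)      # year 100 -> 101
late_ratio = progress(1901) / progress(1900)     # year 1900 -> 1901
print(early_ratio, late_ratio)  # both ~e^0.01; no era is special in relative terms

# But absolute growth in the late era dwarfs the early era,
# which is why the early part of the curve looks like a flat line:
print(progress(1901) - progress(1900))
print(progress(101) - progress(100))
```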

Indeed, the original post contains a graph suggesting that, at the right scale, things started to take off around the Renaissance and Enlightenment:

It turns out some commenters crunched the numbers and found evidence that the course of human history really did change in kind, not just in degree, in the industrial era. Or, if it changed only in degree, the change is sharper than exponential: history follows a "double exponential" function rather than an exponential one.
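The difference in kind is easy to sketch numerically (my own toy parameters, not the commenters' fit): the logarithm of a plain exponential is a straight line, while the logarithm of a double exponential is itself exponential, so it curves upward even on a log plot.

```python
import math

def single_exp(t, k=0.005):
    return math.exp(k * t)

def double_exp(t, c=0.003):
    # grows as e^(e^(c*t)); its logarithm is itself exponential
    return math.exp(math.exp(c * t))

# On a log scale a plain exponential is a straight line...
for t in (0, 500, 1000):
    print(math.log(single_exp(t)))   # 0.0, 2.5, 5.0 -- linear in t

# ...while a double exponential keeps bending upward:
for t in (0, 500, 1000):
    print(math.log(double_exp(t)))   # 1.0, ~4.48, ~20.09 -- exponential in t
```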

One thing that worries me, though, and that makes me think the course of history could be closer to exponential than we think, is the question of how accurate measurement has been–from material goods to war statistics–over the course of human history. Error likely increases in magnitude the further back you go, so an exponential curve could easily appear more horizontal in the past: the growth might have been there but undetectable.

A number of "big history" type books (Guns, Germs, and Steel; Better Angels of Our Nature) do seem to suggest there was steady progress over the course of human history. Maybe I'm just exhibiting disconfirmation bias, or maybe there's reason to be slow to conclude that all of this history is so much flatter than it seems.

Monday, September 11, 2017

What I've Been Reading/Watching/Listening To

Here are some recent things I've been following and would recommend:

Books:
Pillar of Fire - The second part in a fascinating three-part series on the civil rights movement.
Tales of the City - Serialized fiction by Armistead Maupin in the 1970s on countercultural life in San Francisco.

Articles:
The Unilateralist's Curse: The Case for a Principle of Conformity - A philosophy paper that hits on a surprising dilemma and argues for a conclusion most philosophers would not like.
The Resegregation of Jefferson County - A disheartening New York Times Magazine feature on the state of the South.
We need to nationalise Google, Facebook and Amazon. Here’s why - The title speaks for itself, but I think this is a topic that has had surprisingly little discussion relative to its importance.
How bosses are (literally) like dictators - A Vox piece on workplace democracy, or the lack thereof. Another rarely discussed issue with real importance.

Films:
Hacksaw Ridge - Mel Gibson's recent movie follows a Christian pacifist in World War II.
The Big Sick - A charming comedy about an Indian American, his white girlfriend, his immigrant family, and their trials.
In & Out - An uplifting comedy that gets at surprisingly deep truths about coming out of the closet.

Podcasts:
Kieran Greig interviewed by Michael Dello-Iacovo - Animal Charity Evaluators and tough questions in that space.
Dr Dario Amodei On OpenAI And How AI Will Change The World For Good And Ill - This one speaks for itself.

Tuesday, September 5, 2017

Sympathizing with the Christian Dissenter in Hacksaw Ridge

I watched Hacksaw Ridge this weekend, Mel Gibson's movie about a literal Christian soldier during World War II who becomes a medic after completing basic training without picking up a gun. He's a Seventh Day Adventist, which makes him a pacifist (as well as a vegetarian).

I found myself empathizing and sympathizing with him more than I'd expected, including in moments that pertained less to his pacifism than to his religion–a puzzling predicament for me as an atheist Jew. Faith is the opposite of how I try to operate. I try to be skeptical of everything and believe things based on proof (all while knowing that this is unattainable).

Yet once I have arrived at a conclusion, and pending further evidence forcing me to revise my beliefs, I believe strongly in acting: whether it's direct activism as I've done in the past, research, or donating money. Acting requires commitment. Even when the evidence points one way, social norms often point the other way. Those social norms require something akin to faith to overcome. 

Monday, September 4, 2017

Is Instinctive Conformism Actually Rational?

Many of us grow up questioning conformity. Even those who don't go through a rebellious teenage phase get a good dose of anti-conformism in school, thanks to the Enlightenment. It turns out some of the human tendency toward conformity may be rational, and for fairly subtle reasons.

Australia has a large population of wild rabbits from someone acting unilaterally.
I read up last week on the Unilateralist's Curse, the problem covered in a brilliant philosophy paper by Oxford's Nick Bostrom (h/t Buck Shlegeris). The Unilateralist's Curse occurs when a member of a group sharing a common altruistic goal takes an action that hurts the goal because that member mistakenly believes the action to be helpful. If the members each appraise the likelihood of an action being helpful and decide independently whether to take it, the action is more likely to happen than it should be.
Via https://nickbostrom.com/papers/unilateralist.pdf

An example is this: five people have discovered a technology with the potential to cause grave harm and are deciding whether to release it. Even if four of the five decide it is too dangerous, it takes only one person to release the technology. Since people have mistaken judgment, the technology is more likely to be released than it should be, and the larger the group, the more likely it is that someone chooses to release it.
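The arithmetic behind the curse can be sketched in a few lines (a toy model, assuming each member errs independently with the same probability, which is my simplification rather than the paper's full setup):

```python
def p_released(p_mistake, n):
    """Probability that at least one of n independent actors mistakenly
    judges release to be helpful and acts unilaterally."""
    return 1 - (1 - p_mistake) ** n

# Even a small individual error rate compounds quickly with group size:
for n in (1, 5, 20):
    print(n, round(p_released(0.05, n), 2))
# 1 -> 0.05, 5 -> 0.23, 20 -> 0.64
```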

Bostrom recommends we resolve the problem by agreeing to a principle of conforming to groups in situations like the above. This of course goes against a modern tendency to praise defiance of groups and avoid doing something simply because others are.

I find it to be a particularly interesting example of the contrast between thinking of humans as rational agents and thinking of humans as biased agents. Harvard scholar Cass Sunstein, who comes more from a biases perspective, argues that groups make colossally irrational decisions because of humans' tendency toward conformity, which creates groupthink. Sunstein endorses policies to prevent groupthink. Yet here we have a philosopher arguing that for individuals to behave truly rationally, they actually should conform more than they otherwise would.

Maybe, in fact, humans already are doing what Bostrom advises, but unconsciously. If people conform more than they should in a situation of solitary rational deliberation, we may actually conform to an optimal degree in the unilateralist's curse situation. If that's the case, acting consciously by a "principle of conformity" would not make as much sense as Bostrom advises, because it would push us over the optimal degree of conformity.

The optimal degree of conformity is hard to know. Are we all more rational than we've been led to believe?

Wednesday, August 30, 2017

Some Answers about Policy Outreach on Artificial Intelligence

I asked a question last week about whether efforts to ensure that artificial intelligence is developed safely should include public outreach. This goes significantly against the grain of most people working on AI safety, as the predominant view is that all that is useful right now is research, and even outreach to elites should wait. While I'm still not persuaded that public outreach would be harmful, I was moved toward seeing why it might be a bad idea from a few answers I got:

1) On the core issues, the policy asks have yet to be worked out for ensuring safe development of artificial intelligence. Nobody yet knows how we would actually program AI to be safe. We are so far from that point that there is little to say.

2) Regulation could tie the ethical AI developers' hands and let bad actors be the ones who develop AI. This argument closely resembles arguments about other regulations: industries flee the countries with the most regulation and move to less-regulated ones. In most cases I think it's still worth passing the regulation, but it's at least plausible that AI is a case where regulation right now would be bad, especially given (1).

3) Working on AI safety today is very different from working on a risk like climate change because climate change is already happening, and AI safety problems are almost entirely in the future. (There are some today, though.) Working on AI safety today is like working on climate change in 1900.

4) On the specific question of lethal autonomous weapons, it's not clear how harmful these are. A recent post on the effective altruism forum persuaded me that the effect of AI weapons is closer to ambiguous than I'd thought.

Still, I have reservations:

1) It seems there are policy goals that could be achieved in this area. One would be more coordination by the main actors. Another would be regulation on the things that are here today like lethal offensive autonomous weapons, even if a ban may not make sense. Getting the infrastructure in place to deal with these issues could pay off down the road.

2) I don't buy the idea that getting members of the public on board with AI safety would be counterproductive. Sure, members of the public have a worse time understanding and explaining things, but most people are somewhat literate, and scientific literacy is increasing. Polarization does not seem an inevitable result of careful, friendly public outreach–only confrontational outreach. Also, poor explanations and polarization can be outweighed by upsides.

At the end of the day, it does seem clear that this is a conversation to keep having. Outreach directly on the topic of superintelligence may not be helpful, but I still wonder about whether more preparations for the day that superintelligence is near might make sense.

Tuesday, August 29, 2017

Insects Are Going out the Window. How Should We Think About This?

Insect populations are rapidly declining according to scientists (and our cars' windshields):
An amateur German group called the Krefeld Entomological Society has been monitoring insect numbers at 100 nature reserves in Western Europe since the 1980s. Although there were annual fluctuations, they discovered that by 2013 numbers had begun to plummet by nearly 80 per cent.

Most people, particularly animal advocates, likely see this as a huge loss. Environmentalists certainly will, given the effects on many ecosystems.

There's a growing body of literature, however, that suggests a different reaction (for instance, see Simon Knutsson). Much of it is done by lay people, but I hope to be able to study this question academically before long. If insects do feel pleasure and pain, then their lives look pretty lousy. In the vast majority of cases, "being an insect" means being born and promptly starving, being eaten alive, or dying in another horrific way.

It seems too early to take much action on insect suffering (besides research), but it is thought-provoking to wonder whether this trend is instead a merciful one.