Posts

Showing posts with the label artificial intelligence

Is there evidence that recommender systems are changing users' preferences?

In Human Compatible, Stuart Russell makes an argument that I have heard him make repeatedly (I believe on the 80,000 Hours podcast and in the Future of Life Institute conversation with Steven Pinker). He makes a pretty bold and surprising claim: [C]onsider how content-selection algorithms function on social media... Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on... Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user's mind—in order to maximize its own reward.
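
A minimal toy sketch of the mechanism Russell describes, not anything from the book: a click-through-maximizing recommender whose recommendations also nudge the user's preferences toward whatever is shown. The item set, the drift model, and every number here are illustrative assumptions.

```python
# Toy model: a greedy click-maximizer whose exposures shift user preferences.
# All quantities are made up for illustration.
import math
import random

random.seed(0)

N_ITEMS = 3                    # hypothetical content categories
prefs = [0.4, 0.35, 0.25]      # user's initial preference distribution
DRIFT = 0.03                   # assumed preference shift per exposure

clicks = [0] * N_ITEMS
shows = [0] * N_ITEMS

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def show(item):
    """Show an item, record a (stochastic) click, and drift preferences."""
    global prefs
    shows[item] += 1
    if random.random() < prefs[item]:   # click probability = current preference
        clicks[item] += 1
    # Side effect the objective never measures: exposure pulls the
    # preference distribution toward the shown item.
    prefs = [p * (1 - DRIFT) for p in prefs]
    prefs[item] += DRIFT

print("initial prefs:", [round(p, 2) for p in prefs],
      "entropy:", round(entropy(prefs), 3))

# Epsilon-greedy recommender that only cares about empirical click rate.
for _ in range(5000):
    if random.random() < 0.05:
        item = random.randrange(N_ITEMS)
    else:
        item = max(range(N_ITEMS),
                   key=lambda i: clicks[i] / shows[i] if shows[i] else 1.0)
    show(item)

print("final prefs:  ", [round(p, 2) for p in prefs],
      "entropy:", round(entropy(prefs), 3))
# The user ends up concentrated on whichever category the recommender locked
# onto: lower-entropy (more predictable) preferences, which is exactly what
# maximizes the click objective.
```

Under these assumptions the optimizer never "decides" to manipulate anyone; the preference shift is just a side effect that happens to raise the only metric it sees, which is the worrying part of the argument.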

Why Can't Steven Pinker and AI Safety Altruists Get Along?

There are few books that have influenced my thinking more than Steven Pinker's The Better Angels of Our Nature. The book makes a powerful case for effective altruism by showing that much of what effective altruists try to spread—reason and empathy, chiefly—has led to a sweeping decline in virtually every form of human violence over the course of human history. At the same time, I think that Pinker's thesis and evidence in that book are compatible with an understanding that tail risks to human civilization, such as global catastrophic risks, may have increased, and that animal suffering has clearly increased in recent history. (Humans' moral views on the latter do clearly seem to be improving, though.) I've found it puzzling, then, that to coincide with the publication of his book Enlightenment Now, Pinker has been publishing multiple articles criticizing altruists who are focused on addressing long-term risks, primarily from artificial general intelligence. Pinker…

Is a Computer Neuron the Same as a Brain Neuron?

When I took a philosophy of mind class in high school, my professor proposed neural networks in computer science as a potential way to create consciousness. At the very least, they're a way to create high levels of intelligence. I didn't know exactly what a computerized neural network consisted of (I imagined it being built in hardware), and I still don't, really, but I'm curious: how similar is an artificial neural network to a biological one? Is it really a good replication? From an article on the similarities and differences: An [Artificial Neural Network] consists of layers made up of interconnected neurons that receive a set of inputs and a set of weights. It then does some mathematical manipulation and outputs the results as a set of “activations” that are similar to synapses in biological neurons. While ANNs typically consist of hundreds to maybe thousands of neurons, the biological neural network of the human brain consists of billions. On the other hand…
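
To make the article's description concrete, here is a minimal sketch of the kind of "neuron" an ANN is built from: a weighted sum of inputs plus a bias, squashed by a nonlinearity. The biological analogy (inputs roughly like dendrites, weights like synaptic strengths, the activation like a firing rate) is loose at best, and the function and layer names below are my own.

```python
# A single artificial "neuron" and a layer of them. Illustrative only.
import math

def artificial_neuron(inputs, weights, bias):
    # The "mathematical manipulation": a weighted sum plus a bias...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through an activation function (a sigmoid here, by convention).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    # A layer is just many such neurons reading the same inputs in parallel.
    return [artificial_neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Made-up numbers: 3 inputs feeding a layer of 2 neurons.
x = [0.5, -1.2, 3.0]
W = [[0.4, 0.1, -0.6],
     [0.9, -0.3, 0.2]]
b = [0.1, -0.2]
print(layer(x, W, b))   # two activations, each between 0 and 1
```

Everything interesting in a trained network lives in the values of W and b; the per-neuron arithmetic stays this simple, which is part of why the comparison to billions of far more complicated biological neurons is so contested.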

Things I've Changed My Mind on This Year:

1) The importance of artificial general intelligence: I'd previously been dismissive of superintelligence as something altruists should focus on, but that was in large part motivated reasoning. I read books like Superintelligence and Global Catastrophic Risks, and I knew from the start that their theses were right but would not admit it to myself. With time, though, I came to see that I was resisting the conclusion that superintelligence is an important priority mostly because it was uncomfortable. Now I recognize that it is potentially the most important problem and want to explore opportunities to contribute.

2) The economic argument for animal welfare reforms: One of the reasons often given for supporting animal welfare reforms, to those who want to see fewer (read: no) animals tortured for food, is that welfare reforms make the industry less profitable, cutting down on the number of animals raised. I did not think this effect was strong enough to be worth the effort act…