Is there evidence that recommender systems are changing users' preferences?

In Human Compatible, Stuart Russell makes an argument that I have heard him make repeatedly (I believe on the 80,000 Hours podcast and in the Future of Life Institute conversation with Steven Pinker). He makes a pretty bold and surprising claim:

[C]onsider how content-selection algorithms function on social media... Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on... Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user's mind—in order to maximize its own reward.
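
To make the incentive Russell describes concrete, here is a minimal toy simulation. This is my own sketch, not Russell's model or any real recommender: the opinion variable u, the drift rate ETA, and the click model are all assumptions chosen purely for illustration. The model assumes a user whose opinion sits in [-1, 1], that extreme users click more often and more predictably, and that exposure to an item drags the user's opinion toward it.

import numpy as np

ETA = 0.05  # assumed per-step drift of the user's opinion toward the shown item

def click_prob(u, x):
    """Chance the user clicks item x: peaked at x == u, with a taller and
    sharper peak for extreme users (|u| near 1), i.e. they click more
    reliably on matching content (an assumption of this toy model)."""
    peak = 0.5 + 0.4 * abs(u)
    sharpness = 1.0 + 4.0 * abs(u)
    return peak * np.exp(-sharpness * (x - u) ** 2)

def run(policy, u0=0.1, horizon=200):
    """Simulate a recommendation policy; return (expected clicks, final opinion)."""
    u, total = u0, 0.0
    for _ in range(horizon):
        x = policy(u)
        total += click_prob(u, x)                     # expected click this step
        u = float(np.clip(u + ETA * (x - u), -1, 1))  # exposure drags opinion toward x
    return total, u

def match_user(u):
    """Myopic policy: show exactly what the user likes right now."""
    return u

def shift_user(u):
    """Preference-shifting policy: keep pushing toward the nearest extreme."""
    return 1.0 if u >= 0 else -1.0

for name, policy in [("match user", match_user), ("shift user", shift_user)]:
    clicks, final_u = run(policy)
    print(f"{name}: expected clicks = {clicks:.0f}, final opinion = {final_u:+.2f}")

Under these assumptions, the preference-shifting policy sacrifices clicks early on while the user's opinion drifts toward the extreme, then more than makes up for it once the user is predictable, ending with more total clicks than the policy that simply serves what the user already likes. That is the shape of the incentive in the quote: a click-maximizing system can do better by changing the user than by catering to them.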