Dealing With Disinformation: Facebook and YouTube Need to Take Down Provably False "News"
— March 14, 2019
By Michael Posner
Perhaps what is most telling is what his statement did not discuss—namely, whether Facebook would change its core business model. Today, the company draws more than two billion of us worldwide to its site, collects our personal data, and then uses this information to generate billions of dollars in advertising revenue each year. If we assume that this advertising model will not soon disappear, then Facebook and the other leading Internet platforms, Twitter and Google’s YouTube, will continue to face the serious challenges posed by those who seek to spread deliberately false information on their sites, especially in the political realm. These platforms sell user attention, or engagement, to those who advertise on their sites. They know that users are drawn to emotionally evocative content, especially content that is sensational and negative. Much of the disinformation online fits that bill and therefore is promoted by the platforms’ algorithms. Some of this disinformation comes from foreign actors like the Russian government. But more of it comes from a wide range of domestic sources, whose political orientation runs the gamut from right to left.
Facebook and these other companies have a mixed record in responding to disinformation. On the plus side, they have stepped up their efforts to remove postings by the Russians, often because their source is disguised and because they disrupt our democratic discourse. They also are removing certain categories of harmful content, such as harassment and hate speech, whether from foreign or domestic sources. In addition, the companies have begun to take useful but still insufficient steps to combat deliberate disinformation, or “fake news.”
Read the full Forbes article.
Michael Posner is a Professor of Business and Society and Director of the NYU Stern Center for Business and Human Rights.