Why Algorithms Alone Can't Make the Internet Safe
— October 4, 2018
By Michael Posner
But now, as the heat rises, the Internet platforms have begun to acknowledge a measure of responsibility for the deleterious content that sits on their sites, a positive first step. In responding, however, their first instinct is to revert to form: assuming that their engineers will create improved tools using artificial intelligence that will deal effectively with these challenges. Testifying about hate speech online before Congress in April, Facebook CEO Mark Zuckerberg reflected Silicon Valley’s reverence for machine-based solutions. “Over a five- to 10-year period,” he said, “we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”
In the coming days, researchers at Aalto University in Finland, along with counterparts at the University of Padua in Italy, will present a new study at a workshop on Artificial Intelligence and Security. As part of their research, they successfully evaded seven different algorithms designed to block hate speech. They concluded that all of the existing algorithms used to detect hate speech are vulnerable to easy manipulation, contributing to, rather than solving, the problem. Their work is part of a project called Deception Detection via Text Analysis.
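To see why such evasion can be easy, consider a toy sketch (not code from the study, and far simpler than the seven systems it tested): a hypothetical keyword-based filter defeated by trivial character-level edits of the kind evasion attacks exploit, such as inserted spaces, leetspeak substitutions, and merged words.

```python
# Illustrative sketch only: a naive keyword filter and three trivial
# perturbations that slip past it. "hateword" is a hypothetical
# placeholder for a flagged term.

BLOCKLIST = {"hateword"}

def is_flagged(text: str) -> bool:
    """Flag text if any whitespace-separated token matches the blocklist."""
    tokens = text.lower().split()
    return any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

original = "this is a hateword example"
evasions = [
    "this is a hate word example",   # inserted space splits the token
    "this is a h4teword example",    # character substitution
    "this is a hatewordexample",     # removed space merges tokens
]

print(is_flagged(original))                      # True
print([is_flagged(t) for t in evasions])         # [False, False, False]
```

Machine-learning classifiers are more sophisticated than this keyword check, but the study's point is that they remain brittle against comparably simple surface edits that leave the message perfectly legible to a human reader.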
Read the full Forbes article.
Michael Posner is a Professor of Business and Society and Director of the NYU Stern Center for Business and Human Rights.