Opinion

What Lawmakers Need to Do to Police Online Content

Today, the neutrality and profit-maximization objectives of social media platforms have turned their precision-targeting algorithms into weapons that pose grave threats to open democracies.
By Vasant Dhar
Platforms with laudable mission statements about making the world a better place and doing no evil now find themselves confronting the dark side of human nature in the connected world they have created. Content on their platforms can be malicious, aimed at covert manipulation or at stirring up dark emotions that trigger violence. Facebook and Twitter recognize this danger and are scrambling to keep bad content off their services, employing armies of human moderators, roughly 20,000 in Facebook’s case, to police what users post.

While such an effort is commendable and may placate lawmakers and the public for now, it won’t work as a long-term solution. Rather, the solution must be algorithmic: implementing “morals as code” will be challenging, but it is the cleanest way to think about how such platforms can be regulated without violating the First Amendment.

Following the Pittsburgh synagogue shooting, many believe that platforms such as gab.com have “crossed a line” and should not be allowed to exist. But the more vexing question is: where is the line? How do we know whether an individual or platform has crossed it? Can machines help us find the line between okay and not okay, or is this an inherently human exercise?

Read the full article from The Hill.

___
Vasant Dhar is a Professor of Information Systems.