Opinion

Section 230: It’s Time to Equate Amplification to Publishing

By Vasant Dhar
Until now, tech giants have been shielded by Section 230, essentially immune from the liabilities that traditional publishers bear. The House subcommittee’s hearing on December 1, aimed at holding tech companies accountable for the effects of their algorithms, is a baby step in the right direction, but legislators need to take decisive action to protect the country’s most vulnerable citizens from misinformation and harm. Starting immediately, companies should be held accountable for content that is amplified on their platforms.

In today’s increasingly digital world, the absence of any restrictions on data collection and its amplification poses significant risks to society, and there is mounting evidence of harm to certain groups. Since the introduction of the “Like” button on Facebook, for example, the percentage of girls aged 12-17 who reported at least one major episode of depression in the previous year has doubled, from roughly 12% to over 25%. Between 2010 and 2014, hospital admissions for girls aged 10-14 doubled. The evidence thus far is damning, showing harm to teens on a massive scale, and the timing points to social media, Instagram in particular, as the culprit. The unintended consequences of platform algorithms that exploit user-supplied data can no longer be ignored.

It is true that platforms cannot possibly vet every piece of content posted, so they cannot be held liable for user-generated content. However, their immunity should cease when their algorithms amplify content in ways that exacerbate a social pathology, such as crime, violence, sexual abuse, or the growing problems of teen depression and tech addiction among young children.

What does it mean to amplify content?

Amplification occurs when a message is promoted based on data solicited from users. For example, consider a social media website that solicits user data via a “Like” button. If this data is used to connect strangers – thereby creating a social network – that is amplification. The information that flows along the created links is also amplification.
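To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The users, posts, and function names are invented for illustration and do not describe any platform’s actual system; the point is only to show how “Like” data can be turned into new connections between strangers, and how content then flows along those connections.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical "Like" data: which users liked which posts.
likes = {
    "post_1": {"alice", "bob"},
    "post_2": {"bob", "carol"},
}

# Amplification, step 1: strangers who liked the same post are linked,
# creating a social graph that neither user explicitly asked for.
graph = defaultdict(set)
for users in likes.values():
    for u, v in combinations(sorted(users), 2):
        graph[u].add(v)
        graph[v].add(u)

# Amplification, step 2: a new post is pushed along the created links,
# reaching people who never chose to follow its author.
def amplify(author, post, graph):
    return {neighbor: post for neighbor in graph[author]}

print(amplify("bob", "new post", graph))  # reaches both alice and carol
```

Both steps are amplification in the sense used here: the platform, not the user, decides who gets connected and what travels along the connection.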

As long as the amplification doesn’t exacerbate an existing social pathology, such as teen depression, we can generally regard it as harmless. But when it does, it should no longer be immune from legal action. Treating data amplification as equivalent to publishing would still give platforms adequate protection. “Passive platforms” like Snapchat that don’t amplify data would remain immune, whereas those that amplify in a way that causes harm would be subject to investigation and litigation, for example, from the parents of teenage girls who have been harmed.

University researchers like me are required to follow a lengthy bureaucratic process before we are permitted to conduct experiments on human subjects. We must demonstrate that ethical standards for the care and protection of human subjects are being applied. But no such restrictions apply to businesses conducting large-scale social experiments on individuals on the Internet. Treating data amplification as publishing would mean that platforms assume legal risk when they engage in such social experimentation.

Finally, it is worth noting a common misconception: that our data is simply used by platforms in exchange for “free products.” While this is true in a technical sense, the bargain is much more sinister. In reality, some platforms build a social network from user data and push content along those connections to influence users; they also conduct extensive automated A/B testing, and the resulting data is analyzed by armies of human experts to make their products increasingly irresistible. This works by making humans more predictable, reinforcing our craving for dopamine to maximize long-term engagement with the platform. Children are especially susceptible.
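To illustrate the kind of automated experiment described above, here is a deliberately simplified, hypothetical sketch; the variant names and numbers are invented for illustration. An engagement-maximizing A/B test randomly assigns users to two versions of a feed, measures time spent, and ships whichever version keeps people on the platform longer, with no term in the objective for their wellbeing.

```python
import random

def run_experiment(users, variants):
    # Randomly assign each user to a variant and record minutes of use.
    results = {name: [] for name in variants}
    for user in users:
        name = random.choice(list(variants))
        results[name].append(variants[name](user))
    return results

def pick_winner(results):
    # The only criterion is average engagement, not user wellbeing.
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return max(results, key=lambda name: mean(results[name]))

# Illustrative stand-ins for two feed-ranking variants (minutes per day).
variants = {
    "current_feed": lambda user: random.gauss(30, 5),
    "stickier_feed": lambda user: random.gauss(34, 5),
}
winner = pick_winner(run_experiment(range(1000), variants))
print(f"Shipping: {winner}")  # almost always "stickier_feed"
```

Repeated thousands of times and refined by analysts, this loop is how a product becomes “increasingly irresistible.”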

In 2010, at the end of an hour-long New York Times interview with Steve Jobs about the iPad launch, writer Nick Bilton casually changed the subject and asked, “So your kids must love the iPad?” Jobs’ response was telling: “They haven’t used it. We limit how much technology our kids use at home.”

Clearly, Jobs was aware of the technology’s potential risks to kids. Over a decade ago, he saw that the short-term rewards machines offer us are hard to resist and can cause long-term harm. Today, the warning signs are flashing. The risks of inaction are significant: waiting for a smoking gun could take decades and invite further harm. More importantly, the status quo continues to give social media platforms the wrong incentives around data collection and use.

The time of blanket protection under Section 230 is over. Data amplification must be regulated.
__

Vasant Dhar is a Professor at the NYU Stern School of Business and the Center for Data Science, and host of the podcast Brave New World at BraveNewPodcast.com, which focuses on the world that our future selves will inhabit.