
Extending safe harbour will threaten a healthy internet

by Dr George S Ford

The meteoric rise of the Internet has changed the way we communicate, conduct commerce, entertain ourselves and more. But an avalanche of unsavoury, dangerous and illegal online behaviour has raised serious questions about how to continue nurturing the growth of a healthy Internet at a time when we need increased vigilance against illegal content.

Bad behaviour was tolerated in the Internet’s infancy as excitement about the future largely drowned out concerns about unintended consequences. This optimism has become embodied in the idea of “internet exceptionalism,” defined by Proskauer Technology, Media & Telecommunications Group co-head Jeffrey Neuburger as “the notion that the Internet is a special and unique communications medium to which special rules should apply.”

Perhaps no two laws better exemplify this notion than the “safe harbour” provisions contained in Section 512 of the Digital Millennium Copyright Act (“DMCA”) and Section 230 of the Communications Decency Act (“CDA”).

These laws have been described by their Silicon Valley champions, respectively, as “a bedrock principle of American jurisprudence” and “the most important law in tech” despite their relative youth. (The DMCA was enacted in 1998; the CDA in 1996.)

Section 512 of the DMCA limits the remedies available against certain online service providers for copyright infringement by their users – but only if the providers don’t profit from it and take specified steps to address infringement once made aware of it.

The CDA safe harbour provision, 47 USC §230, is even broader, immunizing service providers from claims based on content provided by their users, without any accompanying obligations.

The halo Big Tech has placed on these two laws is increasingly being questioned by a wide array of stakeholders, including lawmakers like Senator Rob Portman, who recently introduced a bill to amend Section 230 of the CDA to hold accountable websites that facilitate sex trafficking.

The Copyright Office is also reviewing the DMCA’s safe harbours, and Administration officials are grappling with whether to include provisions similar to CDA Section 230 and DMCA Section 512 in a renegotiated NAFTA – a position advocated by companies like Google and their trade associations.

What’s more, Australia is considering extending DMCA-like safe harbours to internet companies; today, those protections apply only to internet service providers (ISPs).

From an economic standpoint, the motivation for internet companies is clear: they wish to preserve their near-blanket immunity from legal liability for any content posted to their services. Indeed, the Internet Association claims, unconvincingly, that eliminating safe harbours would “caus[e] the U.S. economy to lose 4.25 million jobs and nearly half a trillion dollars in the next 10 years.”

There is no price that can be placed on the pain and suffering caused by the horrors of human trafficking. Moreover, there can be no doubt that many industries are suffering because of safe harbour abuse. For instance, in a previous paper released by the Phoenix Center, I demonstrated that the DMCA’s safe harbours are costing the recording industry up to $1 billion annually in lost licensing revenue on YouTube alone.

For policymakers, balancing immunity versus liability for online content poses a difficult question. Everyone is fed up with an increasingly dangerous Internet rife with bullying, intellectual property theft, racism, violent rhetoric, terrorism, and the worst kind of exploitation. At the same time, lawmakers must be mindful not to go too far.

To help policymakers as they consider the appropriate scope of Internet safe harbours, the Phoenix Center recently released a paper titled Fixing Safe Harbours: An Economic Analysis, offering a new economic model for safe harbours.

We find that the limit on liability afforded by safe harbour protection affects the evolution of platforms reliant on content posted by users. As we see it, the limited liability of overbroad safe harbours promotes the success of platforms with high shares of illegal material – to the detriment of platforms that properly vet posted files for illegal material and exercise a modicum of social responsibility. Put simply, vetting is costly, and that cost puts platforms with a conscience at an economic disadvantage in a competitive marketplace.

To address this imbalance, our model demonstrates that increasing the risk of liability on platforms that deliver illegal content to users will result in a “separating equilibrium,” with two types of platforms – those offering only (or mostly) legitimate content and those offering mostly unsavoury and low-value content.
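
To see the intuition behind that separating equilibrium, consider a deliberately stylised payoff comparison. The sketch below is a hypothetical illustration only – the function and every figure in it are invented for the example and are not drawn from the Phoenix Center model. It simply contrasts a platform that vets uploads (paying a screening cost on every file but facing no liability) with one that does not vet (keeping the revenue from infringing files but bearing the expected damages on them).

```python
# Hypothetical, illustrative numbers only (not the Phoenix Center model).
# Compares expected profit for a platform that vets uploads against one
# that does not, as expected liability per infringing file increases.

def expected_profit(vets, files=1000, revenue_per_file=1.0,
                    illegal_share=0.3, vetting_cost_per_file=0.1,
                    liability_per_illegal_file=0.0):
    """Stylised platform payoff under the assumptions named in the arguments."""
    if vets:
        # Vetting screens out illegal files (forgoing their revenue) and
        # costs money on every upload, but removes liability exposure.
        return (files * (1 - illegal_share) * revenue_per_file
                - files * vetting_cost_per_file)
    # A non-vetting platform monetises everything but bears expected
    # damages on the illegal share of its catalogue.
    return (files * revenue_per_file
            - files * illegal_share * liability_per_illegal_file)

for liability in (0.0, 0.5, 1.0, 2.0):
    vetted = expected_profit(vets=True, liability_per_illegal_file=liability)
    unvetted = expected_profit(vets=False, liability_per_illegal_file=liability)
    winner = "vetting" if vetted > unvetted else "non-vetting"
    print(f"liability per illegal file = {liability:.1f}: "
          f"vetting {vetted:.0f} vs non-vetting {unvetted:.0f} "
          f"-> {winner} platform earns more")
```

With broad immunity (zero expected liability), the non-vetting platform earns more; once expected liability per infringing file is high enough, vetting becomes the more profitable strategy. That crossover is what pushes the two platform types apart.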

Thus, the introduction of platform liability ultimately can be expected to change the structure of the platform industry by allowing socially responsible platforms that vet their content to thrive. At the same time, the platforms hosting illegal material would become easy targets for enforcement.

Safe harbours were implemented to help the Internet grow, but as cyberspace reaches the age of majority, it’s reasonable to expect more from internet platforms accepting any and all content, no small part of which is illegal or socially disturbing.

Clearly, imposing strict liability on online intermediaries for claims based on third-party content is not the answer; nor is absolute immunity. In weighing the proper balance between the two, our model indicates that the safe harbours as currently constructed in the U.S. are suboptimal in that they discourage socially responsible behaviour.

By providing legal immunity predicated only on responsible behaviour, which was the express intent of the safe harbour provisions in the first place, we can usher in a new wave of innovation centred on creating platforms, products and services that encourage the best the Internet has to offer.

With the proper incentives in place, lawmakers can encourage the next Google and Facebook to help build a safer Internet that better reflects the legal and ethical standards we expect offline, and encourages new investment, creativity, and innovation.

Dr. George S. Ford is Chief Economist of the Phoenix Center for Advanced Legal & Economic Public Policy Studies, a non-profit 501(c)(3) research organization that studies broad public-policy issues related to governance, social and economic conditions, with a particular emphasis on the law and economics of the digital age.

Copyright © 2017 The Bureau of National Affairs, Inc. All Rights Reserved.
