The Internet Law Student Organization (ILSO) started a newsletter this semester for student members to write about anything technology and Internet law/policy. The following is the full version of a discussion with my colleague, Saad Malik, a 1L and student member of ILSO, about content moderation and the state of the Internet. My discussion is adapted from a recent talk I had the privilege of giving with my advisor and Internet law expert, Prof. Eric Goldman, on behalf of Santa Clara Law in Pasadena.
The ILSO Newsletter is open to any ILSO student members and offers a great opportunity to be published among your peers. If you’re interested in submitting an article, email firstname.lastname@example.org.
1L | International & Corporate Law
Tech and Political Enthusiast
One of the biggest catch-22s for big tech is content moderation and the multifaceted issues that come with running a public-facing service. The tough balancing act of letting people post as they please while ensuring posts do not incite violence and remain within the law is tricky business. The system is adequate in that people have tools to deal with harassment, and as a general rule the most heinous activity on the internet is buried and does not grace the algorithmic utopias of content services. However, there are plenty of glaring issues in the digital world with regard to content moderation. Such issues include seemingly arbitrary rules for monetizing content through ad services, frivolous copyright infringement claims, governmental and political leaders sidestepping terms of service, and the handling of falsified materials. While this list is not exhaustive, these are some of the most prominent criticisms in the public discourse.
I will start with the problems plaguing YouTube, specifically the abuse of its internal copyright system and the lack of concrete rules on monetization. As it stands, YouTube has made several strides in its handling of copyright claims and in fixing the broad issue of entire videos being claimed over a few seconds of music that may or may not be protected under fair use. The problem is that fair use claims are not handled in a way that makes it easy for a content creator to argue a case without going to court and incurring expensive court and lawyer fees. This leads many creators either to take a pay cut, given the high effort needed to argue fair use in court, or to leave the platform altogether because of the lack of resources for dispute resolution. The absence of a steady rule system and arbitrary terms of service frustrate many content creators. Certain key terms and controversial topics are roadblocks to monetization, making journalism and general discussion of those topics difficult. This has gotten so egregious that a popular YouTuber, Rob Dyke, had to change his last name to Gavagan to keep his content monetized. YouTube has found itself in a catch-22: if more information is given about the algorithm, it can be exploited, but if the information stream remains as is, there is no way for creators to understand where they are making mistakes that cost them money.
Now to Facebook and Twitter and the plethora of criticism aimed at the social media giants. Both have staked claims to free speech, with the obvious legal limitations. However, both have received significant pushback in the realm of political speech. Twitter has publicly stated that banning Nazi propaganda is difficult because such content is hard to separate from the speech of members of the GOP, so it will not take action. Setting aside the obvious issue with the positions the GOP may be taking, it is difficult to understand how and when the terms of service are applied. Despite brazen disregard for Twitter's rules, many prominent politicians have avoided having their accounts banned for bad behavior. This has led to accusations of uneven enforcement and claims that the policies are meaningless. Facebook, meanwhile, has faced its own backlash, specifically over the sale of ads and the curation of news. In this regard, Facebook has recently stated that it does not intend to fact-check political ads. After the scandals of the 2016 election, this stance has eroded trust in the company's intentions to serve as a public place for discourse. Additionally, the company has seen many fair critiques of how the site aggregates news and fact-checks content. It has claimed that the idea is to crowdsource fact-checking, which can lead to the obvious scenario of an idea being repeated so often that it becomes fact to many people who may lack the resources or ability to verify it for themselves. These problems are not easily solved, nor is there a straightforward answer, but for better or worse these companies are the gatekeepers of public news for many, and they may need to do more to solve the obvious problems they have caused.
The problems of today are new and wide-ranging and the integration of the internet into daily life worldwide will have lasting consequences. The need for a fair system and ethical content moderation is becoming more apparent.
2L | Internet Law & Policy Foundry Fellow
There’s no sugarcoating it – we don’t love the Internet anymore; or at least not as much as we used to. Every day, it seems, we’re inundated with fake news, hate speech, harassment, data breaches, content removal, ads, bots, trolls, and whatever the next awful user-generated flavor-of-the-week might be. With 2020 rapidly approaching, we’d think the Internet companies could have figured this out by now. Multi-billion dollar services like Facebook and Google house some of the world’s smartest engineers, lawyers, and policy wonks. So why can’t they get this right? Why does the Internet seem to be getting worse?
Perhaps, in evaluating the Internet’s perceived awfulness, we’re using the wrong gauge. Is the Internet getting worse, or are we as a society becoming more creative in the ways we’re awful to each other in the offline world? Because if it’s the latter, then the Internet is really just a mirror on our society, reflecting the good, the bad, and the utmost heinous of the human condition. With that, we must consider whether this is purely a “big tech” problem or a user-generated one. In many ways, it’s both, making the problem significantly more challenging and the solution utterly impossible.
When we think about the ’90s Internet, we don’t think about the deepfakes and Nazis that plague our online communities today. We think the Internet is getting worse because we’ve convinced ourselves that the Internet used to be some sort of magical safe haven for communities to come together to chat on message boards, share MP3s, trade dancing banana gifs, and meet new, interesting, online friends. We forget about, or for some of us aren’t even old enough to remember, the awful content that predates Facebook and Twitter. But the reality is that the Internet has always had awful content because society has always had awful people. And as long as awful content exists offline, it will continue to migrate online.
Take the infamous Internet law case of Ken Zeran, for example. Nobody knows why, in 1995, some random Internet troll decided to attack Zeran on an AOL message board, stealing his identity and posing as a marketer of offensive T-shirts lauding the Oklahoma City bombing that had taken place only six days prior. Only the most depraved human mind could come up with some of the sickening slogans displayed on the merchandise, but Zeran took the blame after the troll posted his phone number, opening the floodgates for death threats and harassment. It took AOL an absurdly lengthy amount of time to remove the posts, perhaps because at the time, they couldn’t be sure about their risk of liability.
Today, we don’t see Zeran-style attacks as often as they occurred in the ’90s. That’s because 47 U.S.C. § 230 (Section 230), a law that essentially means websites are not liable for user-generated content, encourages services like AOL to act immediately on that sort of content. Better, it encourages Internet companies to be proactive in preventing such content before it’s ever posted. So bad actors must adapt. And so begins a complex content moderation game of cat and mouse.
Of course, it’s naive to assert that only the users are to blame for the Internet’s awfulness. Between its data breaches and its recent decision to stop fact-checking political ads, Facebook has proven time and time again that it will do nothing to improve the offline baseline of antisocial behavior. In fact, if anything, it’s actively lowering the bar. But Facebook is not the Internet, and there is no way to artfully craft legislation that targets only Facebook without wreaking significant havoc on millions of other socially productive online services that tirelessly try to raise the bar.
So how do we solve this problem? For starters, we must accept that we never fully will. Content moderation and human bias inherently create a zero-sum game such that every content decision the Internet companies make will create winners and losers: people who are happy with the decision and people who are not. That challenge will always exist. But what we can do is encourage Internet services to use the brilliant minds they employ and laws like Section 230 to innovate and iterate on transparent, user-centric solutions, improve already existing filtering technology like Content ID, and take bold, risky stands such as refusing to allow political advertising on their services.
We’ve come a long way since the ’90s, both technologically and socially. Perhaps in the new decade, we’ll find ways to improve our online world and rekindle our love for the Internet.