Because Internet law is still (somehow) considered a niche field of study in law school compared to intellectual property, crim, corporate, etc., it generates a lot of interesting conversation among curious friends, family, and prospective law students. Lately, two topics come up consistently in those conversations thanks to their prominence in the media and politics: What does it mean to “break up big tech”? And how are tech companies incentivized to engage in content moderation without regulations requiring them to do so? Both are excellent questions.
What does it mean to “Break Up Big Tech”?
A colleague asked me this recently in response to probably one of the billion articles I’ve shared on Facebook or Twitter regarding this debate. While typing out my typical rant about Section 230 and the imminent doom that will be brought down upon the startup market, I realized there’s a lot more to this than can be addressed in a single text message. So I backspaced into “let’s talk about it in person.”
Since Elizabeth Warren’s recent proposal for regulatory intervention to “Break Up Big Tech,” there have been thousands of op-eds in vehement opposition (understandable; the plan sucks). Personally, I’m a fan of Mike Masnick’s takes and usually send these two pieces along in response to this question:
Elizabeth Warren Wants To Break Up Amazon, Google And Facebook; But Does Her Plan Make Any Sense?
How To Actually Break Up Big Tech
But I also recognize that these articles may be a bit complex for a new reader in the Internet and tech policy community to dive into headfirst. So my answer here will be a lot simpler and more newbie-friendly, but I highly recommend reading the two Techdirt pieces as a follow-on.
I’ve boiled Warren’s proposal down to two key issues: (1) social media companies are social utilities and should be regulated as such, and (2) Internet companies should be regulated like oil and railroads.
Social Media Companies are Social Utilities
This concept is strange. Warren proposes that social media companies be designated as “platform utilities,” completely separated from their users, so that they can’t control both the platform and the user. Essentially, they become “hands-off” hosts for users and their data. But if they do moderate their users and user-generated content, these platform utilities must also meet certain undefined standards of fairness and non-discrimination in doing so. You can probably imagine the number of trolls out there who would argue their account and content bans were unfair or discriminatory.
At the same time, Ted Cruz and pals have also called for regulations that require these platforms to maintain “neutral public forums.” So riddle me this: if social media companies can’t kick trolls off their sites because of #BreakUpBigTech, but they also have to maintain neutral platforms, what has to give here? You can’t have both. Either you break up with your users and let your site become overrun with MAGA trolls, or you strive for the impossible challenge of achieving true neutrality through hyper-moderation and user bans.
But wait, there’s more! Every moderation decision the social media utility (broker, company, harbor, belvedere) makes that an “injured” user deems unfair, unreasonable, or discriminatory would subject the Internet company to a private lawsuit by said injured user. Read: Devin Nunes and Alex Jones could now have a legitimate case against Twitter. If you don’t think that’s a big deal, take a look at how many lawsuits Twitter alone regularly receives from disgruntled users upset about being banned or having their content removed. Of course, right now, Section 230 allows for predictable outcomes in favor of platforms in such frivolous suits, but in Warren’s game, who knows?
#BreakUpBigTech is aimed at increasing marketplace competition against the Facebooks and Googles of Silicon Valley, but ironically, all it’s actually going to do is make it harder for startups to start up. Warren’s proposal opens the floodgates for private litigation that startups would normally have little reason to fear, given their protections under Section 230. But without such protections, they’re left completely vulnerable to the expensive and time-consuming suits that Google and Facebook can easily brush off. What’s the point of even trying to compete? It’s not worth it.
Internet Companies Should Be Regulated Like Oil and Railroads
Anytime there’s an attempt to analogize the challenges of the online world to those of the offline world, the result is nonsensical over-regulation (the cringey anti-piracy “you wouldn’t download a car” meme always comes to mind). The Internet is a unique beast that raises legal challenges which simply can’t be handled the same way we handle offline ones. So proposing that we break up tech companies the way we broke up the oil and railroad companies is already a step in the wrong direction.
Warren suggests unwinding the Facebook/Instagram/WhatsApp mergers so that Facebook, Instagram, and WhatsApp would be broken up into smaller, tightly regulated, independent companies. Of course, the separation would be annoying to Facebook (and to Google; consider all of Alphabet’s “Other Bets”), but that’s pretty much it. Facebook and Google already have enough users and revenue that breaking them up would (1) probably accomplish little to nothing overall and (2) result in crappier products with different and confusing usability standards for us users. No doubt, Google is a powerful company, but at least I know what I’m getting when I use a Google product. Whether it’s Maps, or Docs, or Drive, I generally know how it’s going to work even if I’ve never used it before. And beyond that, I know it’s going to work well. We can kiss that familiarity and technical guarantee goodbye if we break everything up. Worse: imagine having to pay a monthly subscription to use Maps or Instagram. And don’t get me started on the inevitable data portability nightmare (perhaps that’s a separate discussion).
In sum, Warren’s proposal is enticing, especially since hating on tech companies is trendy in the Valley. But as attractive as these anti-tech movements are, it’s important to think critically about the consequences. The users almost always lose.
How Are Internet Companies Encouraged to Moderate Content If They Aren’t Forced to Do So?
It seems totally counterintuitive, right? We’re just supposed to trust Facebook and Twitter to moderate the trolls and Nazis of the Internet even though there’s no law requiring them to do so? Yes, and the reasoning is best explained by two important cases that make up Section 230’s history: CompuServe and Prodigy. (If you’re taking Internet Law at SCU this fall, you’ll need to know these two cases anyway, so congrats on getting ahead.)
CompuServe
1991. CompuServe hosted an online news forum whose content came from third-party sources. Among the newsletters hosted by CompuServe was one ironically named “Rumorville.” You can probably see where this is going. Rumorville, of course, published defamatory information about a competitor, which ultimately led that competitor to sue CompuServe for defamation. CompuServe wins. Why?
Simply because CompuServe did not engage in content moderation. The service actively chose not to filter any of the third-party content hosted on its site, so there was no way CompuServe could have known about the defamatory postings. The court held that by taking a backseat on regulating its platform, CompuServe was a mere content distributor rather than a publisher, and therefore it couldn’t be held liable for Rumorville’s defamatory postings.
Cool. Makes sense, right?
Prodigy
1995. Prodigy was another online service that offered its subscribers access to news, weather, bulletin boards, etc. Among those services, Prodigy hosted a bulletin board called “Money Talk,” which an anonymous user used to post defamatory statements about an investment firm (sound familiar?). The firm gets pissed and sues Prodigy. Prodigy loses. Why?
Here’s the difference: Prodigy, unlike CompuServe, actively engaged in content moderation in order to create and maintain a family-friendly online service. Specifically, Prodigy created community guidelines, enforced those guidelines, and even went as far as deploying filtering software that screened and removed offensive language. Because Prodigy took on an active editorial role, the court decided it was more like a content publisher than a distributor (like CompuServe), and content publishers were liable even for third-party content.
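For the technically curious, here’s a toy sketch of the kind of keyword-based screening Prodigy’s filtering software performed. The blocklist and function name are hypothetical, purely for illustration:

```python
# Toy keyword-based screener, loosely in the spirit of Prodigy's
# filtering software. Blocklist and names are hypothetical.
BLOCKLIST = {"slur", "profanity", "obscenity"}  # placeholder offensive terms

def screen_post(text: str) -> bool:
    """Return True if the post may be published, False if it should be removed."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return BLOCKLIST.isdisjoint(words)

posts = ["Money Talk is a great board!", "What utter profanity!"]
published = [post for post in posts if screen_post(post)]  # keeps only the first post
```

Crude as it looks, it was exactly this kind of automated screening that helped convince the court Prodigy was acting as an editor.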
The Moderator’s Dilemma
After Prodigy, you had two options as a website owner: ignore your platform, make no content moderation decisions, and let it run rampant with trolls and pornography, but hold no liability as a content distributor; or maintain a family-friendly environment, moderate aggressively, and hope nothing falls through the cracks, bearing liability as a content publisher. The first option would most likely scare off users, while the second lends itself to extreme censorship. Both options suck. What we actually want is for platforms like Twitter and Facebook to take an active role in filtering out the trolls and outrageously offensive material while promoting diverse, open, and global communication across the platform. That happy medium was created by Section 230 (i.e., websites are not liable for third-party content, even when they moderate).
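If it helps to see the dilemma’s structure laid bare, here’s a deliberately oversimplified sketch of the liability rules before and after Section 230 (hypothetical function names; a compression of the case law, not legal advice):

```python
# Pre-230 rule, compressing CompuServe (1991) and Prodigy (1995).
# Hypothetical names; a deliberate oversimplification of the case law.
def liable_pre_230(moderates: bool, knew_of_post: bool) -> bool:
    if moderates:
        return True          # Prodigy: moderating makes you a publisher, liable regardless
    return knew_of_post      # CompuServe: a distributor is liable only if it knew

# Post-230 rule: platforms aren't liable for third-party content either way.
def liable_post_230(moderates: bool, knew_of_post: bool) -> bool:
    return False
```

Under the old rules, the only guaranteed way to avoid liability was to never moderate and never look; Section 230 makes the moderation flag irrelevant, so platforms can moderate without fear.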
So by removing the threat of litigation, Section 230 actually encourages websites to make good-faith content moderation and filtering decisions, because it’s in the best interest of a platform with a healthy user base to do so. For example, you’d probably leave Facebook if your timeline filled up every day with graphic violence and pornography. It’s in Facebook’s best interest to follow the Prodigy approach, while it’s probably in 4chan’s best interest to follow CompuServe’s. Whatever the moderation approach, the point is that it’s up to the platform to decide what works best for its users, and it will do so naturally without regulatory pressure.
Coming full circle, you may now see why Warren’s proposal to regulate how platforms deal with their users reignites the moderator’s dilemma that Section 230 originally extinguished: at that point it’s easier not to deal with users at all and hand the platform over to the trolls than to figure out what “fair, reasonable, and nondiscriminatory” dealing means.