Senator Hawley’s Proposal to End Support for Internet Speech

Republican Senator Josh Hawley, ironically a self-described proponent of free speech, introduced the “Ending Support For Internet Censorship” Act this week, a bill ultimately geared toward destroying the Internet and perpetuating censorship. As Prof. Mark Lemley put it, the bill should really be called the Promote Our Republican Nazis (PORN) Act, “since its goal is to promote hate speech and its effect is to make it harder to restrict pornography.”

This post is a recap of some of the glaring issues I’ve seen with the Hawley bill so far. For context, I suggest starting with Prof. Eric Goldman’s Linkwrap. I’ve been micro-blogging about the bill through Twitter threads here and here. I also highly recommend giving Daphne Keller’s thread a read too. If you’re wondering why you should care, check out my TEDx talk.

Covered Companies

To be considered a “covered company” under the bill, a provider of an interactive computer service must, at any time during a 12-month period, have had more than 30,000,000 active monthly users in the U.S., more than 300,000,000 active monthly users worldwide, or more than $500,000,000 in global annual revenue. A platform that falls into any of these buckets risks losing Section 230 immunity unless it proves to the FTC that it does not moderate content in a “politically biased manner.”
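To make the thresholds concrete, here’s a minimal sketch in Python of the coverage test as I read it (the function and parameter names are mine, not the bill’s):

```python
def is_covered_company(us_mau: int, global_mau: int, global_revenue_usd: int) -> bool:
    """Rough sketch of the bill's 'covered company' test.

    The three prongs are disjunctive: crossing ANY ONE of the
    thresholds at any time during a 12-month period is enough.
    """
    return (
        us_mau > 30_000_000                  # 30M+ active monthly users in the U.S.
        or global_mau > 300_000_000          # 300M+ active monthly users worldwide
        or global_revenue_usd > 500_000_000  # $500M+ in global annual revenue
    )

# A platform with modest U.S. reach but large global revenue is still covered.
print(is_covered_company(us_mau=5_000_000,
                         global_mau=50_000_000,
                         global_revenue_usd=600_000_000))  # True
```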

The targets are painfully clear: Facebook, Twitter, and Google. To be fair, startups below these thresholds technically aren’t at risk of losing 230 immunity; however, big tech may think twice about acquiring any startup whose platform poses a risk of looking too politically biased. Most startups enter the market hoping to cash out on a big tech acquisition, but if their platforms are anything like Gab’s, that dream may never become a reality.

I’m also curious how bots factor into the MAU count. It’s far too easy to create an “active” bot army to artificially push a site over the threshold.

Politically Biased Moderation

To receive FTC certification, the platform must demonstrate that its moderation efforts are politically neutral. Politically biased moderation includes moderation that is designed to negatively affect or disproportionately restrict or promote access to a political party, candidate, or viewpoint.

There’s no getting around the fact that neutral moderation is inherently impossible. Platforms like Facebook typically use a combination of AI and humans to moderate content, both of which naturally lend themselves to some form of bias. The algorithms used to make moderation decisions are programmed by teams of human engineers and therefore subject to some degree of human bias. The same can obviously be said for human moderators. There is no such thing as truly neutral content moderation.

Another interesting point, raised in Prof. Goldman’s post this morning about the Fyk v. Facebook case, is that pretty much any moderation decision can eventually be tied back to a political motivation, whether intended or not. In the Fyk case, Fyk appealed to his conservative audience, appearing on Fox and Friends and eliciting support from the MAGA community. Almost instantly, a takedown decision involving pissing videos on Facebook became a conservative issue.

To make matters worse, the FTC allows for public input during the certification process. Users can lodge complaints any time they feel a platform has exhibited such political bias, lending validity to the “Diamond and Silks” of the world.

Moderate Defined

Under the bill, to moderate is to “influence if, when, where, or how information or other content provided by a third-party user appears on a covered company’s interactive computer service; or to alter the information or other content provided by a third-party user that appears on a covered company’s interactive computer service.”

The moderator’s dilemma describes the bind a platform faces between two moderation extremes. It can take the totally hands-off approach and watch the platform devolve into a cyber cesspool, incurring no liability for moderation decisions it never makes. Or it can take the Club Penguin approach and over-moderate, incurring liability for anything that slips through the cracks.

Under the Hawley bill, platforms are damned if they do and damned if they don’t. If a platform removes content in a way that could be deemed politically biased, it’s in violation. But interestingly, a platform that opts not to moderate at all could also be in violation if that inaction tilts the platform toward either side of the spectrum. Gab, though it falls short of the user thresholds, is the textbook example of non-moderation producing a politically skewed platform. And with more and more conservative users migrating to Gab, I wonder how long it will fly under the threshold (I’ll be preparing my FTC complaint in the meantime).

The bill also sweeps algorithmic content moderation into its scope, circularly defining an algorithm as “an algorithm or other automated process.” (Side note: legal definitions should not be recursive.) This means any platform that uses an algorithm to, for example, rank search results is in scope. So if your page about Nazism isn’t ranked high enough in Google search, you should be prepping your FTC complaint too (and reevaluating your societal worth).
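To see just how sweeping that definition is, consider a toy search ranker (the names and scores here are hypothetical, invented purely for illustration). Even this trivial automated process “influences where information appears” and so would seem to count as moderation under the bill:

```python
# Hypothetical illustration, not from the bill: any ordering choice
# necessarily places some third-party content above other content,
# which is enough to "influence where information appears."
def rank_results(pages: list[dict]) -> list[dict]:
    # Sort pages by a relevance score, highest first.
    return sorted(pages, key=lambda p: p["relevance"], reverse=True)

results = rank_results([
    {"url": "example.com/a", "relevance": 0.9},
    {"url": "example.com/b", "relevance": 0.4},
])
print(results[0]["url"])  # example.com/a ranks first; b's author may now complain
```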

Business Necessity Exception

The bill offers a perplexing exception: as I read it, moderation will not be considered politically biased if (1) it is necessary for business, or the information involved is not speech protected under the First Amendment; (2) there is no available alternative with a less disproportionate effect; and (3) the provider does not act with the intent to discriminate based on political affiliation, political party, or political viewpoint.
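As best I can parse that run-on statutory text, the first prong is disjunctive and all three prongs must hold together. A sketch of that reading (the structure is my interpretation of the bill, not settled law):

```python
def business_necessity_exception(necessary_for_business: bool,
                                 speech_unprotected: bool,
                                 less_disproportionate_alternative: bool,
                                 intent_to_discriminate: bool) -> bool:
    """One plausible reading: prong 1 is an either/or, and prongs
    2 and 3 must also be satisfied for the exception to apply."""
    prong_1 = necessary_for_business or speech_unprotected
    prong_2 = not less_disproportionate_alternative  # no less-restrictive option exists
    prong_3 = not intent_to_discriminate             # no political-discrimination intent
    return prong_1 and prong_2 and prong_3
```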

I don’t have much to say about this one. Content moderation is always a crucial business necessity for the growth and value of a platform. Take a look at SCU’s Content Moderation and Removal at Scale conference and you’ll quickly see why. If Facebook and Twitter let themselves degrade to the level of Gab or 4chan, I’m leaving.

Constitutional Issues

Fortunately, it’s unlikely this bill will ever become law given its glaring constitutional problems. By controlling the type of content (speech) private Internet companies can and can’t host, the government clearly crosses the First Amendment’s compelled speech line. As NetChoice notes, websites would be required to host KKK propaganda just to maintain political neutrality.

The bill also runs headlong into the recent SCOTUS opinion resolving Manhattan Community Access Corp. v. Halleck:

“In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”

Hawley’s bill is akin to a modern-day fairness doctrine, a policy once vehemently opposed by Republicans, now applied to social media companies. Back in the day, the fairness doctrine was an FCC regulation requiring over-the-air broadcasters to cover issues of public importance in a fair manner. The doctrine was repealed in 1987, only to make an uglier return in 2019. If conservatives think that amending 230 will somehow bolster their speech, they’re in for quite the surprise. The natural response to such regulation is heavier restriction of all types of speech, including conservatives’.

For now, it may be wise to invest in some Chrome filters for pornography and hate speech. If Hawley’s bill miraculously passes, we’re going to need them.
