During my 1L year, I received a piece of advice from a partner at one of the many firms around here: “Take notice of the things you enjoy during law school. It will be rare, but it could lead you to your career.”
I couldn’t help but think back to that advice as I stared at the whiteboard of madness I created at 2am last night for a paper that’s only worth a pass/fail grade, for a class that probably doesn’t require as much effort as my bar classes.
And. I. Loved. It.
I hadn’t had that moment in law school before. Of course, if you know me, you know I’m passionate about Internet law and policy and I can’t go 10 minutes without talking about Section 230 – but hell, I still took notice last night. This assignment granted me some much-needed reassurance.
We were required to download the Harvard Case Study: Algorithmic Allegories (free download here), which touches on the legal and ethical issues arising from the infamous Facebook Emotional Contagion Experiment. The study presents a series of hypos, each a variation or twist on the Emotional Contagion Experiment, with prompts to respond to. Our job was to pick one and conduct a legal or ethical analysis.
I chose hypo #4 because I was already fascinated by the contagion experiment, and #4 gave me the opportunity to keep thinking through the original study as well as the ethics of content manipulation and experimentation on users. As a passionate advocate for the Internet and tech, I found the hypo presented a challenge: how do I fine-tune my ethical compass in a way that lets me sleep at night knowing I’m fighting for what many would deem the evils of big tech? If you’re unfamiliar with the experiment, read the case study first and you’ll see what I mean.
Note: I purposely chose to avoid the privacy discussion here. To be candid, privacy is uninteresting to me and not where I wanted to focus my analysis. Not to mention there’s also a 1,000-word limit.
—
Hypothetical Case Four: A Controlled Facebook Study of Emotions
Preliminary Considerations
There are several legal and ethical considerations regarding A/B testing and user-profiling, all of which are outside the scope of this evaluation. It is generally accepted that A/B testing and user-profiling, for the purposes of improving a website (like Facebook), are a necessary evil for Internet services to succeed. A/B testing is typically excluded from the “Common Rule” prescribed by the Department of Health and Human Services, and user-profiling is generally deemed acceptable per the user’s continued use of the service.
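For readers unfamiliar with the mechanics, here is a minimal sketch of how a service might deterministically split users into test buckets. The function name, hashing scheme, and bucket labels are my own illustrative assumptions, not anything from the case study or Facebook’s actual systems.

```python
import hashlib

def ab_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to "treatment" or "control".

    Hashing the user id together with the experiment name keeps the
    assignment stable across sessions without storing extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a fraction in [0, 1] and compare.
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "treatment" if fraction < treatment_share else "control"

print(ab_bucket("user-12345", "emotional-curation-v1"))
```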
Setting those issues aside, this evaluation will instead assess the ethical implications of the tests using Utilitarian and Kantian theories to answer the following: (1) is the continued use of emotion-based curation acceptable for users in the A/B subset; and (2) is it acceptable to profile and manipulate the News Feeds of users outside the subset.
Addressing Scale
In conducting this evaluation, there are two groups to consider with regard to scale: the subset of users who originally responded to the A/B tests and the remainder of the user-base.
The subset offers a low-risk, low-reward advantage. Assuming a sample size of 700,000 users out of roughly two billion, only about 0.035% of Facebook’s entire user-base will be affected. Thus, testing will likely go unnoticed. Further, if the tests are successful (i.e. more revenue is generated as a result of more time spent online), Facebook can quietly leverage a slight revenue increase by continuing its emotional content curation just within the subset. From a business perspective, this outcome is favorable. Ethically, however, it is questionable. Under a Utilitarian evaluation, at 0.035%, any resulting good will not significantly benefit Facebook’s user-base, much less society. Under the Kantian approach, quietly profiting on emotional manipulation simply uses people as a means to an end (i.e. Facebook’s bottom line).
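A quick back-of-the-envelope check of that scale claim (the two-billion total is an assumption implied by the stated percentage, not a figure from the case study):

```python
# Scale check: 700,000 test users against an assumed total
# user-base of 2 billion (the figure implied by 0.035%).
SUBSET_SIZE = 700_000
TOTAL_USERS = 2_000_000_000  # assumption, not from the case study

share = SUBSET_SIZE / TOTAL_USERS
print(f"Affected share of user-base: {share:.3%}")  # 0.035%
```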
Alternatively, extending the test to the remainder of users offers a high-risk but high-reward opportunity. Manipulating the News Feeds of most or all of Facebook’s users based on emotional tenor will not go unnoticed, establishing a high risk of backlash similar to that of the original contagion study or the infamous Cambridge Analytica scandal. From a business perspective, considering Facebook’s current reputation, the public-relations risk might outweigh the potential profit. The ethical question is more interesting, however, as it turns on whether Facebook is inherently a “social good.” Under Utilitarian ethics, if Facebook is a social good, then a significant increase in advertising revenue helps to improve the service and thus benefits the greater good. Under Kantian ethics, the general betterment of society as a whole is the end to which the users are merely a means.
Whether the tests will result in a social good largely depends on what type of content, positive or negative, users skew toward. Considering the successful outcome of the original contagion study, this evaluation of skew is critical for anticipating the end state of the service once the study concludes.
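To make the skew evaluation concrete, here is a hedged sketch of how one might compare the two arms of such a test; the session values are toy numbers invented purely for illustration, not real or reported data.

```python
from statistics import mean

# Toy session logs as (arm, minutes on site) pairs. These values
# are invented for illustration only, not real or reported data.
sessions = [
    ("positive", 34), ("positive", 41), ("positive", 28),
    ("negative", 52), ("negative", 47), ("negative", 39),
]

def arm_mean(arm: str) -> float:
    """Average time on site for one curation arm."""
    return mean(minutes for a, minutes in sessions if a == arm)

pos, neg = arm_mean("positive"), arm_mean("negative")
skew = "positive" if pos > neg else "negative" if neg > pos else "neutral"
print(f"positive: {pos:.1f} min, negative: {neg:.1f} min -> {skew} skew")
```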
Majority Positive Skew
Assume the majority of users naturally spend more time on Facebook as a result of increased emotionally positive content. If the algorithm increases positive posts for the majority and thus decreases negative content on the service overall, it perhaps indirectly creates an incentive for users to generate more socially productive and less antisocial content. Less antisocial content encourages healthier conversation and benefits the entire online community and society as a whole. Additionally, increased revenue from more time spent online could then be used to further improve the service, satisfying both Utilitarian and Kantian theories.
The trade-off, however, is that a majority-positive skew is likely to produce a “rose-colored-glasses” effect in which most of Facebook’s user-base misses out on pertinent information, creating an uninformed and biased echo-chamber. Consider, for example, the suppression of content relating to mass shootings, immigration, and climate change. Echo-chambers do not create a social good.
Majority Negative Skew
More troubling, assume the majority of users naturally spend more time on Facebook as a result of increased emotionally negative content. The question then becomes: why are users more attracted to negative content, and should Facebook encourage this behavior by decreasing the availability of positive and healthy content in exchange for adverse and expectedly pernicious content?
Though a negative user-base could be deemed more informed than the “rose-colored-glasses” base, it is ethically bankrupt for Facebook to purposely create a breeding ground for antisocial behavior and, worse, use profits from that breeding ground to further degrade the overall experience for its users and, ultimately, society.
Neutral Skew
Lastly, assume a 50/50 skew such that some users spend more time on Facebook as a result of emotionally positive content while others do so as a result of emotionally negative content. Perhaps this is just Facebook in its current state. The introduction of emotional manipulation, however, would create a polarizing effect on both sides, driving a large wedge between two incredibly volatile echo-chambers. The positive group might view Facebook as a social good while the negative group might view it as a cesspool. Such volatility not only further damages the reputation of the service but also potentially tees up disastrous legislation from eager regulators.
Conclusion
Answering the two questions presented: (1) it is not ethically acceptable for News Feeds to continue to be curated based on emotional content for the subset of users who responded to the tests if the only reason to do so is to use the small scale to quietly and insignificantly profit off “successful” users; and (2) considering the potentially ruinous effects of both the negative and neutral skews, it is not only acceptable but imperative for Facebook to conduct extensive profiling and research before implementing such algorithms at scale.
From a purely business perspective, the success of the original contagion study suggests Facebook has an advantage it could use to maximize revenue. However, whether Facebook should exploit this advantage depends on whether providing a social good or improving its own public perception is the current priority.
—
For more background on the Emotional Contagion Experiment, check out Mike Masnick’s post about it here.