For seven days in January 2012, roughly 689,000 Facebook users unknowingly took part in an “emotional contagion” study. Facebook reduced the percentage of positive posts, negative posts, or both in select news feeds. A comparison group had posts omitted from their feeds at random. Reducing negative posts led users to write status updates with fewer negative words and more positive ones. Reducing positive posts had the reverse effect. Reducing all emotional content in a user’s news feed made the user post less.
Critics of the study claim Facebook didn’t get users’ informed consent, making the study unethical. The published study says users consented when they agreed to Facebook’s Data Use Policy. Even if the paper’s authors are mistaken, their defenders say informed consent wasn’t required.
Read on for a simple summary and critique of the informed consent argument against Facebook’s emotion experiment.
Below is the argument. It is logically valid: if the premises are true, the conclusion must be true as well. But are the premises true?
1. If Facebook didn’t get the informed consent of users, then Facebook’s emotion experiment was unethical.
2. Facebook didn’t get the informed consent of users.
3. Therefore, Facebook’s emotion experiment was unethical.
The study’s authors say (2) is false. Users gave informed consent when they signed up for Facebook. To create an account, users must accept Facebook’s Data Use Policy. As of July 2014, the policy (which, at 9,000+ words, is longer than the U.S. Constitution with all 27 amendments) states:
> We may use information we receive about you…for internal operations, including troubleshooting, data analysis, testing, research, and service improvement. [emphasis added]
Forbes reports that the policy didn’t mention “research” until May 2012, months after the study took place. Further, critics contend that agreeing to the updated policy wouldn’t constitute informed consent as defined in the Common Rule for research on human subjects. For instance, the policy doesn’t describe “any reasonably foreseeable risks or discomforts to the subject.” Defenders of the study tend to accept that the Common Rule applies to Facebook’s emotion experiment. If it does apply, then (2) looks to be true. Informed consent wasn’t obtained.
Defenders of the study reply instead that (1) is false: informed consent wasn’t needed. The experiment, they say, met the Common Rule’s criteria for forgoing or altering informed consent. Chief among these criteria is that the research pose “no more than minimal risk to the subjects.”
The minimal risk defense starts from the observation that Facebook manipulates the news feeds of all its users all the time. The study’s authors write:
> Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging.
The researchers claim their study’s news feed filter tested “whether posts with emotional content are more engaging.” The minimal risk defense maintains this test posed no more risk to users than the experiments they undergo regularly.
Suppose, for the sake of argument, that these routine experiments aren’t risky. Critics object that the emotion experiment was different: manipulating users’ emotions is “creepy.”
The reply to this objection is a two-point “business as usual” defense. (Credit for the name goes to Sebastian Deterding.)
- Facebook’s emotion experiment used A/B testing, a routine form of user experience research. This testing helps sites determine how to better satisfy visitors (which includes making us look, click, buy, read, or post more).
- Content that potentially—even intentionally—manipulates our emotions is ubiquitous online and elsewhere. A familiar example (cited by Michelle N. Meyer in Wired) is the Sarah McLachlan animal cruelty video.
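To make the first point concrete, here is a minimal sketch of what an A/B test looks like in code. This is purely illustrative and assumes nothing about Facebook’s actual systems: users are randomly split into a control group and a treatment group (say, one that sees a filtered feed), and an engagement metric is then compared across the two groups. All names and data below are hypothetical.

```python
import random
from statistics import mean

def assign_groups(user_ids, treatment_fraction=0.5, seed=42):
    """Randomly split users into control and treatment groups."""
    rng = random.Random(seed)
    treatment, control = [], []
    for uid in user_ids:
        (treatment if rng.random() < treatment_fraction else control).append(uid)
    return control, treatment

def average_engagement(scores_by_user, group):
    """Mean engagement score (e.g., number of posts written) for a group."""
    return mean(scores_by_user[uid] for uid in group)

# Toy data: engagement scores for ten users (purely illustrative).
scores = {uid: s for uid, s in enumerate([3, 5, 2, 4, 6, 1, 4, 3, 5, 2])}
control, treatment = assign_groups(list(scores))

# The "lift" is the difference in average engagement between groups;
# a real test would also check statistical significance.
lift = average_engagement(scores, treatment) - average_engagement(scores, control)
print(f"control={len(control)} treatment={len(treatment)} lift={lift:+.2f}")
```

In practice, sites run many such tests concurrently, which is why defenders describe this kind of experimentation as routine.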
Both points are correct. However, it’s still not clear the emotion experiment was banal or benign.
First, content like the McLachlan video openly targets our emotions. We can recognize its intended effect and choose to disengage. The emotion experiment, by contrast, was conducted covertly. Moreover, the published paper tells us that emotional contagion affects people “without their awareness.”
Second, if targeting people’s emotions is or may become the norm in online user research, that’s not a reason to accept it. It’s a reason to scrutinize it, just as we scrutinize psychological experiments by academic scientists. Even some defenders of the study (e.g., data scientist Brian Keegan) suggest Facebook be more open with users (and scientists) about any future research like it.
That was a brief overview of the informed consent argument against Facebook’s emotion experiment. Going forward, the question seems not to be whether Facebook users gave informed consent. The question is more likely this: Did Facebook need users’ consent, and will it need it again?