The Trouble of Conducting “Experiments” in Social Media

A Caveat on What You See and Post on Facebook: The Perils of Unsupervised Studies on Human Emotion

Does the article recently published by Kramer et al. (2014), “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks”, present a breach of human-study ethics? Here is the link to the full text:
http://goo.gl/2vafXH

The purpose of the study was to test whether exposure to emotions led people to change their own posting behavior. In particular, the investigators were interested in whether exposure to emotional content led people to post content consistent with that exposure, thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion.

For this, the investigators set up two parallel experiments, one for positive and one for negative emotion: in the first, exposure to friends’ positive emotional content in the News Feed was reduced; in the second, exposure to negative emotional content was reduced. Thus, when a person loaded their News Feed, each post containing emotional content of the relevant valence had between a 10% and 90% chance (based on their User ID) of being omitted from that specific viewing. Posts were classified using third-party software, LIWC (Linguistic Inquiry and Word Count), so that the researchers remained blind and impartial, never seeing the actual words in the posts. This was perhaps done with a view to maintaining confidentiality, but note that reliance on this software might itself compromise the ethics of the research: LIWC is proprietary, so no one outside can verify how the classification was actually done.
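To make the mechanism concrete, here is a minimal sketch in Python of how such a deterministic, per-user omission scheme could work. This is an illustration under assumptions, not the authors’ actual code: the word lists stand in for LIWC’s proprietary dictionaries, and the hashing scheme for mapping a User ID to an omission probability is hypothetical; the paper states only that each emotional post had a 10–90% omission chance keyed to the User ID, redrawn on every feed load.

```python
import hashlib
import random

# Toy word lists standing in for LIWC's proprietary dictionaries (assumption).
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def post_valence(text):
    """Classify a post as 'positive', 'negative', or None (neutral)."""
    words = set(text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return None

def omission_probability(user_id, low=0.10, high=0.90):
    """Deterministically map a User ID to a probability in [low, high].
    The hashing scheme here is a guess; the paper says only that the
    chance was 'based on their User ID'."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return low + fraction * (high - low)

def keep_post(user_id, text, reduced_valence, rng):
    """Decide whether a post appears in one specific News Feed load.
    Only posts of the targeted valence are at risk of omission, and the
    random draw is repeated on every viewing, as the paper describes."""
    if post_valence(text) != reduced_valence:
        return True
    return rng.random() >= omission_probability(user_id)

# Example: a user in the positivity-reduced condition loads their feed.
rng = random.Random(0)
print(keep_post(689003, "Feeling great today!", "positive", rng))
```

The key design point is that omission is probabilistic per viewing but the probability itself is fixed per user, so each participant experiences a consistent dose of the manipulation across sessions.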

The authors stated the following assumptions:

  1. “Emotional contagion occurs via text-based computer-mediated communication”
  2. “Contagion of psychological and physiological qualities has been suggested based on correlational data for social networks generally”
  3. “People’s emotional expressions on Facebook predict friends’ emotional expressions, even days later”

These are major, relevant issues. Many people engage with Facebook on a daily basis; others less so. The results would depend on who was sampled for the study. Clearly, people derive their emotional inputs from a variety of sources, not just Facebook feeds but also other social-network interactions and face-to-face interactions.

So, they decided to manipulate what people would see in their Facebook feeds. As they wrote, “… News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging”.

One wonders who would have approved such a study, given the evidence on emotional contagion in social networks that Christakis et al. provided some years ago. I could not find that information in the article.

I’d say this is where it gets a little problematic with respect to the normative ethics of anonymity, beneficence, and confidentiality. Clearly, Facebook’s engineers played with the confidentiality of their users.

Here’s why. In the absence of any “algorithmic manipulation”, you would expect everything posted by those you subscribe to to flow into your News Feed each time you open your Facebook page. It is obvious that this does not happen. There are hidden filters and algorithms, tracking your browsing patterns, that none of us perhaps knows about; these end up pushing advertisements that we may or may not like. It is one thing to give implicit consent to usability testing meant to improve the user “experience” of interacting with the medium; it is quite another to play with someone’s emotional state. Given that our emotional states were measured through our emotional outputs (via machine classification) and that a “contagion” effect does occur (in addition to whatever interactional effects there may be), people’s emotions were undeniably “played with”. It is therefore impossible to rule out emotional harm to people, however brief the period or few the instances. Note where the researchers state,
“… emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks” (p. 1), and this,
“[the researchers] manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed”.

Granted, the researchers were blinded to who the users were, but what is missing here is the vexing question: was this done with the users’ explicit permission?

In other words, this intentional yet surreptitious manipulation of the emotional content of the News Feed, carried out in the knowledge that contagion effects may exist and with the aim of characterizing them, without the explicit permission of the users themselves, runs contrary to what most ethics approval committees in the world would accept.

Here’s what they found:

Figure (screenshot omitted): Graphical summary of the effect of seeing positive and negative emotional content in your Facebook News Feed on your own emotions and posting output.

The researchers further assert,
“These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks, and providing support for previously contested claims that emotions spread via contagion through a network”

and that,

“[Because] News Feed content is not directed toward anyone, contagion could not be just the result of some specific interaction with a happy or sad partner”.

Note that, “… If affective states are contagious via verbal expressions on Facebook (our operationalization of emotional contagion), people in the positivity-reduced condition should be less positive compared with their control, and people in the negativity-reduced condition should be less negative”.

The study was robust and experimental, and it certainly provided evidence for the initial assertion about the contagion effects of Facebook posts, and indeed of any text-based post. Even so, it raises the ugly question of whether it is kosher for a study untempered by ethics approval to be conducted and reported in the public domain. How were protections against potential emotional harm ensured? Were the people involved in the study told the purpose of their participation?

This is particularly relevant because the connection between emotions and physical well-being suggests the importance of these findings for public health. Online messages influence our experience of emotions, which may affect a variety of offline behaviors. And after all, an effect size of d = 0.001 at Facebook’s scale is not negligible: in early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day. People are known to have committed suicide after being bullied or maltreated in Facebook posts; such effects are, at the very least, not fads.
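To see why such a small effect size still matters, here is a back-of-envelope sketch in Python. The effect size d = 0.001 is from the paper; the daily volume of status updates and the standard deviation of emotional-word counts per post are illustrative assumptions, not figures from the study.

```python
# Back-of-envelope: how a tiny standardized effect scales up.
d = 0.001                     # Cohen's d reported by Kramer et al. (2014)
sd_emotion_words = 1.0        # assumed SD of emotional-word count per post (illustrative)
daily_status_updates = 400e6  # assumed status updates per day in early 2013 (illustrative)

# The mean shift per post, in emotional words, is d * SD.
shift_per_post = d * sd_emotion_words

# Aggregated over one day of status updates.
daily_shift = shift_per_post * daily_status_updates
print(f"~{daily_shift:,.0f} emotion expressions shifted per day")
# With these assumed numbers: ~400,000 per day, i.e. the "hundreds of
# thousands" order of magnitude the authors describe.
```

The point is not the exact figure, which depends entirely on the assumed inputs, but that a per-person effect invisible to any individual becomes substantial when multiplied across a platform of this size.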

At the end of the day, this is a brilliant and landmark study that will be cited many times in the future. And therein lies the trouble: no ethics approval was either sought or reported. And it mattered.

 
