
It has been well over one year now since Facebook enabled its almost two billion users to stream live video. During the roll-out, Facebook unabashedly encouraged its users to embrace the opportunity to “create, share and discover live videos.” Unlike Twitter, which required that users download the Periscope app separately before they could livestream, Facebook offered a fully integrated streaming functionality. (Twitter has since been working to eliminate its extra roadblock.)

Using Facebook Live is as easy as pushing the “Live” icon on the Facebook app. First-time users are greeted by a short set of basic instructions – which they can skip – explaining how to get started, the workings of the view counter and how to interact with viewers. Other than a cheerful reminder that reads, “Remember: They can see and hear you!” nothing alerts users to the ethical minefield that can unfold when they livestream video. Instead, the sign-off message reads, “Try it! Just relax, go live and share what’s happening.” What could go wrong?

When livestreaming apps such as Periscope and Meerkat first burst onto the scene a couple of years ago, journalism professionals embraced their potential but also engaged in thorough debate about the ethical pitfalls of these apps. Professionals trained and experienced in the moral questions presented by broadcasting live footage to large audiences saw the need to examine the potential harm posed by this technology. Yet Facebook developers trust teenagers to figure out the harms on their own, through sometimes costly trial and error.

According to Mark Zuckerberg, Facebook Live marked a “shift in how we communicate, and it's going to create new opportunities for people to come together.” There is no doubt that Facebook Live has done exactly that, as it has produced its predictable parade of viral stars and shareable content. But Facebook Live has also been used to broadcast murder, torture, rape, beatings and other violent content, presenting some serious ethical concerns.

My point is not that the technology caused these events, or even enabled them. That type of dead-end ethical analysis is highly speculative and amounts to blaming technology for heinous acts committed by individuals. But the ethical analysis does not end there. As the platform on which these videos are posted, Facebook aids in distributing this upsetting content, especially because the content remains available after the livestream has ended if a user chooses to post it. (On Instagram, by comparison, live videos used to disappear once the recording ended.) This is a choice that was made by the developers at Facebook, and it’s one that carries moral weight, as it gives these acts (and their actors) a notoriety they would otherwise lack. While the availability of this disturbing content raises a smorgasbord of ethical concerns for its creators, hosts, moderators, audiences and subjects, I want to narrow the focus here to one particularly troublesome type of content: suicides that are livestreamed.

The Livestreamed Suicides

In recent months, several people have broadcast their suicides on livestreaming services.

  • On April 4, 2017, a 24-year-old college student jumped off a 19-story hotel in Mumbai, India.
  • A 14-year-old Miami teen hanged herself last January from her shower door.
  • One day later, an aspiring 33-year-old actor shot himself in his car in Los Angeles.
  • A 12-year-old girl from Georgia hanged herself from a tree in her front yard. (This was streamed on live.me, not Facebook Live.)
  • A 49-year-old Alabama man shot himself in the head on April 25.

As with the other incidents described above, we cannot assess what role the presence of livestreaming technology played in the tragic decisions these people made. Experts warn against attributing suicide to a single cause. Even if we could somehow demonstrate that the existence of a livestream functioned as a trigger in one case, there might be separate instances in which livestreaming allowed others to see the cries for help and intervene.

Facebook has taken some laudable initiatives regarding this issue. It has an ongoing partnership with reputable suicide prevention programs that work on identifying and reaching out to users displaying suicidal thoughts. It is even contemplating the use of artificial intelligence and pattern recognition to flag content indicative of suicidal tendencies. In the wake of the recent suicides, the social network announced it would extend these measures to its livestreaming function. However, this was not an unforeseeable problem, and one can’t help but wonder why it took a number of people taking their lives before Facebook would take this step.

Compounding the ethical quagmire is the fact that Facebook tends to be slow in removing these types of videos. It took Facebook two hours to remove the video of “Facebook killer” Steve Stephens murdering Robert Godwin. (Contrary to some initial reports, this video was not livestreamed; it was uploaded after the fact.) When the suicide video of the 12-year-old from Georgia started making its rounds on Facebook, the company denied initial requests to remove it, according to a BuzzFeed report. Kyle MacDonald, a New Zealand-based psychotherapist, got a similarly sluggish response when he requested removal of links to the suicide video. In the opinion pages of The Guardian, he took Facebook to task: “Facebook also claimed that because it is not hosting the video, it is not responsible,” he wrote. “This is despite the fact that due to its inaction the links were widely available on Facebook for anyone to see long after I reported the problem. It has not been verified that the video is authentic but whether it is or it isn’t, the content of the video shows a child committing the most serious act of self harm and is not appropriate for public viewing.” According to the New York Daily News, the video of the Alabama man committing suicide stayed up for two hours and generated more than 1,000 views. A recent video of a Thai man killing his 11-month-old daughter before taking his own life stayed up for 24 hours. Because of livestreaming, Facebook has at times become a platform for the kind of snuff movies usually confined to the dark recesses of the internet.

Most journalism organizations won’t report on individual suicides unless they are newsworthy, and when they do, journalists follow a set of guidelines developed by experts in the field of suicide prevention and reporting. These guidelines stipulate that the method used by the suicide victim should not be disclosed, that the word “suicide” should not be used in headlines about individual suicides and that coverage should not be prominent or extensive. While one arguably could find examples where these guidelines are not followed, most responsible news organizations tend to abide by them.

Why? Because experts have established that suicide is contagious, in a sense – one suicide can prompt others to harm themselves. Irresponsible media coverage is one of the contributing factors to this so-called contagion. While I am no expert in the subject matter, graphic and realistic depictions of peers committing suicide seem to combine all the elements that experts agree should be avoided. They present the act as a way out for a troubled person with whom viewers might identify, they generate considerable media attention, they show the method used in great detail and they lack context. In other words, this content puts people struggling with suicidal inclinations at risk in a very direct and tangible way.

In February, Zuckerberg addressed the problem, claiming that artificial intelligence could eventually help detect troublesome content in the long term. But for the time being, he said, it will be up to the Facebook community to provide a safe environment. This response does not cut it ethically. Facebook and other social media platforms have not caused suicides, but they are responsible for the suicide videos being captured by their technology and distributed across their networks. Moreover, Facebook has not been successful in removing this dangerous content in a timely fashion. This issue cannot be addressed by yet-to-be-developed technology.

Here is what I believe Facebook ought to do:

  1. Even though this would not address the issue of the streamed suicides directly, first-time users of Facebook Live should be required to complete a tutorial in which the ethics of livestreaming are addressed.
  2. Given the young age of some of the suicide victims and the fact that young people in particular are vulnerable to contagion, age limits for the use of Facebook Live should be considered.
  3. There have been numerous examples of people committing suicide on livestream and the videos remaining available for a considerable length of time afterward. These images cause harm. Until a more reliable system is developed to detect this content sooner, it should not be possible to share or post live videos – during the stream or after it has ended – until it has been established, through human or artificial intelligence, that they do not contain images of self-harm (or other disturbing content).

The technological and economic feasibility of these suggestions can be questioned. But the approach taken so far by Facebook – and tech companies in general – has been to release technology first and worry about ethics later. (This approach led Donald Heider, founder of the Center for Digital Ethics & Policy, to argue that Facebook should hire a chief ethicist.) When human lives are at stake, it might be time to reverse this modus operandi.

Bastiaan Vanacker
Program Director

Professor Vanacker is the Program Director for the Center for Digital Ethics and Policy. His work focuses on media ethics and law, and international communication, and he has been published in the Journal of Mass Media Ethics.

He is the author of Global Medium, Local Laws: Regulating Cross-border Cyberhate and the editor of Ethics for a Digital Age.
