It’s now a familiar Hollywood trope: a politician is blackmailed by terrorists who threaten to post a video of a decapitation, or some other act of violence against a victim, on social media, usually YouTube.
Violence captured and shared on social media tends to go viral quickly and is difficult for the platforms that host user-generated content to contain. Today, Facebook is increasingly feeling the heat, most recently when Steve Stephens, a Cleveland native, posted a video of himself shooting and killing an innocent victim, Robert Godwin Sr.; the video was viewed over 1.6 million times before Facebook pulled it more than two hours later.
Last month, a gang rape in Chicago was streamed on Facebook Live. In January, a similar incident in Sweden was also streamed live. The torture of a man with disabilities, child abuse, and suicides have all been streamed on Facebook and its subsidiary Instagram.
Facebook’s typical response to these events involves taking down the content as quickly as possible, emphasizing that the company does not condone it, and promising to do better.
The bulk of Facebook’s responses have focused on improving its internal operations and technology to shorten the window from when content is uploaded, to when it is reported, to when it is taken down. Facebook has also begun exploring artificial intelligence to prevent questionable content from being shared in the first place.
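To make that pipeline concrete, here is a minimal sketch in Python of how an upload-screen-review-takedown flow might be wired together. Everything here is a hypothetical illustration: the `score_violence` classifier, the `REVIEW_THRESHOLD` cutoff, and the routing logic are assumptions for the sake of the example, not Facebook’s actual systems.

```python
# Hypothetical moderation pipeline: upload -> automated screening ->
# human review or takedown. All names and thresholds are illustrative.
import time
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.8  # assumed cutoff for routing to human review


@dataclass
class Upload:
    video_id: str
    uploaded_at: float = field(default_factory=time.time)


def score_violence(upload: Upload) -> float:
    """Stand-in for a learned classifier; returns a risk score in [0, 1]."""
    return 0.9  # stub value for illustration


def moderate(upload: Upload) -> None:
    score = score_violence(upload)
    if score >= REVIEW_THRESHOLD:
        # In practice this would enqueue the video for human review
        # rather than removing it outright, to limit false positives.
        actioned_at = time.time()
        print(f"{upload.video_id}: flagged (score={score:.2f}), "
              f"time-to-action={actioned_at - upload.uploaded_at:.1f}s")
    else:
        print(f"{upload.video_id}: published (score={score:.2f})")


moderate(Upload("example-clip"))
```

The key metric such a pipeline optimizes is exactly the one described above: the elapsed time between upload and action, which in the Stephens case stretched past two hours.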
Yet the challenge of dealing with violent content on social media is not new. YouTube has similarly hosted disturbing videos, including ones in which suspects discuss their intentions to carry out mass shootings. The Syrian Civil War has also led to footage of mass violence being uploaded to YouTube.
Here is a timeline of some key violent events on social media from the past decade:
While there is almost no way to capture a complete picture of all violent events on social media, it is clear that with the launch of Facebook Live, the violence has become more real-time and perhaps more varied. Before Facebook Live launched, most videos of violence were tied to international crises, with different interest groups using YouTube as a channel for propaganda. The videos of police violence against African Americans in 2015 also show how video sharing has changed between then and now: most of those videos were released well after the events occurred, and their dissemination was still controlled by gatekeepers such as news media and police departments. Perhaps because of this, most of the criticism leveled at YouTube has centered on the difficulty of crafting a policy that filters out inflammatory content while still protecting freedom of speech.
In contrast, today’s violent content is easily controlled and disseminated by the perpetrators themselves. This shift is largely thought to be driven by the lure of attention: “The attention from online peers, combined with immediate feedback in the form of comments, reactions and shares, can be intoxicating. The fact that the footage is self-incriminating doesn’t matter to some offenders,” as the Guardian puts it.
Yet it’s important to consider whether all violence on social media should be banned. The timeline above includes video content that has been critical in raising public awareness of issues such as police brutality and mass violence in the Syrian Civil War. The societal importance of public access to such content cannot be overstated.
Where does that leave users? Unless social media companies develop more automated ways to identify violence that is purely criminal and has no societal benefit, violent events will likely keep appearing on Facebook Live and elsewhere. Understandably, “societal benefit” is a tricky line to draw, and drawing it would require a strong hand of company-driven curation, which companies such as Facebook and Google have historically been reluctant to exercise.
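For illustration, here is one hedged sketch of what that distinction could look like in code: two assumed model scores, one for violence and one for likely newsworthiness, combined into a routing decision. Both signals, the thresholds, and the `route` function are hypothetical; no platform has disclosed a mechanism like this.

```python
# Hypothetical two-signal routing: separate the question "is this
# violent?" from "does this have documentary value?". Scores are
# assumed outputs of learned models, each in [0, 1].

def route(violence_score: float, newsworthiness_score: float) -> str:
    """Return a routing decision based on two assumed model scores."""
    if violence_score < 0.5:
        return "publish"
    if newsworthiness_score > 0.7:
        # Violent but plausibly documentary (e.g. war footage or
        # evidence of police brutality): keep available behind a
        # warning rather than removing it.
        return "publish_with_warning"
    # Violent with no apparent societal benefit: escalate to humans.
    return "human_review"


print(route(violence_score=0.9, newsworthiness_score=0.2))  # human_review
print(route(violence_score=0.9, newsworthiness_score=0.8))  # publish_with_warning
```

The hard part, of course, is not the routing logic but the second score: deciding what counts as newsworthy is precisely the curation judgment these companies have avoided making.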