After Facebook-Broadcast Killing, the Company Reviews Its Content Reporting System — As It Desperately Needs To


Following this weekend’s horrific news story of a murder broadcast via Facebook, Justin Osofsky, Facebook’s Vice President of Global Operations, released a statement yesterday. He said the killing “goes against [Facebook] policies and everything we stand for.” Though mega-corporations that have monopolies over the way people consume most of their information are inherently ethically compromised (if not just evil), we can agree that murder isn’t generally a company policy. No big news there. But beyond the need to distance the company from the violence that showed up on its platform, Osofsky’s statement raised some larger questions about the unbridled freedoms of new technologies.

On Sunday, a man named Steve Stephens seems to have randomly selected a person — 74-year-old Robert Godwin Sr., a retired foundry worker, father of 10, and grandfather of 14 who’d just left an Easter celebration with his family — whom he saw walking down a sidewalk in the Cleveland, OH area. He approached him, asked him to say the name of a woman connected to Stephens, then shot him dead, recording it all. Stephens killed himself today, CNN reports, after a “brief police chase.”

Originally, reports said Stephens’ video had been broadcast live via Facebook, but, according to the New York Times, Facebook countered that while the post had indeed been put on its site by the perpetrator, it was not live. (He’d made other Facebook Live videos that day, however.)

As CNN notes, after the videos had already spread, Ryan Godwin, a grandson of the victim, wrote on Twitter (where the video had also spread), “Please please please stop retweeting that video and report anyone who has posted it! That is my grandfather show some respect.”

Facebook ultimately deactivated Stephens’ account Sunday afternoon and deleted the videos he’d made, in which he’d also declared, “I’m at the point where I snapped,” and blamed his mother and an ex-girlfriend for the mental state that apparently led to his ruthless, random killing.

Osofsky said in his statement (h/t The Wrap, where you can read it in full):

As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible. In this case, we did not receive a report about the first video, and we only received a report about the second video — containing the shooting — more than an hour and 45 minutes after it was posted. We received reports about the third video, containing the man’s live confession, only after it had ended. We disabled the suspect’s account within 23 minutes of receiving the first report about the murder video, and two hours after receiving a report of any kind. But we know we need to do better… Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being reshared in their entirety. (People are still able to share portions of the videos in order to condemn them or for public awareness, as many news outlets are doing in reporting the story online and on television). We are also working on improving our review processes. Currently, thousands of people around the world review the millions of items that are reported to us every week in more than 40 languages.

Beyond the events of Easter Sunday, Facebook has of course had a complicated history with the sharing of violence. “What happens when one of the largest proponents of live video struggles to manage its darker side?” wondered Jack Morse on Mashable just last month, after a sexual assault video had been shared — just two months after the live-streamed torture of a man in Chicago, which was watched by 16,000 people before it was taken down. (It took the company 30 minutes to remove it, and by then it had already been posted to YouTube.)

After the Chicago incident, Reem Suleiman, a campaigner for the global advocacy group SumofUs (which demanded transparency from Facebook over its takedown process), spoke to the Guardian. “There’s a huge difference between using Facebook to expose violence and corruption and using it to violate, exploit and abuse people,” she said, acknowledging that the sharing of violence has also spread awareness of police brutality. In fact, SumofUs was one of the organizations — alongside the ACLU, Color of Change, and Center for Media Justice — that had written a letter to the company expressing concern over “the recent cases of Facebook censoring human rights documentation, particularly content that depicts police violence.” As the Guardian noted:

The campaign groups referenced the deactivation of Korryn Gaines’ account during a standoff with police, the suspension of live footage from the Dakota Access pipeline protests, the removal of historic photographs such as “napalm girl”, the disabling of Palestinian journalists’ accounts and reports of Black Lives Matter activists’ content being removed.

With Facebook’s push towards video (a push, it should be said, that’s thrown a lot of those old-fashioned word-based news companies for a loop), images of violence have emerged more and more, opening the site up as a platform where hideous violent fantasies can become realities — but also where those fantasies can be condemned. What Facebook needs to hone far better is its ability to differentiate between the types of violence being shared, and the purposes of their circulation. When the company gets in trouble for something awful being shared and not swiftly taken down, it claims it doesn’t want to enforce too much moderation; yet the amalgam of moderation tactics it does have in place has led to the censorship of posts shared for activist purposes.

“The more users are posting every aspect of their lives online, including criminally heinous conduct, the more companies have to take a proactive approach to content moderation rather than relying just on users to flag content for review,” online safety advisory company SSP Blue’s founder Hemanshu Nigam told the New York Times.

Perhaps someday Facebook will be able to differentiate between posts in which killers revel in murdering people for a digital crowd and posts meant to condemn violence — or, for that matter, historic anti-war photographs.