On March 15, 2019, a heavily armed white supremacist named Brenton Tarrant walked into two separate mosques in Christchurch, New Zealand, and opened fire, killing 51 Muslim worshipers and wounding dozens more. Close to 20 minutes of the carnage from one of the attacks was livestreamed on Facebook, and when the company tried taking it down, more than 1 million copies cropped up in its place.
While the company was able to quickly remove or automatically block hundreds of thousands of copies of the horrific video, it was clear that Facebook had a serious issue on its hands: Shootings aren't going anywhere, and neither are livestreams. In fact, up until that point, Facebook Live had a bit of a reputation as a place where you could catch streams of violence, including some killings.
Christchurch was different.
An internal document detailing Facebook's response to the Christchurch massacre, dated June 27, 2019, describes the steps taken by the task force the company created in the tragedy's wake to address users livestreaming violent acts. It illuminates the failures of the company's reporting and detection systems before the shooting began, how much Facebook changed about those systems in response, and how much further they still have to go.
The 22-page document was made public as part of a growing trove of internal Facebook research, memos, employee comments, and more captured by Frances Haugen, a former employee who filed a whistleblower complaint against Facebook with the Securities and Exchange Commission. Hundreds of documents have been released by Haugen's legal team to select members of the press, including Gizmodo, with countless more expected to arrive over the coming weeks.
Facebook relies heavily on artificial intelligence to moderate its sprawling global platform, in addition to the tens of thousands of human moderators who have historically been subjected to traumatizing content. However, as the Wall Street Journal recently reported, additional documents released by Haugen and her legal team show that even Facebook's own engineers doubt AI's ability to adequately moderate harmful content.
Facebook did not immediately respond to our request for comment.
You could say that the company's failures started the moment the shooting did. "We did not proactively detect this video as potentially violating," the authors write, adding that the livestream scored relatively low on the classifier Facebook's algorithms use to pinpoint graphically violent content. "Also no user reported this video until it had been on the platform for 29 minutes," they added, noting that even after the original was taken down, there were 1.5 million copies to deal with in the span of 24 hours.
Further, its systems were apparently only able to detect violent violations of its terms of service "after 5 minutes of broadcast," according to the document. Five minutes is far too slow, especially when you're dealing with a mass shooter who begins filming as soon as the violence starts, the way Tarrant did. To bring that number down, Facebook needed to train its algorithm, and training any algorithm requires data. There was just one gruesome problem: there weren't many livestreamed shootings to pull that data from.
The solution, according to the document, was to assemble what sounds like one of the darkest datasets known to man: a compilation of police and bodycam footage, "recreational shootings and simulations," and assorted "videos from the military" acquired through the company's partnerships with law enforcement. The result was "First Person Shooter (FPS)" detection and improvements to a tool called XrayOC, which together enabled the company to flag footage from a livestreamed shooting as obviously violent in about 12 seconds. Sure, 12 seconds isn't perfect, but it's a profound improvement over 5 minutes.
The company added other practical fixes, too. Instead of requiring users to jump through multiple hoops to report "violence or terrorism" happening on a stream, Facebook figured it might be better to let users report it in one click. It also added an internal "Terrorism" tag to better keep track of these videos once they were reported.
Next on the list of "things Facebook probably should have had in place way before broadcasting a massacre," the company put some restrictions on who was allowed to go Live at all. Before Tarrant, the only way to get banned from livestreaming was to violate a platform rule while livestreaming. As the research points out, an account internally flagged as, say, a potential terrorist "wouldn't be limited" from livestreaming on Facebook under those rules. After Christchurch, that changed: the company rolled out a "one-strike" policy that would keep anyone caught posting particularly egregious content from using Facebook Live for 30 days. Facebook's "egregious" umbrella includes terrorism, which would have applied to Tarrant.
Of course, content moderation is a dirty, imperfect job carried out, in part, by algorithms that, in Facebook's case, are often just as flawed as the company that made them. These systems didn't flag the shooting of retired police chief David Dorn when it was caught on Facebook Live last year, nor did they catch a man who livestreamed his girlfriend's shooting just a few months later. And while the hours-long bomb threat that a far-right extremist livestreamed on the platform this past August wasn't as explicitly horrific as either of those examples, it was still a literal bomb threat that was able to stream for hours.
Regarding the bomb threat, a Facebook spokesperson told Gizmodo: "At the time, we were in contact with law enforcement and removed the suspect's videos and profile from Facebook and Instagram. Our teams worked to identify, remove, and block any other instances of the suspect's videos which do not condemn, neutrally discuss the incident or provide neutral news coverage of the issue."
Still, it's clear the Christchurch disaster had a lasting effect on the company. "Since this event, we've faced international media pressure and have seen legal and regulatory risks on Facebook increase considerably," the document reads. And that's an understatement: thanks to a new Australian law hastily passed in the wake of the shooting, Facebook's executives could face steep fines (not to mention jail time) if they were found to have allowed livestreamed acts of violence like the shooting on their platform again.
This story is based on Frances Haugen's disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including Gizmodo, the New York Times, Politico, the Atlantic, Wired, the Verge, CNN, and dozens of other outlets.