How the Buffalo shooting livestream went viral

When a gunman pulled into the parking lot of a supermarket in Buffalo, New York, on Saturday to carry out a racist attack on a Black community, his camera was already rolling.

CNN reports that a livestream on Twitch, recorded from the suspect’s point of view, showed shoppers in the parking lot as the alleged gunman arrived, then followed him inside as he began a rampage that killed 10 people and injured three. Twitch, a platform popular for gaming livestreams, removed the video and suspended the user “less than two minutes after the violence began,” according to Samantha Faught, the company’s head of communications for the Americas. Only 22 people watched the attack unfold online in real time, The Washington Post reports.

But millions saw footage from the livestream afterward. Copies of and links to the reposted video circulated online after the attack, spreading to major platforms such as Twitter and Facebook as well as lesser-known sites such as Streamable, where the video was viewed more than 3 million times, according to The New York Times.

This is not the first time a mass shooter has broadcast his violence live online, only for the footage to spread afterward. In 2019, a gunman attacked mosques in Christchurch, New Zealand, while livestreaming his killings on Facebook. The platform said it deleted 1.5 million videos of the attack in the 24 hours that followed. Three years later, with footage of Buffalo re-uploaded and re-shared days after the deadly attack, platforms are still struggling to stem the tide of violent, racist, and antisemitic content spawned by the original.

Moderating livestreams is especially difficult because events unfold in real time, said Rasty Turek, CEO of Pex, a company that makes content identification tools. Turek, who spoke with The Verge after the Christchurch shooting, says that if Twitch was indeed able to identify and stop the stream within two minutes of the violence starting, that response would be “ridiculously fast.”

“Not only is that not an industry standard, that’s an achievement that was unprecedented compared to many other platforms like Facebook,” Turek said. Faught says Twitch removed the stream mid-broadcast but did not respond to questions about how long the alleged shooter had been broadcasting before the violence began or how Twitch was first alerted to the stream.

With livestreaming now so widely accessible, Turek acknowledges that it’s impossible to drive moderation response times down to zero, and that this is perhaps the wrong way to think about the problem anyway. What matters more is how platforms deal with copies and reuploads of the offending content.

“The challenge is not how many people are watching the live stream,” he says. “The challenge is what happens to that video after that.” In the case of the Buffalo livestream recording, it spread like a contagion: according to The New York Times, Facebook posts linking to the Streamable clip generated more than 43,000 interactions as they lingered for over nine hours.

Major technology companies have built a shared content detection system for exactly these situations. Founded in 2017 by Facebook, Microsoft, Twitter, and YouTube, the Global Internet Forum to Counter Terrorism (GIFCT) was created to prevent the spread of terrorist content online. After the Christchurch attacks, the coalition said it would begin monitoring far-right content and groups online, having previously focused on Islamic extremism. Material related to the Buffalo shooting, such as hashes of the video and of the manifesto the shooter allegedly posted online, was added to the GIFCT database, in theory allowing platforms to catch and remove reposted content automatically.
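The mechanics of that hash-sharing approach are straightforward to sketch. The Python below is a minimal illustration, not GIFCT’s actual API: the function names and the in-memory set are invented for this example, and it uses an exact SHA-256 digest, whereas production systems rely on perceptual hashes (such as Meta’s open-sourced PDQ and TMK+PDQF) that still match when a video has been re-encoded, cropped, or watermarked.

```python
import hashlib

# Illustrative stand-in for a shared hash database like GIFCT's.
# Real systems use perceptual hashes (e.g., PDQ for images, TMK+PDQF
# for video) so re-encoded or cropped copies still match; a SHA-256
# digest only catches byte-identical files and is used for simplicity.
blocklist = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

def register_known_bad(data: bytes) -> None:
    """Hash a confirmed violating file and add it to the shared database."""
    blocklist.add(fingerprint(data))

def should_block(upload: bytes) -> bool:
    """Check a new upload against the shared hash database."""
    return fingerprint(upload) in blocklist

# Once moderators confirm one copy, byte-identical reuploads are caught
# automatically; altered copies defeat exact hashing, which is why
# platforms use perceptual hashing in practice.
register_known_bad(b"...confirmed violating video bytes...")
print(should_block(b"...confirmed violating video bytes..."))  # True
print(should_block(b"...re-encoded copy..."))                  # False
```

The gap that last line exposes, an edited copy sailing past an exact-match filter, is part of why the cut, spliced, and remixed variants described below are so hard to stamp out.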

But even with GIFCT serving as the central response in moments of crisis, implementation remains a problem, Turek says. While the coordinated effort is admirable, not every company participates, and enforcement among those that do is uneven.

“You have a lot of these smaller companies that essentially don’t have the resources [for content moderation] and it doesn’t matter,” Turek says. “They don’t have to.”

Twitch says it caught the stream comparatively early (the Christchurch shooter broadcast for 17 minutes on Facebook) and says it is monitoring for restreams. But Streamable’s slow response meant that by the time the reposted video was taken down, millions had watched the clip and a link to it had been shared hundreds of times across Facebook and Twitter, according to The New York Times. Hopin, the company that owns Streamable, did not respond to The Verge’s request for comment.

And while the Streamable link has been removed, clips and screenshots of the recording remain easy to find on other platforms where it has been re-uploaded, such as Facebook, TikTok, and Twitter. Those platforms have had to scramble to remove and suppress the re-shared versions of the video.

Content filmed by the Buffalo shooter has been removed from YouTube, said Jack Malon, a company spokesperson. Malon says the platform is also “prominently surfacing videos from authoritative sources in search and recommendations.” Search results on the platform return news segments and official press conferences, making any reuploads that slip through harder to find.

Twitter is “removing videos and media related to the incident,” said a company spokesperson, who declined to be named for security reasons. TikTok did not respond to multiple requests for comment. But days after the shooting, portions of the video that users re-uploaded to both Twitter and TikTok remained live.

Meta spokesperson Erica Sackin says multiple versions of the suspect’s video and screed are being added to a database to help Facebook detect and remove the content, and links to external platforms hosting it are being permanently blocked.

But clips that appeared to come from the livestream were still circulating well into the week. On Monday afternoon, The Verge viewed a Facebook post containing two clips from the alleged livestream: one in which the attacker drives into the parking lot talking to himself, and another in which he points a gun at a person cowering in the store, who screams in fear. The gunman mutters an apology before moving on, and a caption on the clip suggests the victim was spared because they were white. Sackin confirmed the content violated Facebook’s policy, and the post was removed shortly after The Verge asked about it.

As it has made its way around the web, the original clip has been cut, spliced, remixed, partially censored, and otherwise edited, and its widespread reach means it will probably never disappear entirely.

Recognizing that reality and figuring out how to move forward is essential, says Maria Y. Rodriguez, an assistant professor at the University at Buffalo School of Social Work. Rodriguez, who studies social media and its effects on communities of color, says balancing moderation with the preservation of free speech takes discipline from platforms, not just around Buffalo content but in the day-to-day decisions they make.

“Platforms need some regulatory support that can provide some parameters,” Rodriguez says. Standards are needed for how platforms detect violent content and for the moderation tools they use to surface harmful material, she says.

Certain platform practices can minimize harm to the public, Rodriguez says, such as sensitive content filters that let users choose whether to view potentially disturbing material or simply scroll past it. But hate crimes are not new, and similar attacks are likely to happen again. Moderation, done effectively, can limit the spread of violent material, but the question of what to do with the perpetrators is what keeps Rodriguez up at night.

“What do we do with him and other people like him?” she says. “What do we do with the content creators?”
