Despite the role it may have played in the horrific events in Buffalo, the platform and its owner have not issued any statement. Links to copies of the graphic shooting video and praise for the gunman continue to pop up around the platform. This lack of action reveals a complicated truth about the internet landscape: An online platform that dismisses outside criticism from users and advertisers can host racist hate speech and facilitate user radicalization with few consequences.
In a 180-page document believed to have been written by the suspect, he said he began visiting the online forum site 4chan in 2020, drawing inspiration from racist and hateful threads and weapons forums. He also appears to have hinted at his plans on the site, according to an online diary that has been attributed to him.
4chan did not respond to repeated requests for comment from CNN Business. A direct inquiry sent to 4chan’s current owner, Hiroyuki Nishimura, also went unanswered.
The site — a barebones forum reminiscent of the early internet, where users post anonymously — hosts a variety of communities in which hate speech is tolerated or celebrated. While major platforms like Facebook and Twitter have multifaceted terms of service agreements that lay out prohibited behavior such as hate speech and harassment, 4chan has bucked the trend of social platforms adopting increasingly robust content moderation policies.
Instead, it exists outside of mainstream social media norms. It’s a place where some users discuss everyday news about anime and video games, but it is also a forum where damaging content that would not be allowed on more mainstream platforms has flourished. It is where nude photos of female celebrities have previously been leaked and disseminated, where racism and anti-Semitism are cheered, and where QAnon, the conspiracy cult, originated.
The site lists a series of rules and warns users that “if we reasonably think you haven’t followed these rules, we may (at our own discretion) terminate your access to the site.” But it’s not clear if or how the rules — which prohibit, for example, posting personal information or sharing content that violates US law — are enforced. In some cases, they appear to be ignored: the rules state that racist posts are allowed only on a certain board, for example, yet rampant racism is easily found throughout the site.
Immediately following Saturday’s shooting, some of those same forums on 4chan were used to help disseminate the shooter’s video — which otherwise might only have been viewed by the approximately 20 people who watched the livestream before it was removed by the game streaming site Twitch — as well as writings purportedly attributed to him. Days later, those posts remain online and, in some cases, continue to feature praise of the shooter or support for the conspiracy theories that appear to have motivated him. Links to copies of the graphic video, in which the gunman shoots innocent customers, and to his alleged writings have continued to pop up around the site. Other, similar sites like Gab and Kiwi Farms were also used in the wake of the attack to distribute the video and the alleged shooter’s writings, according to online extremism researcher Ben Decker. In an unsigned email sent in response to a CNN request for comment, Kiwi Farms said it considered the video “safe to host” after it originally aired on Twitch. (Twitch says it removed the video from its site within two minutes of the attack starting.) Gab did not respond to a request for comment.
In the wake of the Buffalo shooting, many of the major social media platforms “did go to significant lengths” to quickly remove content related to the attack, “but there’s a real problem, which is that there are some platforms who are kind of holdouts that ruin it for everyone,” said Tim Squirrell, communications head at the think tank Institute for Strategic Dialogue. “The consequence of that is that you can never complete the game of whack-a-mole. There’s always going to be somewhere circulating [this content],” he said.
Squirrell added that such platforms’ opposition to removing or moderating content is why footage of the 2019 racist mass shooting in Christchurch, New Zealand, “is still available even now, three years later, because you can never stop them all.” In the document believed to be authored by the alleged Buffalo shooter, he described being radicalized by the livestream of that 2019 shooting.
Limits of the law
4chan was created in 2003 — a year before Facebook launched — by a 15-year-old as an online bulletin board where users could post anonymously; it was later sold to Nishimura. Like more mainstream platforms, 4chan is populated by user-generated content. In the United States, platforms that rely on user-generated content are largely shielded from liability for the vast majority of what their users post by a law known as Section 230.
Despite that legal protection, many Big Tech platforms have in recent years ramped up their efforts to moderate and remove certain harmful content — including hate speech and conspiracy theories — in response to pressure from advertisers, as they seek to maintain a broad base of users and to stay in the good graces of lawmakers.
While Big Tech platforms remain far from perfect, those pressures have led to progress. In 2020, for example, Facebook faced a major pressure campaign by dozens of advertisers called #StopHateForProfit over its decision to not take action against incendiary posts by then-President Donald Trump. Within days, Facebook CEO Mark Zuckerberg made new promises to ban hateful ads and label controversial posts from politicians. Many major social media platforms also evolved their policies on misinformation in response to calls from lawmakers and public health officials at the outset of the Covid-19 pandemic.
But for sites like 4chan, which don’t rely on mainstream advertisers and seek to be homes for content prohibited on other platforms, rather than platforms broadly adopted by many users, there are few incentives to remove harmful or dangerous content. In an email to CNN in 2016, 4chan owner Nishimura said he “personally [doesn’t] like sexists and racists … [but] If I like[d] censorship, I would have already [done] that.”
An extreme intervention with historical precedent would be a move by the internet infrastructure companies that allow sites like 4chan to exist. A similar site, 8chan, which was spun out of 4chan several years ago, has struggled to stay online since the internet infrastructure company Cloudflare stopped supporting it in 2019, after authorities said it was used by the alleged gunman in the El Paso Walmart shooting to post white nationalist writings.
4chan is “intentionally sort of this censorship-free platform, but they have cloud providers and other [internet service providers] they rely on to exist,” said Decker, who is also CEO of digital investigations consultancy Memtica. In theory, those internet service providers could say, “we will not allow for this content anywhere, on any entity that uses our tech,” which could force 4chan and sites like it to implement stronger moderation practices.
Still, even that is not a surefire means of reining in such platforms. As the ranks of online platforms dedicated to supporting “free speech” at all costs have grown, internet service providers espousing similar views have also emerged.
One recent example: Parler, the alternative social media platform popular with conservatives, briefly disappeared from the internet in early 2021 after it was booted from Amazon’s cloud service because it was heavily used by supporters of then-US President Donald Trump, some of whom participated in the January 6 Capitol riot. But weeks later, Parler reemerged online with the help of a small web hosting firm called SkySilk, whose chief executive told the New York Times he was helping to support free speech.