8chan, the extremist message board used by three alleged killers in the past six months to distribute racist and white nationalist manifestos prior to mass shootings, lost the protection of an important security service provider when Cloudflare announced in a blogpost the company would be “terminating 8chan as a client”.
Cloudflare’s decision on Sunday may jeopardize 8chan’s ability to remain on the open web in the near term, because the company has been protecting 8chan from distributed denial of service (DDoS) attacks, and the site is a likely target for internet vigilantes.
It also represents a complete reversal of Cloudflare’s position from less than 24 hours earlier, when the cofounder and chief executive, Matthew Prince, defended his company’s relationship with 8chan as a “moral obligation” in an extensive interview with the Guardian.
“The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths,” Prince wrote in a blogpost announcing the decision. “Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have created an environment that revels in violating its spirit.”
But Prince reiterated his belief that the action by Cloudflare will not actually create more safety or reduce hatred on the internet.
“While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online,” he wrote. “It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so disenchanted they turn to hate. In taking this action we’ve solved our own problem, but we haven’t solved the Internet’s.”
The controversy over Cloudflare, which played a role in keeping 8chan online but neither hosted nor promoted its content, is just the latest example of the struggles faced by US technology companies to address the spread of potentially dangerous content that is nevertheless legal.
For decades, American technology companies maintained a laissez-faire approach toward user-generated content, even as social media companies including Facebook and YouTube crafted algorithms that amplified bigoted speech to massive audiences. In recent years, as evidence has mounted that online incitement can lead to real-world violence, from Myanmar and Charlottesville to Christchurch and El Paso, pressure has mounted on the various companies to reconsider their approaches.