Why more platforms need to close the stochastic terrorism loophole
Today let’s talk about Kiwi Farms, Cloudflare, and whether infrastructure providers ought to take more responsibility for content moderation than they have generally taken.
Kiwi Farms is a nearly 10-year-old web forum, founded by a former administrator for the popular QAnon wasteland 8chan, that has become notorious for waging online harassment campaigns against LGBT people, women, and others. It came to popular attention in recent weeks after a well-known Twitch creator named Clara Sorrenti spoke out against the recent wave of anti-trans legislation in the United States, leading to terrifying threats and violence against her by people who organized on Kiwi Farms.
Ben Collins and Kat Tenbarge wrote about the situation at NBC:
Sorrenti, known to fans of her streaming channel as “Keffals,” says that when her front door opened on Aug. 5 the first thing she saw was a police officer’s gun pointed at her face. It was just the beginning of a weekslong campaign of stalking, threats and violence against Sorrenti that ended up making her flee the country.
Police say Sorrenti’s home in London, Ontario, had been swatted after someone impersonated her in an email and said she was planning to perpetrate a mass shooting outside of London’s City Hall. After Sorrenti was arrested, questioned and released, the London police chief vowed to investigate and find who made the threat. Those police were eventually doxxed on Kiwi Farms and threatened. The people who threatened and harassed Sorrenti, her family and police officers investigating her case have not been identified.
In response to the harassment, Sorrenti began a campaign to pressure Cloudflare into no longer providing its security services to Kiwi Farms. Thanks to her popularity on Twitch, and the urgency of the issue, #DropKiwiFarms and #CloudflareProtectsTerrorists both trended on Twitter. And the question became what Cloudflare — a company that has been famously resistant to intervening in matters of content moderation — would do about it.
Most casual web surfers may be unaware of Cloudflare’s existence. But the company’s offerings are essential to the functioning of the internet. And it provided at least three services that have been invaluable to Kiwi Farms.
First, Cloudflare made Kiwi Farms faster and thus easier to use, by generating thousands of copies of it and storing them at end points around the world, where they could be more quickly delivered to end users. Second, it protected Kiwi Farms from distributed denial-of-service (DDoS) attacks, which can crash sites by overwhelming them with bot traffic. And third, as Alex Stamos points out here, it hid the identity of the site's web hosting company, preventing people from pressuring that provider to take action against it.
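The first and third of those services flow from the same architecture: a caching reverse proxy sits between users and the origin server, serving stored copies of pages (faster delivery) while exposing only its own address, so clients never learn where the site is actually hosted. Here is a minimal simulation of that idea — all class names and IP addresses are illustrative placeholders, not Cloudflare's actual design:

```python
# Toy model of a caching reverse proxy, the role an edge network plays.
# Addresses come from the IETF documentation ranges and are purely illustrative.

class Origin:
    """The real web host, whose address the proxy conceals."""
    address = "192.0.2.10"  # never revealed to clients

    def fetch(self, path: str) -> str:
        return f"content of {path}"


class ReverseProxy:
    """Serves cached copies from the 'edge' and hides the origin's address."""
    address = "198.51.100.1"  # the only address clients ever see

    def __init__(self, origin: Origin):
        self._origin = origin
        self._cache: dict[str, str] = {}

    def get(self, path: str) -> tuple[str, str]:
        # On a cache miss, fetch from the origin once and store the copy;
        # later requests are answered without touching the origin at all.
        if path not in self._cache:
            self._cache[path] = self._origin.fetch(path)
        # The response appears to come from the proxy, not the origin.
        return self.address, self._cache[path]


proxy = ReverseProxy(Origin())
addr, body = proxy.get("/index.html")
print(addr)  # prints the proxy's address; the origin's IP stays hidden
```

This is also why the pressure campaign targeted Cloudflare rather than the host: as long as the proxy sits in front, the hosting provider's identity is invisible to the public.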
Cloudflare knew it was doing all this, of course, and it has endeavored to make principled arguments for doing so. Twice before in its history, it has confronted related high-profile controversies in moderation — once in 2017, when it turned off protection for the neo-Nazi site the Daily Stormer, and again in 2019, when it did the same for 8chan. In both cases, the company took pains to describe the decisions as “dangerous” — warning that it would create more pressure on infrastructure providers to shut down other websites, a situation that would likely disproportionately hurt marginalized groups.
Last week, as pressure on the company to do something about Kiwi Farms grew, Cloudflare echoed that sentiment in a blog post. (One that did not mention Kiwi Farms by name.) Here are CEO Matthew Prince and head of public policy Alissa Starzak:
“Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.”
It’s admirable that Cloudflare has been so principled in developing its policies and articulating the rationale behind them. And I share the company’s basic view of the content moderation technology stack: that the closer you get to hosting, recommending, and otherwise driving attention to content, the more responsibility you have for removing harmful material. Conversely, the further you get from hosting and recommending, the more reluctant you should be to intervene.
The logic is that it is the people hosting and recommending who are most directly responsible for the content being consumed, and who have the most context on what the content is and why it might (or might not) be a problem. Generally speaking, you don’t want Comcast deciding what belongs on Instagram.
Cloudflare also argues that we should pass laws to dictate what content should be removed, since laws emerge from a more democratic process and thus have more legitimacy. I’m less sympathetic to the company on that front: I like the idea of making content moderation decisions more accountable to the public, but I generally don’t want the government intervening in matters of speech.
However principled these policies are, though, they are undeniably convenient to Cloudflare. They allow the company to rarely have to consider content moderation issues, and this has all sorts of benefits. It helps Cloudflare serve the largest possible number of customers, stay out of hot-button cultural debates, and stay off the radar of regulators who are increasingly skeptical of tech companies moderating too little — or too much.
Generally speaking, when companies can push content moderation off on someone else, they do. There’s very little upside in policing speech, unless it’s necessary for the survival of the business.
But I want to return to that sentiment in the company’s blog post, the one that says: “Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online.” The idea is that Cloudflare wants to take DDoS and other attacks off the table for everyone, both good actors and bad, and that harassment should be fought in (unnamed) other ways.
Certainly it would be a good thing if everyone from local police departments to national lawmakers took online harassment more seriously, and developed a coordinated strategy to protect victims from doxxing, swatting, and other common vectors of online abuse — while also doing better at finding and prosecuting their perpetrators.
In practice, though, they don’t. And so Cloudflare, inconvenient as it is for the company, has become a legitimate pressure point in the effort to stop these harassers from threatening or committing acts of violence. Yes, Kiwi Farms could conceivably find other security providers. But there aren’t that many of them, and Cloudflare’s decision to stop services for the Daily Stormer and 8chan really did force both operations further underground and out of the mainstream.
And so its decision to continue protecting Kiwi Farms arguably made it complicit in whatever happened to poor Sorrenti, and anyone else the mob might decide to target. (Three people targeted by Kiwi Farms have died by suicide, according to Gizmodo.)
And while we’re on the subject of complicity, it’s notable that for all its claims about wanting to bring about an end to cyberattacks, Cloudflare provides security services to… makers of cyberattack software! That’s the claim made in this blog post from Sergiy P. Usatyuk, who was convicted of running a large DDoS-for-hire scheme. Writing in response to the Kiwi Farms controversy, Usatyuk notes that Cloudflare profits from such schemes because it can sell protection to the victims.
In its blog post, Cloudflare compares itself to a fire department that puts out fires no matter how bad a person the resident of the house may be. In response, Usatyuk writes: “CloudFlare is a fire department that prides itself on putting out fires at any house regardless of the individual that lives there. What they forget to mention is they are actively lighting these fires and making money by putting them out!”
Again, none of this is to say that there aren’t good reasons for Cloudflare to stay out of most moderation debates. There are! And yet it does matter to whom the company decides to deploy its security guards — a service it often provides for free, incidentally — enabling harassment and worse for a small but committed group of the worst people on the internet.
In the aftermath of Cloudflare’s initial blog post, Stamos predicted the company’s stance wouldn’t hold. “There have been suicides linked to KF, and soon a doctor, activist or trans person is going to get doxxed and killed or a mass shooter is going to be inspired there,” he wrote. “The investigation will show the killer’s links to the site, and Cloudflare’s enterprise base will evaporate.”
Fortunately, it hasn’t yet come to that. But credible threats against individuals did escalate over the past several days, the company reported, and on Saturday Cloudflare did indeed reverse course and stopped protecting Kiwi Farms.
“This is an extraordinary decision for us to make and, given Cloudflare’s role as an Internet infrastructure provider, a dangerous one that we are not comfortable with,” Prince wrote in a new blog post. “However, the rhetoric on the Kiwi Farms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwi Farms or any other customer before.”
It feels like a massive failure of social policy that the safety of Sorrenti and other people targeted by online mobs comes down to whether a handful of companies will agree to continue protecting their organizing spaces from DDoS attacks, of all things. In some ways, it feels absurd. We’re offloading what should be a responsibility of law enforcement onto a for-profit provider of arcane internet backbone services.
“We do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services,” the company wrote last week. And arguably it doesn’t!
But sometimes circumstances force your hand. If your customers are plotting violence — violence that may in fact be possible only because of the services you provide — the right thing to do isn’t to ask Congress to pass a law telling you what to do. It’s to stop providing those services.
There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.
One reason why this has been so effective is that it is a strategy designed to resist content moderation. It offers cover to the many social networks, web hosts, and infrastructure providers that are looking for reasons not to act. And so it has become a loophole that the far right can exploit, confident that so long as they don’t explicitly call for murder they will remain in the good graces of the platforms.
It’s time for that loophole to close. In general we should resist calls for infrastructure providers to intervene on matters of content moderation. But when those companies provide services that aid in real-world violence, they can’t turn a blind eye until the last possible moment. Instead, they should recognize groups that organize harassment campaigns much earlier, and use their leverage to prevent the loss of life that will now forever be linked to Kiwi Farms and the tech stack upon which it sat.
In its blog posts, Cloudflare refers repeatedly to its desire to protect vulnerable and marginalized groups. Fighting for a free and open internet, one that is resistant to pressure from authoritarian governments to shut down websites, is a critical part of that. But so, too, is offering actual protection to the vulnerable and marginalized groups that are being attacked by your customers.
I’m glad Cloudflare came around in the end. Next time, I hope it will get there faster.