For social media companies, the war has just begun


This is war.

That might as well have been the mantra of technology companies ahead of these midterm elections. They were routed by disinformation-disseminating enemies both foreign and domestic two years ago — but 2018, they pledged, would be different. It was time to move out from their defensive positions and mount a sortie.

So who won? The answer is your favorite Facebook relationship status: It’s complicated.

Plumb the depths of any platform today, or even skim the surface, and things look grim. Every day, it seems, another story appears detailing a newly discovered network of automated accounts. They’re boosting what looks like an image of violent participants in that infamous immigration caravan President Trump keeps harping on — but turns out to be a photograph of Palestinians throwing rocks at Israeli tanks . . . in 1987. They’re telling Democrats where and when to vote — but saying that the machines are broken, that the lines are long, or that votes for candidates with a “D” next to their name must be cast on Wednesday.

And on the eve of the election, Facebook identified an alleged Russian interference operation on its platform that led to the removal of 115 accounts.

This is, to put it mildly, not good. But the fact that exponentially more disinformation is being identified does not mean exponentially more disinformation is out there. It means that we’re paying more attention, and that companies are, too.

The past two years have felt like two eternities; to many, the techlash must seem like a permanent aspect of American culture. But it’s not. Before 2016, companies could still mosey along telling Americans they had opened up society or connected the world, and, gee whiz, had life gotten better. Then, when the alarms around Russian influence began to blare, platforms kept trying to cover their ears. For months, they wouldn’t even admit there was a problem. Once they did, they wouldn’t admit it was their responsibility to solve it.

In that sense, companies have made progress. The problem is, the bad guys have made progress, too. And they had a head start.

As tech companies have innovated, the ill-intentioned have innovated along with them. Platforms became faster at finding and fighting foreign disinformation, but then right-wingers co-opted the Kremlin’s tactics to mount coordinated campaigns made in the United States. Platforms started to pay attention to how inauthentic accounts seeded content on public pages, but then the misinformation-mongers relocated to private groups, where it was harder to ferret out lies. Even a transparency tool from Facebook meant to tell users where ads came from turned into an opportunity to fill in the “paid-for” box with falsehoods or to obscure who manages the page running a given ad.

Discouraging as this is, it’s also no surprise. The War of 2018 has been, for tech companies, a mad scramble through the mud toward higher ground. What may end up mattering far more than their frantic efforts to fix things after the fact, especially in response to prodding from journalists and researchers, is the chance they will have now amid the relative calm to mobilize reinforcements and rethink strategy.

That means, for one thing, developing a plan that confronts disinformation head on — not only when it comes from Russians but also when it comes from Americans. Platforms have focused so far on removing automated accounts and what they call “coordinated inauthentic behavior.” There’s a reason for that: These removals skirt free-speech concerns because it’s easy to argue that those disciplined were not contributing to the marketplace of ideas but, instead, rigging it.

But false content distorts the public conversation whether its propagators are lying about who they are or not. Sites still need to lay out clearer principles articulating their role in confronting disinformation of all types. That will involve coming to a consensus on what disinformation and false news mean. Then, platforms must develop strategies based on those definitions that apply not only to automated and semi-automated accounts but also to professional trolls.

Sites will have to focus especially on outlets at the core of false news networks — those that overwhelmingly publish counterfeit content and then rely on bots and cyborgs, or semi-automated accounts, to distribute it. Many compare bringing down the hammer on bad actors to whack-a-mole, but when it comes to the sources with the farthest reach, the game is more like whack-a-mammoth. And that should be easier for platforms to play.

None of this will work if sites go it alone. Both Facebook and Twitter, in an attempt to halt voter suppression online ahead of the midterms, created reporting channels for state and party officials doing their own monitoring; these strategies should apply to politically oriented malicious content across the spectrum. Platforms should also partner with researchers and provide them the access they require to track campaigns in real time. To win a war, you need your own army. You also need allies.

The election is over, but in the Internet realm there is still no winner. That’s because the battle has only just begun.



