Since the 2016 presidential election, there’s been no shortage of reports about false news being shared across social media platforms like Facebook and Twitter — and with the 2020 vote only a year away, the question is not when misinformation will strike, but how we can guard against it.
MIT professor of IT and marketing Sinan Aral and assistant professor of marketing Dean Eckles propose a four-step process for researchers to measure and analyze social media manipulation, and to turn that data into a defense against future manipulation. “Without an organized research agenda that informs policy, democracies will remain vulnerable to foreign and domestic attacks,” the professors write in an article for the August 30 edition of Science magazine.
Here is the four-step research agenda the pair propose:
Catalogue exposures to manipulation
To defend against manipulation, Aral and Eckles write, researchers need to index a variety of social media information:
- What texts, images, and video messages were shared?
- How were they distributed (organically posted, advertised, or “boosted” through paid promotion)?
- On which social platforms did these texts, images, and video messages appear?
- When and how were they shared and re-shared by users (in this case, voters)?
The content of these messages, images, and texts also needs to be analyzed to measure effectiveness, the researchers write.
Combine exposure and voting behavior datasets
In the past, public voting records and social media accounts were matched using data like self-reported profile information. But this type of matching can be improved by using location data that social media companies already collect, the researchers write.
This could be something like matching voter registration with home addresses based on mobile location information — the same data used for marketing purposes by social media companies.
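As a rough illustration of what privacy-preserving record linkage can mean in practice, each party might hash a normalized version of the identifier locally and compare only the hashes, so raw addresses never change hands. The field names, records, and salt below are entirely hypothetical, and real systems would use stronger cryptographic protocols:

```python
import hashlib

def normalize(address: str) -> str:
    """Canonicalize an address string before hashing (toy normalization)."""
    return " ".join(address.lower().split())

def hash_id(value: str, salt: str) -> str:
    """Salted SHA-256 hash, so raw identifiers never leave each party."""
    return hashlib.sha256((salt + normalize(value)).encode()).hexdigest()

SALT = "shared-secret-salt"  # hypothetical shared secret

# Each party hashes its own records locally with the shared salt.
voter_file = {hash_id("12 Main St, Springfield", SALT): "voter_001"}
platform_locations = {hash_id("12 MAIN ST,  Springfield", SALT): "user_abc"}

# Link records whose hashed home locations match.
matches = {
    voter_file[h]: platform_locations[h]
    for h in voter_file.keys() & platform_locations.keys()
}
print(matches)  # {'voter_001': 'user_abc'}
```

Because both parties hash after the same normalization, the differently formatted address strings still link, while neither side ever sees the other's raw data.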
Assess the effectiveness of manipulative messages
One challenge of studying voter behavior, Aral and Eckles write, is that observational results alone aren’t always accurate enough to answer questions about cause and effect.
Social media companies already run A/B tests and algorithm experiments, Aral and Eckles write. The same randomized tests could be used to measure the effects of exposure.
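In spirit, an A/B test of exposure effects reduces to comparing average outcomes between a randomly exposed group and a control group. A minimal sketch with invented turnout data (all numbers are hypothetical, not from the Science article):

```python
# Hypothetical A/B test: users randomly assigned to be exposed (or not)
# to a piece of content; outcome is whether they later voted (1) or not (0).
exposed = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # treatment group outcomes
control = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # control group outcomes

def mean(xs):
    return sum(xs) / len(xs)

# With random assignment, the difference in means is an unbiased
# estimate of the average effect of exposure on the outcome.
effect = mean(exposed) - mean(control)
print(f"estimated exposure effect: {effect:+.2f}")  # +0.30
```

Real platform experiments involve millions of users and statistical tests of significance, but the randomization is what lets the difference be read as a causal effect rather than a correlation.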
Calculate consequences of voting behavior changes
Aral and Eckles write that measures like predicted voter behavior — with or without exposure to misinformation — should be combined with data like geographic and demographic characteristics for a particular election. This would help estimate how exposure could have shifted vote totals in a particular area.
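The aggregation step described above could amount to weighting a per-voter behavior shift by the size of each demographic group in a district. The groups, sizes, and effect estimates below are invented for illustration:

```python
# Hypothetical demographic groups in one district: (number of voters,
# estimated per-voter probability shift attributable to exposure).
district_groups = [
    (50_000, 0.002),   # e.g., young urban voters
    (80_000, 0.001),   # e.g., suburban voters
    (30_000, 0.004),   # e.g., a heavily exposed subgroup
]

# Expected change in vote totals = sum over groups of size * shift.
expected_shift = sum(n * delta for n, delta in district_groups)
print(f"expected vote-total shift: {expected_shift:.0f} votes")  # 300 votes
```

Comparing a figure like this against the actual margin of victory in the district is what would let researchers judge whether manipulation plausibly changed an outcome.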
The privacy implications
Aral and Eckles write that they don’t support using any and all existing data, but instead, encourage collaboration with social media platforms, as well as “privacy-preserving methods for record linkage and content analysis,” to meet privacy and protection needs.
“Achieving a scientific understanding of the effects of social media manipulation on elections is an important civic duty,” Aral and Eckles write. “The sooner we begin a public discussion of the trade-offs between privacy, free speech, and democracy that arise from the pursuit of this science, the sooner we can realize a path forward.”