Bots, Misinformation, and Polarization

By Omar Vazquez Duque

Fake accounts are in the headlines these days. While the news focuses on Elon Musk putting his deal to acquire Twitter on hold over the issue, the underlying controversy about the prevalence of fake accounts sheds light on a problem that affects democracies and societies worldwide. Operators of “bots” have used automated fake accounts to incite violence, disseminate false information, deceptively influence political action, and pursue other criminal ends.

Twitter and the companies behind other platforms such as Facebook and YouTube have spent years attempting to eradicate fake accounts, with only modest success. These companies have expanded their content moderation teams and even deployed artificial intelligence tools to track bots. But malicious actors continually upgrade their modus operandi, making sham accounts ever harder to detect. In its most recent transparency report, Twitter disclosed that user reports of spam increased by nearly 10 percent from the second half of 2020 to the first half of 2021, reaching 5.1 million. Between January 2018 and June 2021, users submitted 29.8 million spam reports in total. However, Twitter has not provided information about the actual impact of spam and fake accounts on its platform: for example, how active bots are and how much they tweet.

Two studies published in Nature-family journals shed light on this question. Caldarelli, De Nicola, Del Vigna et al. found that the accounts most effective at propagating their messages count many more bots than average among their followers. Moreover, “the strongest hubs in the network share a relatively high number of bots as followers, which most probably aim at further increasing the visibility of the hubs’ messages via following and retweeting.” A second study, by Chen, Pacheco, Yang et al., found that “[p]artisan accounts, especially conservative ones, tend to receive more followers and follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.”
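To make that central metric more concrete, here is a minimal sketch, assuming a toy follower graph and bot labels produced by some upstream classifier; every account name and the hub cutoff are hypothetical. It illustrates the general kind of measurement Caldarelli et al. describe, namely the share of bot followers around a network’s hubs, not either paper’s actual methodology.

```python
import networkx as nx

# Directed follower graph: an edge (u, v) means "u follows v".
G = nx.DiGraph()
G.add_edges_from([
    ("bot1", "hub"), ("bot2", "hub"), ("bot3", "hub"),
    ("alice", "hub"), ("bob", "hub"), ("alice", "bob"),
])

# Hypothetical labels from some upstream bot-detection classifier.
is_bot = {"bot1": True, "bot2": True, "bot3": True,
          "alice": False, "bob": False, "hub": False}

def bot_follower_fraction(g, account):
    """Fraction of an account's followers that are labeled as bots."""
    followers = list(g.predecessors(account))  # accounts following `account`
    if not followers:
        return 0.0
    return sum(is_bot[f] for f in followers) / len(followers)

# Treat the accounts with the most followers (highest in-degree) as "hubs".
hubs = sorted(G.nodes, key=G.in_degree, reverse=True)[:1]
for hub in hubs:
    print(hub, f"{bot_follower_fraction(G, hub):.0%} of followers are bots")
# Prints: hub 60% of followers are bots
```

On a real platform, the same comparison would be run between hub accounts and ordinary accounts; the studies’ finding is that this fraction is markedly higher for the hubs.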

Considering both studies together, the main takeaway is that fake accounts amplify the reach of already influential partisan accounts.

Besides Musk’s acquisition of Twitter, another recurrent topic in recent news has been mass shootings, most of them motivated by racism and often preceded by hateful content spread online. The EU intends to address this problem with the Digital Services Act, which polices not only illegal but also “harmful” content. It is still unclear how such a prohibition could be enforced, but at least the EU has been debating how to tackle the problem. In contrast, the US has maintained a status quo that relies on platforms to police themselves in accordance with their community standards. Yet self-regulation is no longer a viable strategy in light of the growing number of hate crimes and the spread of fake news. The US must study the practicality of government regulation to improve and standardize content moderation processes.

1 Caldarelli, G., De Nicola, R., Del Vigna, F. et al. The role of bot squads in the political propaganda on Twitter. Commun Phys 3, 81 (2020).

2 Chen, W., Pacheco, D., Yang, K.-C. et al. Neutral bots probe political bias on social media. Nat Commun 12, 5580 (2021).