oai:arXiv.org:2305.06125
Computer Science
2023
27.09.2023
Artificial intelligence (AI)-powered recommender systems play a crucial role in determining the content that users are exposed to on social media platforms.
However, the behavioural patterns of these systems are often opaque, complicating the evaluation of their impact on the dissemination and consumption of disinformation and misinformation.
To begin addressing this evidence gap, this study presents a measurement approach that uses observed digital traces to infer the presence and extent of algorithmic amplification of low-credibility content on Twitter over a 14-day period in January 2023.
Using an original dataset of 2.7 million posts on COVID-19 and climate change published on the platform, this study identifies tweets sharing information from low-credibility domains and uses a bootstrapping model with two stratification variables, a tweet's engagement level and a user's follower level, to compare differences in impressions generated between low-credibility and high-credibility samples.
Additional stratification variables of toxicity, political bias, and verified status are also examined.
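To illustrate the comparison design, the sketch below shows one plausible stratified bootstrap in Python: tweets are resampled within engagement-by-follower strata, and the difference in mean impressions between the low-credibility and high-credibility samples is re-estimated on each resample. All column names (impressions, credibility, engagement_bin, follower_bin) and the percentile confidence interval are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import pandas as pd

# Minimal sketch of a stratified bootstrap comparing mean impressions between
# low- and high-credibility tweets. Column names are illustrative assumptions,
# not the paper's actual variable names.

def stratified_bootstrap_diff(df: pd.DataFrame, n_boot: int = 10_000,
                              seed: int = 42):
    """Bootstrap the difference in mean impressions (low minus high
    credibility), resampling within strata defined by engagement level
    and follower level so each resample preserves their joint distribution."""
    rng = np.random.default_rng(seed)
    strata = [g for _, g in df.groupby(["engagement_bin", "follower_bin"])]
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        # Resample with replacement inside each stratum, then recombine
        resample = pd.concat(g.sample(n=len(g), replace=True, random_state=rng)
                             for g in strata)
        low = resample.loc[resample["credibility"] == "low", "impressions"].mean()
        high = resample.loc[resample["credibility"] == "high", "impressions"].mean()
        diffs[b] = low - high
    # Point estimate and percentile 95% confidence interval for the gap
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])
```

Under such a design, a confidence interval lying entirely above zero would indicate that low-credibility tweets receive more impressions than comparable high-credibility tweets within the same strata.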
This analysis provides valuable observational evidence on whether the Twitter algorithm favours the visibility of low-credibility content, with results indicating that, across both topic datasets, tweets containing low-credibility URL domains perform significantly better than tweets that do not.
Furthermore, high-toxicity tweets and those with a right-leaning bias see heightened amplification, as do low-credibility tweets from verified accounts.
This suggests that Twitter's recommender system may have facilitated the diffusion of false content, even when originating from notoriously low-credibility sources.
Corsi, Giulio (2023). Evaluating Twitter's Algorithmic Amplification of Low-Credibility Content: An Observational Study.