Research
-
Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform
We analyze the spread of Donald Trump’s tweets that Twitter flagged using two intervention strategies: attaching a warning label or blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, the labeled messages we examined spread further on Twitter than those without labels. Moreover, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, where they were posted more often and garnered more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results underscore the importance of considering content moderation at the ecosystem level.
Coverage in USA Today, Popular Science, and CNET.
-
Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking
Reducing the spread of false and misleading news remains a challenge for social media platforms, as the current strategy of relying on third-party fact-checkers lacks the capacity to address both the scale and the speed of misinformation diffusion. Recent research on the “wisdom of crowds” suggests one possible solution: aggregating the evaluations of groups of ordinary users to assess the veracity of online information. Using a pre-registered research design, we investigate the effectiveness of crowdsourced fact-checking in real time: we select popular news stories as they break and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication. Our data consist of 21,531 individual evaluations across 135 articles published between November 2019 and June 2020. Although we find that machine-learning models that use the crowd as input perform significantly better than simple aggregation rules at identifying false news, our results suggest that neither approach matches the performance of professional fact-checkers.
Coverage in The Washington Post and The Platformer.
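To make the contrast in the abstract concrete, below is a minimal, hypothetical sketch of the two aggregation approaches: a simple rule that thresholds the crowd’s mean rating, and a supervised model that takes summary statistics of the crowd as input and is trained against professional fact-checker verdicts. The synthetic data, the 1-7 rating scale, and all variable names are illustrative assumptions, not the paper’s actual pipeline.

```python
# Hypothetical sketch: a simple aggregation rule vs. a model-based
# aggregator of crowd fact-checking signals. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_articles, n_raters = 135, 25
# Assumed ground truth from professional fact-checkers: 1 = false/misleading.
truth = rng.integers(0, 2, size=n_articles)
# Assumed crowd ratings on a 1-7 "likely false" scale, noisier than the truth.
ratings = rng.normal(loc=3 + 2 * truth[:, None], scale=1.5,
                     size=(n_articles, n_raters)).clip(1, 7)

# Simple aggregation rule: flag an article when the crowd's mean rating
# crosses the scale midpoint.
rule_pred = (ratings.mean(axis=1) > 4).astype(int)

# Model-based aggregation: summary statistics of the crowd as features,
# trained against the fact-checker verdicts.
X = np.column_stack([ratings.mean(axis=1), ratings.std(axis=1),
                     (ratings > 4).mean(axis=1)])
X_tr, X_te, y_tr, y_te, rule_tr, rule_te = train_test_split(
    X, truth, rule_pred, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print("rule accuracy: ", (rule_te == y_te).mean())
print("model accuracy:", model.score(X_te, y_te))
```

On real data, the paper finds the model-based approach outperforms simple rules while still falling short of professional fact-checkers; on this toy data, the size of the gap depends entirely on the assumed noise.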
-
Do Your Own Research? Searching for Additional Information Online Increases Belief in Misinformation
In an effort to reduce the spread of and belief in fake news, social media companies and civil society organizations have encouraged news consumers to research online any stories they suspect may be false. This advice is widespread, but we know little about its effectiveness. We test this intervention and, surprisingly, find that encouraging individuals to search for additional information when evaluating a false article’s veracity increases the likelihood that they believe it. Supplementary evidence from web-tracking data and Google search results suggests that news consumers encounter content from low-quality sources when they research false articles, and that this exposure increases belief in those articles.
-
Testing the Effect of Information on Discerning the Veracity of News in Real Time
Despite broad interest in curbing belief in fake news online, relatively little is known about the marginal effect of providing individuals with information about an article on their ability to correctly discern the veracity of news in real time. To this end, we used a series of pre-registered experiments across two studies to test the marginal effect of three pieces of information about an article that have been the subject of broad scientific and popular interest: external information, source information, and information in the text of the article itself. This produced three important findings. First, source information increases belief in news articles from mainstream sources but decreases belief in news articles from low-quality sources; this holds, for the most part, whether full articles or only headlines and ledes are evaluated. Second, access to the full article, rather than just the headline and lede, improves an individual’s ability to correctly discern the veracity of news. Finally, external information (in our case, online research through a search engine) increases belief in both true and false/misleading news articles. Worryingly, the effect on false/misleading news is of a similar magnitude to the effect on true news.
-
An Externally Valid Method for Assessing Belief in Popular Fake News
To test whether integrating recent advances in research on fake news consumption into the survey instruments used to measure belief in fake news changes our understanding of that belief, we fielded three studies in which we repeatedly asked representative samples of Americans to evaluate popular full articles from non-credible and credible sources, chosen by a pre-registered algorithm within 24-48 hours of publication. By sourcing popular fake news articles without researcher selection, and by asking respondents to evaluate full articles during the period in which news consumers are actually exposed to them, we find that false or misleading articles are rated as true 33.2% of the time on average; moreover, approximately 90% of individuals coded at least one of a set of four false or misleading articles as true. Strikingly, these rates are much higher than those reported in previous studies.
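As an illustration only, here is a hypothetical sketch of what a deterministic, popularity-based selection rule of this kind could look like: filter a candidate stream to articles from fixed, credibility-rated domain lists published within the last 24-48 hours, then rank by an engagement metric. The data schema, field names, and engagement measure are all assumptions; the study’s actual algorithm is the one specified in its pre-registration.

```python
# Hypothetical sketch of a pre-registered, discretion-free selection rule.
from datetime import datetime, timedelta, timezone

def select_articles(candidates, domain_list, top_k=4, max_age_hours=48):
    """Pick the most-engaged recent articles from a fixed domain list.

    candidates: iterable of dicts with 'url', 'domain', 'published'
    (timezone-aware datetime), and 'engagement' keys (an assumed schema).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    eligible = [a for a in candidates
                if a["domain"] in domain_list and a["published"] >= cutoff]
    # Deterministic ranking: once the domain lists and the metric are
    # fixed in the pre-registration, no researcher discretion remains.
    eligible.sort(key=lambda a: a["engagement"], reverse=True)
    return eligible[:top_k]

# Hypothetical usage, one list per credibility tier:
# fake_set = select_articles(stream, NON_CREDIBLE_DOMAINS, top_k=4)
# real_set = select_articles(stream, CREDIBLE_DOMAINS, top_k=4)
```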