What It Is
Any label or text directly pointing out that content is potentially misleading.
Civic Signal Being Amplified
When To Use It
What Is Its Intended Impact
Reduce belief in and willingness to spread misleading or false content.
Evidence That It Works
Across a number of survey-experiment studies, researchers find that labeling misleading content with warnings based on professional fact-checkers' evaluations (for example, “Disputed,” “Rated false,” or “False information checked by independent fact-checkers”) can reduce participants' belief in false news and their reported willingness to share misinformation. One study, discussed below, finds similar results for content marked as false by community members. Labels should be used carefully, however, as several studies suggest they can backfire.
One survey experiment (Mena, 2020) tests whether Facebook users are more likely to say they would share news stories depending on whether a story is flagged as disputed by professional fact-checkers. Participants viewed a Facebook post containing false news content, with or without a warning label (“Disputed by Snopes.com and PolitiFact”). Respondents who saw the label on the post reported significantly lower intentions to share the misleading information on social media than respondents who did not (Cohen’s d = −0.36). (Note: we report effect sizes using the metrics in the authors' papers. All effects we include are statistically significant unless otherwise stated.)
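Since several results in this section are reported as Cohen’s d, a brief point of reference may help: Cohen’s d is the standard measure of a standardized mean difference (the gap between two group means, scaled by their pooled standard deviation), and by Cohen’s conventional benchmarks d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On this scale, the Mena (2020) effect above (d = −0.36) sits between a small and a medium effect.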
Several studies show similar positive effects on how much subjects believe false information when it is flagged based on professional fact-checkers' ratings. For example, in Clayton et al. (2020), participants viewed a replica of a Facebook news feed containing false and true news stories, with warnings attached to the misleading posts. The authors found that “disputed” tags reduced the proportion of respondents who considered false claims to be somewhat or very accurate by 10% (Cohen’s d = 0.26), and “rated false” tags reduced it by 13% (Cohen’s d = 0.38). Gaozhao (2021) similarly found that tagging headlines with a professional fact-checker's rating (Figure 1, left) increased participants' recognition of false news (Cohen’s d = 0.80). Finally, Porter and Wood (2022) tested the effect of fact-checking labels on content that participants had previously seen without any tag or label. They likewise found more accurate evaluations when the correction label was added on second exposure: +0.62 points on a 5-point scale.
Despite these encouraging findings, the effectiveness of fact-checking labels is subject to several limitations. One is the so-called implied truth effect: the presence of warnings on some news headlines implies to some participants that unlabeled content is more likely to be true (Pennycook et al., 2020). Other research points in the opposite direction: because false news is extremely rare compared to true news, the presence of warnings may instead increase general skepticism toward all news (Hoes et al., 2023; van der Meer et al., 2023). Taken together, the evidence suggests that labels may have negative indirect effects on unlabeled content, and that these effects may depend on the prevalence of false content.
Crowdsourced fact-checking
Professional fact-checking may not always be the most persuasive approach to combating misinformation, in part because individuals differ in their preferences for which news should be checked (Rich et al., 2020). A potential alternative is crowdsourced fact-checking: recruiting platform users to act as a first line of evaluation of news content (Colliander, 2019).
Research on the relative efficacy of peer-based fact-checking remains limited, however. An exception is the above-mentioned study by Gaozhao (2021), which found that crowdsourced fact-checking labels were as effective as professional fact-checkers' labels at improving recognition of false news (Cohen’s d = 0.65; Figure 1, right). (A related intervention, crowdsourced “Community Notes” attached to misleading posts, has similarly been shown to be effective in reducing the spread of misinformation.)
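To make the mechanics of crowdsourced labeling concrete, here is a minimal sketch of how a platform might aggregate user flags into a “disputed” label. This is a hypothetical illustration, not the procedure used in any study above or by Community Notes (whose actual algorithm uses a more sophisticated bridging-based ranking model); the thresholds, names, and majority-share rule are all assumptions for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds -- not taken from any study or platform.
MIN_RATERS = 5        # require enough independent raters before acting
DISPUTE_SHARE = 0.6   # share of "misleading" votes needed to apply a label

@dataclass
class Post:
    post_id: str
    text: str
    ratings: list[bool] = field(default_factory=list)  # True = rated misleading

def add_rating(post: Post, rated_misleading: bool) -> None:
    """Record one crowd rater's judgment of the post."""
    post.ratings.append(rated_misleading)

def label_for(post: Post) -> str | None:
    """Return a warning label if enough raters agree the post is misleading.

    A naive majority-share rule; real systems weight raters and look for
    agreement across viewpoints rather than counting raw votes.
    """
    if len(post.ratings) < MIN_RATERS:
        return None  # too few ratings to act on
    share = sum(post.ratings) / len(post.ratings)
    return "Disputed by community fact-checkers" if share >= DISPUTE_SHARE else None

# Usage sketch:
post = Post("p1", "Miracle cure found!")
for vote in [True, True, True, False, True]:
    add_rating(post, vote)
print(label_for(post))  # -> "Disputed by community fact-checkers"
```

A raw vote count like this is vulnerable to brigading, which is one reason deployed systems favor agreement across raters who usually disagree over simple majorities.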
In sum, research on labeling misleading content points to its potential effectiveness. Nevertheless, there are possible limitations:
- Labels may have undesirable indirect effects: depending on the prevalence of false news, they may either increase the credibility of false but unlabeled content (the implied truth effect) or increase general skepticism toward all news.
- Corrections by professional fact-checkers may not always be well received and can deepen partisan division; as a result, crowdsourced sources of correction may be preferable.
Finally, it bears repeating that the research above was conducted in survey experiments or simulated social media environments, which limits how confident we can be that labels remain effective when integrated into real platforms.
Why It Matters
Since social media users often do not investigate the veracity of headlines and claims they see in their feeds, it is helpful to provide users with additional context or information about a claim; otherwise, repeated exposure simply breeds familiarity with it, and familiarity alone can make a claim feel more credible.