A mockup illustrating two posts in a feed: one is labelled as "misleading" whereas the one beneath it has a less severe label of "Stay informed".

Labeling misleading content

Reducing the spread and impact of misleading content

Our Confidence Rating

Convincing

What It Is

Any label or text directly pointing out that content is potentially misleading.

Civic Signal Being Amplified

Understand: Show reliable information

When To Use It

Interactive

What Is Its Intended Impact

Reduce belief in and willingness to spread misleading or false content.

Evidence That It Works

Across a number of survey experiments, researchers find that labeling misleading content with warnings based on professional fact-checkers' evaluations (for example, “Disputed,” “Rated false,” or “False information checked by independent fact-checkers”) can reduce participants' belief in false news and their reported willingness to share misinformation. One study, discussed below, finds similar results for content marked as false by community members. Labels, however, should be used carefully, as several studies suggest they have the potential to backfire.

One survey experiment (Mena, 2020) tests whether Facebook users are more likely to say they would share news stories depending on whether a story is flagged as disputed by professional fact-checkers. Participants observed a Facebook post containing false news content with or without a warning label (“Disputed by Snopes.com and PolitiFact”). Respondents who saw the label on the post reported significantly lower intentions to share the deceptive information on social media than respondents who did not see the label (Cohen’s d = −0.36). (Note: we report effect sizes using the metrics in the authors' papers. All effects we include are statistically significant unless otherwise stated.)

Several studies show similar positive effects on how much subjects believe false information when it is flagged based on professional fact-checker ratings. For example, in Clayton et al. (2020), participants observed a replica of a Facebook news feed containing false and true news stories, with warnings attached to the misleading posts. The authors found that “disputed” tags reduced the proportion of respondents who considered false claims to be somewhat or very accurate by 10% (Cohen’s d = 0.26), and “rated false” tags reduced it by 13% (Cohen’s d = 0.38). Gaozhao (2021) similarly found that tagging headlines with a professional fact-checker's rating (Figure 1, left) increased participants' recognition of false news (Cohen’s d = 0.80). Finally, Porter and Wood (2022) tested the effect of fact-checking labels on content that participants had previously seen without any tag or label; they likewise saw an increase in the accuracy of evaluations when the correction label was added on second exposure (+0.62 points on a 5-point scale).

Despite the encouraging findings above, the effectiveness of fact-checking labels is subject to several limitations. One is the so-called implied truth effect: the presence of warnings on some news headlines implies to some participants that unlabeled content is more likely to be true (Pennycook et al., 2020). In contrast to the implied truth effect, other research suggests that because false news is extremely rare compared to true news, the presence of warnings may actually have the opposite effect, namely increasing general skepticism about all news (Hoes et al., 2023; van der Meer et al., 2023). Taken together, evidence suggests that labels may have negative indirect effects on unlabeled content, but that these effects may depend on the prevalence of false content.

Crowdsourced fact-checking

Because individuals have different preferences for which news should be checked (Rich et al., 2020), professional fact-checking may not always be the most persuasive approach to combating misinformation. A potential alternative is crowdsourced fact-checking: recruiting platform users to act as a first line of evaluation of news content (Colliander, 2019).

There has been only limited research, however, on the relative efficacy of peer-based fact-checking. An exception is the above-mentioned study by Gaozhao (2021), which found that crowdsourced fact-checking labels were as effective as professional fact-checkers' labels at improving recognition of false news (Cohen’s d = 0.65; Figure 1, right). (Another related intervention, attaching crowdsourced “Community Notes” to misleading posts, has similarly been shown to reduce the spread of misinformation.)

In sum, research on labeling misleading content points to its potential effectiveness. Nevertheless, there are possible limitations: 

  • Labels may have undesirable indirect effects: depending on the prevalence of fake news, they can either increase the credibility of potentially false but unlabeled content (the implied truth effect) or increase general skepticism toward all news.
  • Corrections by professional fact-checkers may not always be well received and can lead to partisan division; as a result, crowdsourced sources of correction may be preferable.

Finally, it should be noted again that the research above was conducted in survey experiments or simulated social media environments, which limits how much confidence we can have that these labels are effective when integrated into real platforms.

Why It Matters

Since social media users often do not investigate the veracity of headlines and claims they see in their feeds, it is helpful to provide users with additional context or information about a claim rather than letting repeated exposure make it feel familiar, and therefore true.

Special Considerations

Examples

This intervention entry currently lacks photographic evidence (screencaps, &c.)

Citations

Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact‑Check Tags in Reducing Belief in False Stories on Social Media

Authors

Katherine Clayton, Spencer Blair, Jonathan Busam, Samuel Forstner, et al.

Journal

Political Behavior

Date Published

December 15, 2020

Paper ID (DOI, arXIV, &c.)

10.1007/s11109-019-09533-0

The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings

Authors

Gordon Pennycook, Adam Bear, Evan T. Collins, David G. Rand

Journal

Management Science

Date Published

February 21, 2020

Paper ID (DOI, arXIV, &c.)

10.2139/ssrn.3035384

“This is fake news”: Investigating the role of conformity to other users’ views when commenting on and spreading disinformation in social media

Authors

Jonas Colliander

Journal

Computers in Human Behavior

Date Published

August 1, 2019

Paper ID (DOI, arXIV, &c.)

Flagging fake news on social media: An experimental study of media consumers' identification of fake news

Authors

Dongfang Gaozhao

Journal

Government Information Quarterly

Date Published

July 1, 2021

Paper ID (DOI, arXIV, &c.)

https://doi.org/10.1016/j.giq.2021.101591

Prominent Misinformation Interventions Reduce Misperceptions but Increase Skepticism

Authors

Emma Hoes, Brian Aitken, Jingwen Zhang, Tomasz Gackowski, and Magdalena Wojcieszak

Journal

arXiv

Date Published

Paper ID (DOI, arXIV, &c.)

Cleaning up social media: The effect of warning labels on likelihood of sharing false news on Facebook

Authors

Paul Mena

Journal

Policy & Internet

Date Published

July 28, 2019

Paper ID (DOI, arXIV, &c.)

Political Misinformation and Factual Corrections on the Facebook News Feed: Experimental Evidence

Authors

Ethan Porter and Thomas J. Wood

Journal

The Journal of Politics

Date Published

Paper ID (DOI, arXIV, &c.)

Can fighting misinformation have a negative spillover effect? How warnings for the threat of misinformation can decrease general news credibility

Authors

Toni GLA van der Meer, Michael Hameleers, and Jakob Ohme

Journal

Journalism Studies

Date Published

March 16, 2023

Paper ID (DOI, arXIV, &c.)

Citing This Entry

Prosocial Design Network (2024). Digital Intervention Library. Prosocial Design Network [Digital resource]. https://doi.org/10.17605/OSF.IO/Q4RMB

Entry Last Modified

May 10, 2024 6:54 AM