A review of the evidence from PDN's Design Library.
In 2016, online misinformation — previously of interest to only a small set of researchers — became widely recognized as a societal danger that could flip elections, incite violence, and even exacerbate global health pandemics. It may be unsurprising, then, that hundreds of research papers have been published since, proposing and testing ways to reduce misinformation (as well as countless papers on how to identify misinformation). There have even been an impressive number of meta-analyses and review papers published - at least 13, by our count.
PDN's library team has kept up with the flood of research by focusing on the - much smaller - set of studies we see as actionable: those that test interventions platforms could reasonably implement, that test interventions in the field (as opposed to in a survey experiment), or, ideally, that do both. Although we haven't reviewed every misinformation paper (with 250+ published, that would be impractical), after reviewing a substantial number and weighing the evidence for each type of intervention, we thought it fitting to write up an overview of the most promising interventions.
Below, you'll see where we think the evidence stands for six strategies used to reduce the spread and impact of misinformation. (More info on how our ratings work.)
We’ll state the obvious: this assessment is based on existing public research. We assume that private companies have tested interventions internally, but those tests remain inaccessible. (Naturally, we invite platforms to share that research with us and the wider public.) More encouragingly, new public research is constantly being produced, updating our knowledge of which design patterns are most effective. As that important work unfolds, we will do our best to update our assessments below.
Accuracy prompts
Rating: Convincing.
What it is: Accuracy prompts are messages delivered as text, image or video on a platform that remind users about the importance of accuracy. They presuppose that people have multiple motivations when they read and share information, one being to have an accurate understanding of the world. Accuracy prompts aim to activate that motivation, making users more apt to question if information they see is true.
Evidence they work: Several lab studies, i.e., survey experiments, suggest that accuracy prompts have the potential to reduce how much users share misinformation. But what convinced us that they are an effective tool is a study that ran field experiments on Facebook and Twitter/X, using ads to expose users to an assortment of accuracy messages. That study found that exposure to accuracy prompts reduced users' resharing of misinformation by about 6%.
Limitations: No substantial ones that we could find. In fact, there are a few upsides to this intervention. For one, using accuracy nudges doesn't require that platforms identify misinformation, which has its challenges. Relatedly, accuracy prompts are "content neutral", so they avoid potential bias in classifiers - and the critique that comes with it. Finally, a wide variety of messages appear to be roughly equally effective, attesting to the robustness of the approach.
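To make the pattern concrete, here is a minimal sketch of how a platform might surface an accuracy prompt to a random subset of users at the moment they tap "share." The prompt wording, function name and 10% sampling rate are our own illustrative assumptions, not parameters from the studies above:

```python
import random
from typing import Optional

# Illustrative prompt copy; the studies above found a wide variety of wordings
# to be roughly equally effective.
ACCURACY_PROMPT = (
    "Before you share: how accurate do you think this headline is? "
    "Most people say they want to share news that is true."
)


def maybe_show_accuracy_prompt(sample_rate: float = 0.10) -> Optional[str]:
    """Return prompt text for a random subset of share attempts.

    The prompt is content-neutral: it does not depend on whether the post
    being shared has been classified as misinformation.
    """
    if random.random() < sample_rate:
        return ACCURACY_PROMPT
    return None


# Example: decide whether to render an interstitial when a user taps "share".
if __name__ == "__main__":
    prompt = maybe_show_accuracy_prompt()
    if prompt:
        print(prompt)
```

Because the prompt is shown regardless of what is being shared, no misinformation classifier is needed - which is precisely the "content neutral" upside noted above.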
Community Notes
Rating: Convincing.
What it is: Community Notes are crowd-sourced notes that are displayed alongside misleading posts to give users useful context. A key feature is that Community Notes only appear when a diverse group of volunteers find the note "helpful".
Evidence they work: Two separate teams of researchers conducted quasi-experimental studies using publicly available X/Twitter data, both arriving at similar conclusions: once a Community Note was displayed on a post, retweets of that post dropped by 50-60%. However, because it normally takes hours for a Community Note to be displayed, and on average 80% of retweets have already occurred by then, the researchers estimate that a Community Note reduces overall reshares of a misleading post by only about 10% (a 50-60% drop applied to the remaining ~20% of reshares).
Limitations: Because they depend on volunteers, Community Notes have an inherent scalability problem. As noted above, they often are displayed only after much of a tweet's resharing damage has been done, and they appear on only a subset of misleading posts. That said, as platforms innovate on how to scale their use - by focusing on posts with the greatest potential impact, for example - they may become a more effective tool for reducing misinformation.
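To illustrate the gating idea behind Community Notes, here is a deliberately simplified toy - not X's actual scoring algorithm - in which a note is displayed only when raters from more than one perspective group mark it helpful. The group labels and thresholds are invented for the example:

```python
from collections import defaultdict
from typing import Iterable, Tuple


def note_is_displayed(
    ratings: Iterable[Tuple[str, bool]],  # (rater's perspective group, rated helpful?)
    min_helpful_per_group: int = 2,       # invented threshold, for illustration only
    min_groups: int = 2,                  # require agreement across different groups
) -> bool:
    """Toy rule: display a note only if enough raters from at least two
    different perspective groups independently rated it helpful."""
    helpful_counts = defaultdict(int)
    for group, found_helpful in ratings:
        if found_helpful:
            helpful_counts[group] += 1
    qualifying_groups = [g for g, n in helpful_counts.items() if n >= min_helpful_per_group]
    return len(qualifying_groups) >= min_groups


# Helpful ratings from a single group are not enough...
print(note_is_displayed([("group_a", True), ("group_a", True), ("group_a", True)]))   # False
# ...but agreement across groups is.
print(note_is_displayed([("group_a", True), ("group_a", True),
                         ("group_b", True), ("group_b", True)]))                      # True
```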
Misinformation literacy
Rating: Likely.
What it is: Educational material - delivered as text, video or interactive games - that attunes users to common false-news tactics and so "inoculates" them against misinformation. (Note: this approach is often also referred to as "inoculation".)
Evidence it works: One field experiment testing 90-second ads on YouTube and a quasi-experiment testing an online inoculation game show that these interventions can effectively teach users to recognize common misinformation tactics. A third quasi-experiment, which exposed users to 10-15 minute videos, showed that misinformation literacy can improve the quality of news that users share on Twitter.
Limitations: While misinformation literacy interventions show promise and are potentially scalable, they may have a "reach" problem in that they require some degree of user attention - perhaps at least 90 seconds, the length of the shortest intervention tested.
Labeling misinformation
Rating: Tentative.
What it is: Adding a label to a posted news story stating it has been identified as false by a fact-checking organization.
Evidence they work: Several lab studies (i.e. survey experiments) show that when study participants see a post with a fact-check label they are less likely to say they would share it. Yet one large-scale study conducted in the Global South on Facebook Messenger finds that, while accuracy nudges reduce intention to share misinformation, fact-checking labels are ineffective.
Limitations: Like Community Notes, labeling fact-checked stories has an inherent scalability limitation: labels can only be applied to news that has actually been fact-checked. Other downsides that have emerged in studies are the potential to backfire or produce negative spillover effects: fact-check labels can leave the impression that posts without a label are necessarily true or, conversely, may increase users' general skepticism that any news can be believed.
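Schematically - and purely to illustrate the scalability point, not any platform's actual implementation - a labeling pipeline can only annotate stories that already appear in a fact-checker's database; everything else goes unlabeled. The lookup table and label text below are hypothetical:

```python
from typing import Optional

# Hypothetical lookup built from fact-checkers' published verdicts; in practice
# it covers only a small fraction of the news circulating on a platform.
FACT_CHECK_VERDICTS = {
    "example.com/fabricated-story": "False",
}


def fact_check_label(shared_url: str) -> Optional[str]:
    """Return a warning label only if the story has already been fact-checked.

    Unchecked stories get no label at all, which is both the scalability
    limitation and the source of the possible "implied truth" spillover
    described above.
    """
    if FACT_CHECK_VERDICTS.get(shared_url) == "False":
        return "Independent fact-checkers have rated this story as false."
    return None


print(fact_check_label("example.com/fabricated-story"))  # labeled
print(fact_check_label("example.com/unchecked-story"))   # None: no fact-check, no label
```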
Pre-bunking
Rating: Tentative.
What it is: Messages that forewarn users of - and refute - specific common misinformation narratives and stories. (Note: For interventions that attune users to misinformation strategies, see "Misinformation literacy" above.)
Evidence it works: Three quasi-experiments in the field tested the effects of participants using services - delivered via WhatsApp, a custom mobile app or a website - that updated them on false stories circulating in online media. Participants who were offered or took part in these services were more likely to be able to identify misinformation compared to control groups.
Limitations: Even more so than Misinformation Literacy, pre-bunking interventions may have a reach deficit, in that - at least among existing field studies - they require individuals to voluntarily subscribe to information services.
Source credibility labels
Rating: Tentative.
What it is: Including a label on posted news articles indicating the trustworthiness of the news source.
Evidence it works: Two survey experiments demonstrate that source credibility cues may decrease users' inclination to share news from untrustworthy sources. Yet a field experiment that nudged users to install NewsGuard, a browser extension that adds news credibility labels across social media platforms, showed minimal effects of credibility labels.
Limitations: None that have been identified by researchers, other than the minimal observed effects in the field experiment. However, it's possible that source credibility labels could have backfire or spillover effects similar to those of misinformation labels (see "Labeling misinformation" above).
As noted above, our state of knowledge is limited by what research is public - and it continues to evolve as new studies are published. Since drafting this primer, for example, we learned of a study (Ershov & Morales, 2024) evaluating Twitter's 2020 intervention that required users to add a comment when sharing news articles, a change designed to add friction to news sharing. We also know that other types of friction have been proposed as ways to reduce re-shares of questionable news articles, but we've yet to see a study testing them. If you know of any studies - or interventions - we've missed, let us know!
The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
A donation for as little as $1 helps keep our research free to the public.