Countering Misinformation With News Source Ratings

by Zacc Ritter

Gallup and the John S. and James L. Knight Foundation conducted a web-based experiment to assess the effectiveness of a news source rating tool designed to help online news consumers discriminate between real news and misinformation. The tool identifies news organizations as reliable (using a green cue) or unreliable (using a red cue) based on evaluations of their work, funding and other factors by experienced journalists. Participants were asked to rate the perceived accuracy of 12 headlines -- six real and six false.

Participants were randomly assigned to one of three experimental conditions.

  • Control: see headline, image, news source name and a one-sentence description of the news story
  • Treatment 1: same information as control group, but each article also includes one of three source cue ratings -- green ("reliable"), red ("unreliable") and a third designated "not yet rated"
  • Treatment 2: same information as Treatment 1 but with an additional icon allowing users to click to see more information about why the news source received a red or green rating

This experimental design isolates the effect that cues provided by the source rating tool have on the way users perceive the accuracy of real and false news content.
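
To make the design concrete, the sketch below (in Python, not part of the original study) shows how random assignment to the three conditions and a simple treatment-versus-control comparison of mean perceived accuracy could be set up. The participant IDs, the 1-5 rating scale and the simulated ratings are illustrative assumptions, not the study's actual instrument or data.

    import random
    from statistics import mean

    # Illustrative sketch only: the condition names come from the article, but
    # the participant pool, rating scale and simulated ratings are hypothetical.
    CONDITIONS = ["control", "treatment_1", "treatment_2"]

    def assign_conditions(participant_ids, seed=42):
        """Randomly assign each participant to one of the three conditions."""
        rng = random.Random(seed)
        return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

    def mean_accuracy_by_condition(ratings):
        """ratings: list of (condition, perceived_accuracy) pairs on a 1-5 scale."""
        by_condition = {c: [] for c in CONDITIONS}
        for condition, score in ratings:
            by_condition[condition].append(score)
        return {c: mean(scores) for c, scores in by_condition.items() if scores}

    if __name__ == "__main__":
        participants = [f"p{i}" for i in range(9)]
        assignments = assign_conditions(participants)

        # Simulated perceived-accuracy ratings for one false headline, purely to
        # show how the treatment-vs-control comparison would be computed.
        simulated = [(cond, random.Random(i).randint(1, 5))
                     for i, cond in enumerate(assignments.values())]

        means = mean_accuracy_by_condition(simulated)
        for cond, avg in means.items():
            print(f"{cond}: mean perceived accuracy = {avg:.2f}")
        if "control" in means and "treatment_1" in means:
            # The cue effect is read off as the treatment mean minus the control mean.
            print("Treatment 1 effect vs. control:",
                  round(means["treatment_1"] - means["control"], 2))

In the actual study, the comparison of interest is the same difference in perceived accuracy between each treatment condition and the control group, computed separately for real and false headlines.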

The results show that the perceived accuracy of headlines was significantly higher for real news stories with a green source cue and significantly lower for false stories with a red source cue when compared with the control group. The perceived accuracy of false stories with a "not yet rated" cue was also slightly lower than in the control group.

Importantly, the source rating tool was effective across the political spectrum. Since partisanship colors the way individuals perceive news content, the experiment included three pro-Republican/anti-Democratic and three pro-Democratic/anti-Republican false headlines. Unsurprisingly, Republicans and Democrats rated false stories that aligned with their party's typical positions as more accurate than false stories that did not. Surprisingly, perceived accuracy decreased more in the treatment conditions when the false headlines matched the typical positions of the user's own party than when they did not.

Another key takeaway was the importance of linking the source ratings to experienced journalists. Prior to rating the news headlines, all participants were told the ratings were developed by a team of experienced journalists. After completing the tasks, participants were asked to recall how the source ratings were developed. Users who correctly recalled that journalists created the source ratings perceived real headlines with a green cue as much more accurate and false headlines with a red cue as much less accurate than did users who did not remember. In other words, the source rating tool seems more effective at delegitimizing false stories when participants remember that the ratings come from experienced journalists.

For more detail on the effect of a news source rating system, read the full report.

Gallup and Knight Foundation acknowledge support for this research provided by the Ford Foundation, the Bill & Melinda Gates Foundation and the Open Society Foundations.
