The Facebook logo is seen on a mobile device in this photo illustration on January 20, 2019. The Federal Trade Commission is considering hitting the social media giant with a record fine for failing to protect user data, according to the Washington Post. (Photo by Jaap Arriens/NurPhoto via Getty Images)

In the war against misinformation, fact-checking works. Big Tech needs to do more of it

Updated 1506 GMT (2306 HKT) May 28, 2019


Jeremy Cone is an assistant professor of psychology at Williams College. Melissa J. Ferguson is a professor of psychology and senior associate dean of social sciences at Cornell. Kathryn Flaharty, laboratory manager at the Developmental Cognitive Neuroscience Lab at Georgetown University, contributed to the research. The opinions expressed in this commentary are their own.


Amid a continued public outcry over the influence of fake news and misinformation, tech companies are scrambling to generate effective solutions. Just last month, Mark Zuckerberg testified about what Facebook was doing to address the safety of users' private information, and there continue to be calls for social media companies to do more to curb the spread of misinformation.

Facebook's approach has been to employ fact-checkers to help identify dubious content. Although questions remain about how successfully fact-checkers can identify false claims, vetting the truth of online content is a critically important strategy for stemming the tide of misinformation.
Our new research shows that fact-checking prevents misinformation from shaping our thoughts, even our automatic and uncontrollable perceptions. When fact-checking calls out what isn't credible, much of the damage misinformation may have done to our perceptions is undone. Fact-checking works, if done properly, and it needs the support of tech companies.
In a paper we recently published, we focused on how new information about a person affects participants' opinions and feelings about that person. Across seven experiments with over 3,100 participants, we measured not only their consciously reported feelings but also their automatic, gut-level reactions.
In one set of experiments, participants learned a considerable amount of positive information about a stranger named Kevin. Next, they discovered that he had been arrested several years ago for domestic abuse of his ex-wife. In between, we measured participants' automatic, gut-level feelings toward him. We used a computer-based measure that presents an image of the person (Kevin) very quickly, followed by a neutral, unrelated and ambiguous image that the subject is asked to rate as pleasant or not (e.g., a Chinese ideograph that we ensure none of the participants can actually read). Across many trials, we measured whether the presence of Kevin (vs. some other stranger) influenced how pleasant participants judged those ambiguous images to be.