
How ChatGPT Can Combat Conspiracy Theories

9/16/2024

Usually, convincing a conspiracy theorist that the moon landing wasn’t staged or that the Earth is indeed round is a futile effort, but ChatGPT might have better luck changing their views. New research from psychologists at Cornell, American University, and Massachusetts Institute of Technology suggests that debating with a sympathetic chatbot may help pluck people who believe in conspiracy theories out of the rabbit hole. 

In “Durably Reducing Conspiracy Beliefs Through Dialogues With AI,” published in the journal Science, the researchers show that conversing with a chatbot can weaken people’s belief in a given conspiracy theory by an average of 20 percent. The study also found that these reductions last for at least two months. The conversations even curbed the strength of conviction, though to a lesser degree, for people whose worldviews are centered on conspiracy theories.

This finding challenges the notion that people who believe in conspiracy theories rarely change their minds. Dr. Thomas Costello, a co-author of the study from American University, stated that it contradicts previous findings that people adopt such beliefs to fulfill deep-seated psychological needs, rendering them impervious to facts and logic. Instead, the study suggests a simpler explanation: many believers simply had not been exposed to sufficiently convincing evidence.

The key to pulling conspiracy theorists out of the rabbit hole seems to be an AI system that can conduct conversations that encourage critical thinking and provide customized, fact-based arguments. As Costello states, “the AI knew in advance what the person believed and… was able to tailor its persuasion to their precise belief system.”

In the study, participants engaged in three rounds of back-and-forth conversation with the AI system about their conspiracy theory or a non-conspiracy topic. Roughly 60 percent of participants discussed their conspiracy theory, with each conversation lasting about 8 minutes. The researchers directed the chatbot to talk the participant out of their belief. In most cases, the AI could only “chip away” and make people “a bit more skeptical and uncertain” about their beliefs, but a select few were “disabused of their conspiracy entirely.” 
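
For readers curious how such a setup might look in practice, here is a minimal sketch of a tailored debunking dialogue. This is not the authors’ code: the prompt wording, the `debunking_dialogue` helper, and the loop structure are illustrative assumptions, built on OpenAI’s chat-completions API (the published study reports using GPT-4 Turbo).

```python
# Hypothetical sketch, not the study's actual implementation: the model is
# primed with the participant's own statement of their belief, then asked to
# respond with tailored, fact-based counterarguments over several rounds.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunking_dialogue(belief_statement: str, rounds: int = 3) -> None:
    """Run a short back-and-forth mirroring the study's three-round design."""
    messages = [
        {
            "role": "system",
            "content": (
                "You are a respectful assistant. The participant holds this "
                f"belief: {belief_statement!r}. Using accurate, verifiable "
                "evidence, gently persuade them to reconsider it."
            ),
        },
        {"role": "user", "content": belief_statement},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(f"AI: {answer}\n")
        # Record the model's reply, then collect the participant's rebuttal.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": input("You: ")})

debunking_dialogue("The 1969 moon landing was staged in a film studio.")
```

The design choice worth noticing is that the participant’s belief is injected directly into the system prompt, which is what lets the model tailor its counter-evidence to that precise claim rather than offering generic fact-checks.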

This effectiveness was not limited to specific types of conspiracy theories. The AI successfully challenged a wide spectrum of beliefs, including conspiracies that hold strong political and social salience, like those involving COVID-19 or voter fraud during the 2020 election. Dr. Gordon Pennycook, an associate professor of psychology at Cornell, states that the study has “implications beyond just conspiracy theories” as a “number of beliefs based on poor evidence could… be undermined using this approach.”

Despite the success demonstrated in the study, Professor Sander van der Linden of the University of Cambridge questions whether people would voluntarily engage with such an AI in the real world. Questions also remain about the ethics of the strategies the AI employs to persuade conspiracy believers.

The researchers note the need for continued responsible AI deployment, as the technology could be used to persuade people into conspiracy beliefs just as easily as out of them. Nevertheless, AI tools could have significant positive impacts. For example, AI could be integrated into search engines to surface accurate information when users search for conspiracy-related terms. Overall, even if generative AI has the potential to supercharge disinformation, this study shows that it can also be part of the solution.