
Chats with AI bots found to damp conspiracy theory beliefs



Conspiracy theorists who debated with an artificial intelligence chatbot became more willing to admit doubts about their beliefs, according to research that offers insights into dealing with misinformation.

The greater open-mindedness extended even to the most stubborn devotees and persisted long after the dialogue with the machine ended, scientists found.

The research runs counter to the idea that it is all but impossible to change the minds of people who have dived down rabbit holes of popular but unevidenced ideas.

The findings are striking because they suggest a potential positive role for AI models in countering misinformation, despite their own vulnerabilities to “hallucinations” that sometimes cause them to spread falsehoods.

The work “paints a brighter picture of the human mind than many might have expected” and shows that “reasoning and evidence are not dead”, said David Rand, one of the researchers on the work published in Science on Thursday.

“Even many conspiracy theorists will respond to accurate facts and evidence — you just have to directly address their specific beliefs and concerns,” said Rand, a professor at the Massachusetts Institute of Technology’s Sloan School of Management.

“While there are widespread legitimate concerns about the power of generative AI to spread disinformation, our paper shows how it can also be part of the solution by being a highly effective educator,” he added.

The researchers examined whether AI large language models such as OpenAI’s GPT-4 Turbo could use their ability to access and summarise information to address persistent conspiratorial beliefs. These included claims that the September 11 2001 terrorist attacks were staged, that the 2020 US presidential election was fraudulent and that the Covid-19 pandemic was orchestrated.

Almost 2,200 participants shared conspiratorial ideas with the LLM, which generated evidence to counter the claims. These dialogues cut participants’ self-rated belief in their chosen theory by an average of 20 per cent, an effect that persisted for at least two months after talking to the bot, the researchers said.
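In practice, the intervention amounts to a personalised, evidence-based exchange with a general-purpose chat model. The minimal sketch below, written against the OpenAI Python SDK, shows roughly how such a counter-evidence dialogue could be set up; the model alias, prompt wording and single-turn structure are illustrative assumptions, not the researchers’ actual protocol or materials.

# Illustrative sketch only: a minimal counter-evidence dialogue using the OpenAI Python SDK.
# The prompts, model alias and structure are assumptions for illustration; the study's
# actual materials are described in the Science paper and are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def counter_evidence_reply(stated_belief: str, confidence_0_100: int) -> str:
    """Ask a chat model to respond to a participant's stated belief with
    polite, evidence-based counterpoints (hypothetical prompt wording)."""
    system_prompt = (
        "You are having a polite, evidence-based conversation with someone who "
        "holds a conspiracy belief. Address their specific claims directly, "
        "cite concrete facts, and avoid mockery or condescension."
    )
    user_prompt = (
        f"My belief (confidence {confidence_0_100}/100): {stated_belief}\n"
        "Please respond to the specific points I find most convincing."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model alias; the paper reports using GPT-4 Turbo
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(counter_evidence_reply(
        "The moon landing footage was filmed in a studio.", 75))

A full study-style dialogue would run several such turns, carrying the conversation history forward in the messages list and recording the participant’s belief rating before and after the exchange.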

A professional fact-checker assessed a sample of the model’s own output for accuracy. The verification found 99.2 per cent of the LLM’s claims to be true and 0.8 per cent misleading, the scientists said.

The study’s personalised question-and-answer approach is a response to the apparent ineffectiveness of many existing strategies to debunk misinformation.

Another complication with generalised efforts to target conspiratorial thinking is that actual conspiracies do happen, while in other cases sceptical narratives may be highly embellished but based on a kernel of truth.

One theory about why the chatbot interaction appears to work well is that it has instant access to a vast store of relevant information in a way that a human interlocutor does not.

The machine also dealt with its human interlocutors in polite and empathetic terms, in contrast to the scorn sometimes heaped on conspiracy theorists in real life.

Other research, however, suggested the machine’s mode of address was probably not an important factor, Rand said. He and his colleagues had run a follow-up experiment in which the AI was prompted to give factual corrections “without the niceties”, and it worked just as well, he added.

The study’s “size, robustness, and persistence of the reduction in conspiracy beliefs” suggested a “scalable intervention to recalibrate misinformed beliefs may be within reach”, according to an accompanying commentary also published in Science.

But possible limitations included difficulties in responding to new conspiracy theories and in coaxing people with low trust in scientific institutions to interact with the bot, said Bence Bago from the Netherlands’ Tilburg University and Jean-François Bonnefon of the Toulouse School of Economics, who authored the secondary paper together.

“The AI dialogue technique is so powerful because it automates the generation of specific and thorough counter-evidence to the intricate arguments of conspiracy believers and therefore could be deployed to provide accurate, corrective information at scale,” said Bago and Bonnefon, who were not involved in the research.

“An important limitation to realising this potential lies in delivery,” they added. “Namely, how to get individuals with entrenched conspiracy beliefs to engage with a properly trained AI program to begin with.”


