September 19, 2024


Whether it’s the misconception that the moon landings never happened or the false claim that Covid vaccines contain microchips, conspiracy theories abound, sometimes with dangerous consequences.

Now researchers have found that such beliefs can be changed through a chat with artificial intelligence (AI).

“Conventional wisdom would tell you that people who believe in conspiracy theories rarely, if ever, change their minds, especially in light of evidence,” said Dr Thomas Costello, a co-author of the study from American University.

This, he added, is thought to be because people adopt such beliefs to satisfy various needs – such as a desire for control. However, the new study offers a different point of view.

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and begun to believe a conspiracy theory,” the team wrote.

The researchers said the approach relies on an AI system that can draw on a wide variety of information to produce conversations that encourage critical thinking and provide tailored, fact-based counterarguments.

“The AI knew in advance what the person believed and, because of that, it was able to tailor its persuasion to their exact belief system,” Costello said.

Writing in the journal Science, Costello and colleagues reported how they carried out a series of experiments involving 2,190 participants who believed in conspiracy theories.

While the experiments varied slightly, all participants were asked to describe a specific conspiracy theory they believed and the evidence they thought supported it. This was then fed into an AI system called “DebunkBot”.

Participants were also asked to rate on a 100-point scale how true they thought the conspiracy theory was.

They then knowingly engaged in a three-round back-and-forth conversation with the AI system about their conspiracy theory or a non-conspiracy topic. Afterwards, participants again rated how true they thought their conspiracy theory was.

The results revealed that those who discussed non-conspiracy topics only slightly lowered their “truth” rating afterward. However, those who discussed their conspiracy theory with AI showed an average 20% drop in their belief that it was true.

The team said the effects appeared to last for at least two months, while the approach worked for almost all types of conspiracy theory – though not the ones that were true.


The researchers added that the size of the effect depended on factors including how important the belief was to the participant and their trust in AI.

“About one in four people who started the experiment believing a conspiracy theory came out the other side without that belief,” Costello said.

“In most cases, the AI can only chip away – making people a little more skeptical and uncertain – but a select few were disabused of their conspiracy entirely.”

The researchers added that reducing belief in one conspiracy theory appeared to reduce participants’ belief in other such ideas, at least to a small extent, while the approach could have real-world applications – for example, AI could reply to conspiracy-related posts on social media.

Prof Sander van der Linden from the University of Cambridge, who was not involved in the work, questioned whether people would voluntarily engage with such AI in the real world.

He also said it was unclear whether similar results would be found if participants had spoken to an anonymous human. There were also questions about how the AI convinced conspiracy believers, given that the system also used strategies such as empathy and affirmation.

But, he added: “Overall, this is a very novel and potentially important finding and a good illustration of how AI can be used to fight misinformation.”


