
September 20, 2024

AI debunks conspiracy theories better than we can

The scientists themselves were surprised to find that they could instruct a version of ChatGPT to gently dissuade people from believing in conspiracy theories – such as the idea that Covid-19 was a deliberate attempt at population control or that 9/11 was an inside job.

The most important revelation was not about the power of AI, but about how the human mind works. The experiment disproved the widespread myth that we are in a post-truth era where evidence no longer matters. It also contradicted the prevailing view in psychology that people cling to conspiracy theories for emotional reasons and that no amount of evidence can ever dissuade them.

“This is truly the most edifying research I’ve ever done,” said Gordon Pennycook, a psychologist at Cornell University and one of the study’s authors. Study participants were surprisingly receptive to evidence when it was presented in the right way.

The researchers asked more than 2,000 volunteers to talk to a chatbot – GPT-4 Turbo, a large language model (LLM) – about beliefs that could be considered conspiracy theories. The subjects typed their belief into a box and the LLM decided whether it met the researchers’ definition of a conspiracy theory. Participants were asked to rate how certain they were of their beliefs on a scale of 0% to 100%. Then the volunteers were asked for their evidence.

The researchers tasked the LLM with getting people to rethink their views, and to their surprise, it was actually quite effective.

People’s belief in false conspiracy theories fell by an average of 20 percent. For about a quarter of volunteers, belief levels fell from over 50 percent to under 50 percent. “I really didn’t think it would work because I was a firm believer that once you’re down the rabbit hole, you can’t get out,” Pennycook said.

The LLM had some advantages over a human interlocutor. People who strongly believe in conspiracy theories tend to amass mountains of evidence – not in quality, but in quantity. Few non-believers can muster the motivation to do the tedious work of keeping up. But AI can instantly meet believers with mountains of counter-evidence, point out logical flaws in their claims, and respond in real time to any counter-arguments the user raises.

Elizabeth Loftus, a psychologist at the University of California, Irvine, has studied the power of AI to spread misinformation and even false memories. She was impressed by this study and the significance of the results. She believes that one reason the study worked so well was because it showed the subjects how much information they did not know, thereby reducing their overconfidence in their own knowledge. People who believe in conspiracy theories tend to rate their own intelligence very highly – and the judgment of others less so.

After the experiment, the researchers reported, some of the volunteers said it was the first time anyone or anything had truly understood their beliefs and provided effective counterevidence.

Before the results were published in Science this week, the researchers made their version of the chatbot available for journalists to try out. I primed it with beliefs I’d heard from friends: that the government was covering up the existence of alien life, and that after the attempted assassination of Donald Trump, the mainstream press deliberately avoided saying he’d been shot because reporters feared it would help his campaign. And then, inspired by Trump’s debate comments, I asked the LLM whether immigrants in Springfield, Ohio, eat cats and dogs.

When I made the UFO claim, I cited sightings by military pilots and a National Geographic Channel special as evidence, and the chatbot pointed out some alternative explanations and explained why they were more likely than alien spacecraft. It discussed the physical difficulty of crossing the vast distances needed to reach Earth, and questioned how aliens advanced enough to manage that trip could be clumsy enough to get caught by the government.

On whether journalists covered up Trump’s shooting, the AI explained that it goes against a reporter’s job to present assumptions as facts. If there is a series of bangs in a crowd and it’s not yet clear what is happening, then that’s what they have to report – a series of bangs. On the pet-eating claim in Ohio, the AI explained very well that even an isolated case of someone eating a pet would not establish a pattern.

That’s not to say that lies, rumors, and deception aren’t important tactics people use to gain popularity and political advantage. A search of social media after the recent presidential debate turned up many people who said they believed the cat-eating rumor – and what they posted as evidence were merely repetitions of the same rumor. Gossiping is human.

But now we know that believers can be dissuaded with logic and evidence.

F.D. Flam is a columnist for Bloomberg Opinion and host of the podcast “Follow the Science.”