MIT Study Shows AI Chatbots Can Cut Belief in Conspiracy Theories by 20%

The spread of conspiracy theories online has become a major concern, with some theories causing real-world harm and fuelling misinformation. A recent study from MIT Sloan School of Management and Cornell University suggests that AI chatbots could be a powerful tool for combating these false beliefs. The study, published in Science, shows that conversations with large language models (LLMs) such as GPT-4 Turbo can reduce belief in conspiracy theories by approximately 20%.
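As a rough illustration of what a figure like that means in practice, the sketch below works through a 20% relative drop on a 0-100 belief-rating scale; the specific ratings are illustrative assumptions, not values reported in the paper.

```python
# Illustrative only: a ~20% relative reduction on a 0-100 belief-rating scale.
# The specific ratings below are assumed for the example and are not figures
# taken from the study.
pre_belief = 80.0    # self-rated belief before talking to the chatbot
post_belief = 64.0   # self-rated belief after the conversation
relative_reduction = (pre_belief - post_belief) / pre_belief
print(f"Relative reduction in belief: {relative_reduction:.0%}")  # 20%
```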

How the Chatbots Were Tested

Researchers, including Dr Yunhao Zhang of the Psychology of Technology Institute and Thomas Costello of MIT Sloan, tested the effectiveness of AI chatbots by engaging 2,190 participants in text conversations about a conspiracy theory each participant said they believed. The AI was programmed to provide persuasive, fact-based counterarguments tailored to each theory, and participants who interacted with the chatbot reported a significant decrease in their belief, according to the study.
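The paper's actual prompts and code are not reproduced here, but a minimal sketch of that kind of tailored-counterargument exchange might look like the following, using the OpenAI Python SDK; the system prompt, function name, and single-turn structure are illustrative assumptions rather than the study's protocol.

```python
# A minimal sketch of a tailored, fact-based counterargument exchange,
# assuming the OpenAI Python SDK (openai>=1.0). The system prompt, function
# name, and single-turn structure are illustrative assumptions, not the
# study's actual protocol or prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a careful, factual assistant. Respond to the user's stated "
    "belief with accurate, well-sourced counterarguments that address "
    "their specific claims, in a respectful and persuasive tone."
)

def counterargument(statement: str, history: list[dict] | None = None) -> str:
    """Return a fact-based reply tailored to the participant's stated belief.

    `history` carries earlier turns so that follow-up replies stay tailored
    to what the participant has already said.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": statement})
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=messages,
    )
    return response.choices[0].message.content

# Example of a single turn:
# reply = counterargument("The moon landing was staged in a film studio.")
```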

Accuracy and Future Implications

The study also verified the accuracy of the chatbot’s responses by having a professional fact-checker review the claims it made. Nearly all (99.2%) of those claims were rated accurate, demonstrating the reliability of the information the AI provided. The findings suggest that AI chatbots could be deployed across various platforms to challenge misinformation and encourage critical thinking among users.

Next Steps

While the results are promising, further research is needed to establish how durable these belief changes are over the long term and how well the approach generalises to other types of misinformation. Researchers such as Dr David G. Rand and Dr Gordon Pennycook highlight the potential of integrating AI into social media and other forums to enhance public education and counteract harmful conspiracy theories.

 
