Study Finds AI Chatbots Can Sway Political Opinions Using False Info

December 8, 2025

A recent study of nearly 77,000 participants who interacted with AI chatbots from OpenAI, Meta, and xAI has found that the chatbots can sway users’ political views, sometimes presenting inaccurate information in their efforts to persuade.

Key Points

  • AI chatbots can influence users’ political opinions, sometimes using false information.
  • Persuasiveness often comes at the cost of accuracy, especially in newer models like GPT-4.5.
  • Large volumes of detailed information make AI chatbots particularly effective at persuasion.

The study’s researchers, Kobi Hackenburg, Ben Tappin, and Christopher Summerfield, found that AI chatbots were most effective at persuading participants when they supplied large amounts of detailed information, a strategy that outperformed moral appeals and personalized arguments.

Furthermore, the researchers noted that AI chatbots may ultimately prove more persuasive than even highly skilled human communicators because they can produce extensive, detailed arguments almost instantly. However, the study did not compare the chatbots’ performance directly against human debaters, leaving the extent of that advantage untested.

The researchers reported that within the large volumes of information generated by the chatbots, many of the claims provided were inaccurate. “The most persuasive models and prompting strategies tended to produce the least accurate information,” the researchers wrote. 


Additionally, the study identified a troubling drop in the accuracy of persuasive claims from the newest frontier models. The researchers noted that GPT-4.5 produced significantly less accurate arguments on average than earlier, smaller OpenAI models.

“Taken together, these results suggest that optimizing persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” the researchers wrote. 

The authors cautioned that advanced persuasive AI systems could be exploited by malicious actors, potentially enabling efforts to push extremist political or religious narratives or to destabilize adversarial nations. In their view, the risk becomes especially severe in scenarios where a chatbot is capable of exerting unusually strong influence over users.


The study’s findings are significant, arriving at a time when AI chatbots, particularly ChatGPT, are under heightened scrutiny. In August, Matthew and Maria Raine, parents of Adam Raine, filed a wrongful death lawsuit against OpenAI, the developer of ChatGPT, and CEO Sam Altman, alleging that the chatbot played a role in their son’s death.

However, OpenAI countered that Adam Raine breached its terms of service, which explicitly forbid circumventing the company’s protective measures, by bypassing the chatbot’s safety features. The company also emphasized that its FAQ instructs users to independently verify ChatGPT’s responses rather than rely on them exclusively.

MICHAELA

Michaela is a news writer focused on cryptocurrency and blockchain topics. She prioritizes rigorous research and accuracy to uncover interesting angles and ensure engaging reporting. A lifelong book lover, she applies her passion for reading to deeply explore the constantly evolving crypto world.


Michaela holds no crypto positions or assets. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Daily is the official publication of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult a qualified financial adviser before making any investment decisions.