AI is Changing Politics 🤯💬🔥

AI Persuasion: A Disappointing Reality
A recent large-scale study of political persuasion finds that artificial intelligence has, at best, a limited effect. Roughly two years ago, Sam Altman suggested that AI systems would demonstrate superhuman persuasive abilities before achieving general intelligence, a prediction that sparked considerable concern about AI's potential impact on democratic elections.

Decoding AI’s Limited Influence
Scientists at the UK AI Security Institute, along with researchers from MIT, Stanford, Carnegie Mellon, and numerous other institutions, conducted the most extensive study of AI persuasiveness to date. The study, involving nearly 80,000 participants in the UK, found that political AI chatbots failed to meet these expectations. However, the research also highlighted more complex issues surrounding our interactions with AI.

Beyond Scale: The Power of Post-Training
One key finding was that the advantage held by larger models, such as ChatGPT or Grok-3 beta, was relatively small, debunking the notion that persuasiveness necessarily scales with model size. Training models on a curated set of persuasive dialogues, so that they learned to reproduce the patterns it contained, proved considerably more effective than simply adding billions of parameters and more compute.

Personalization Doesn’t Guarantee Persuasion
Furthermore, the team explored the impact of incorporating personal data, comparing persuasion scores when models were given information about participants' political views against scores when such data was withheld. They also tested whether persuasiveness rose when the AI knew a participant's gender, age, political ideology, or party affiliation. Mirroring the findings on model scale, these personalization effects were measurable but small.

Facts and Evidence Still Reign Supreme
The researchers deliberately instructed the models to employ various persuasion tactics, including moral reframing (presenting arguments through the audience's own moral values) and deep canvassing (extended, empathetic conversations designed to encourage reflection and shifts in opinion). The resulting persuasiveness was then compared against models prompted simply to support their claims with facts and evidence, and against models given no specific persuasion strategy. The facts-and-evidence approach proved most effective, with a clear advantage over the baseline.

AI’s Struggle with Complexity
Prompting the AIs toward information-dense dialogue made them more persuasive, but the gain came at a cost. Hackenburg and his colleagues observed that as the models deployed more factual statements, their accuracy diminished, and they became increasingly prone to misrepresentation and outright fabrication.

A Cautionary Note on AI’s Reach
The study's most significant question mark concerns the high level of participant engagement needed to achieve the observed persuasion scores: participants had to sustain an attentive conversation with the chatbot, a degree of engagement unlikely to be matched in everyday settings, raising the possibility that the measured influence overstates the AI's real-world reach.