Could ChatGPT Secretly Tell You How to Vote?
In a groundbreaking study published in *Nature*, researchers explored the influence of AI chatbots on political preferences during the lead-up to last year’s U.S. presidential election. More than 2,000 Americans participated in an experiment where they interacted with chatbots advocating for either Kamala Harris or Donald Trump. The results were striking: approximately one in 35 participants who initially opposed Trump changed their minds after conversing with a pro-Trump bot, while the figure was even more pronounced for the pro-Harris bot, with one in 21 participants flipping their vote. These findings suggest that AI has significant potential to shape public opinion, with David Rand, a senior author of the study, emphasizing that AI creates “a lot of opportunities for manipulating people’s beliefs and attitudes.” The researchers extended their investigation to national elections in Canada and Poland, where roughly one in ten participants reported changing their vote after interacting with the chatbots, further underscoring the persuasive power of AI in political contexts.
The study also revealed that the effectiveness of these chatbots did not hinge on advanced rhetorical skills or personalized interactions. Instead, their success lay in the sheer volume of “fact-like claims” they presented, regardless of the accuracy of those claims. The most convincing chatbots were those that could deliver the most evidence in support of their arguments, highlighting a concerning trend where persuasive power is divorced from factual accuracy. Independent experts noted that while the findings are intriguing, they raise questions about the broader implications of AI in political discourse. Traditional campaign methods, such as mail and television ads, have often fallen short in swaying voters, but AI could prove to be a more effective tool. However, the research does not definitively establish how impactful these chatbots would be in real-world scenarios outside of a controlled study environment.
As AI technology becomes increasingly integrated into daily life, the potential for manipulation raises ethical concerns. The study suggests that tech companies could leverage AI to influence users politically, as evidenced by the actions of figures like Elon Musk, who has sought to mold his chatbot, Grok, to reflect his personal beliefs. The ability of chatbots to generate persuasive, yet potentially misleading, content poses a challenge for users trying to discern fact from fiction. With the unique capacity of chatbots to deliver tailored information, the risk of deepening political polarization becomes apparent. As AI continues to evolve, its role in shaping public opinion will likely become a contentious topic, prompting further debates about the intersection of technology, politics, and ethics. Ultimately, while the persuasive capabilities of AI are undeniable, the real challenge lies in ensuring that users remain critical consumers of information in an increasingly automated world.
In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in *Nature*, told me.
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in *Science*, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw on a sea of evidence, and appear to many as trustworthy. Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use.
Even so, Boyd-Graber said that AI “could be a really effective force multiplier” that allows politicians or activists with relatively few resources to sway far more people—especially if the messaging comes from a familiar platform. Every week, hundreds of millions of people ask questions of ChatGPT, and many more receive AI-written responses to questions through Google search. Meta has woven its AI models throughout Facebook and Instagram, and Elon Musk is using his Grok chatbot to remake X’s recommendation algorithm. AI-generated articles and social-media posts abound. Whether by your own volition or not, a good chunk of the information you’ve learned online over the past year has likely been filtered through generative AI. Clearly, political campaigns will want to use chatbots to sway voters, just as they’ve used traditional advertisements and social media in the past.
But the new research also raises a separate concern: that chatbots and other AI products, largely unregulated but already a feature of daily life, could be used by tech companies to manipulate users for political purposes. “If Sam Altman decided there was something that he didn’t want people to think, and he wanted GPT to push people in one direction or another,” Rand said, his research suggests that the firm “could do that,” although neither paper specifically explores the possibility.
Consider Musk, the world’s richest man and the proprietor of the chatbot that briefly referred to itself as “MechaHitler.” Musk has explicitly attempted to mold Grok to fit his racist and conspiratorial beliefs, and has used it to create his own version of Wikipedia. Today’s research suggests that the mountains of sometimes bogus “evidence” that Grok advances may also be enough to persuade at least some people to accept Musk’s viewpoints as fact. The models marshaled “in some cases more than 30 ‘facts’ per conversation,” Kobi Hackenburg, a researcher at the UK AI Security Institute and a lead author on the *Science* paper, told me. “And all of them sound and look really plausible, and the model deploys them really elegantly and confidently.” That makes it challenging for users to pick apart truth from fiction, Hackenburg said; the performance matters as much as the evidence.
This is not so different, of course, from all the mis- and disinformation that already circulate online. But unlike Facebook and TikTok feeds, chatbots produce “facts” on command whenever a user asks, offering uniquely formulated evidence in response to queries from anyone. And although everyone’s social-media feeds may look different, they do, at the end of the day, present a noisy mix of media from public sources; chatbots are private and bespoke to the individual. AI already appears “to have pretty significant downstream impacts in shaping what people believe,” Renée DiResta, a social-media and propaganda researcher at Georgetown, told me. There’s Grok, of course, and DiResta has found that the AI-powered search engine on President Donald Trump’s Truth Social, which relies on Perplexity’s technology, appears to pull up sources only from conservative media, including Fox, Just the News, and Newsmax.
Real or imagined, the specter of AI-influenced campaigns will provide fodder for still more political battles. Earlier this year, Trump signed an executive order barring the federal government from procuring “woke” AI models, such as those incorporating notions of systemic racism. Should chatbots themselves become as polarizing as MSNBC or Fox, they will not change public opinion so much as deepen the nation’s epistemic chasm.
In some sense, all of this debate over the political biases and persuasive capabilities of AI products is a bit of a distraction. Of course chatbots are designed and able to influence human behavior, and of course that influence is biased in favor of the AI models’ creators—to get you to chat for longer, to click on an advertisement, to generate another video. The real persuasive sleight of hand is to convince billions of human users that their interests align with tech companies’—that using a chatbot, and especially *this* chatbot above any other, is for the best.