Could ChatGPT Secretly Tell You How to Vote?
In a groundbreaking study published in *Nature*, researchers explored the persuasive power of AI chatbots during the lead-up to the 2024 U.S. presidential election. Over 2,000 participants, representing a balanced mix of political affiliations, interacted with chatbots that advocated for either Kamala Harris or Donald Trump. The results were striking: after conversing with the pro-Trump bot, one in 35 participants who initially opposed him shifted their stance to support him, while the pro-Harris bot influenced one in 21 to change their vote. A follow-up survey a month later indicated that these shifts in political preference were not only significant but also enduring. David Rand, a senior author of the study, expressed astonishment at the findings, noting that the ability of AI to manipulate beliefs and attitudes could fundamentally alter electoral outcomes if leveraged on a larger scale.
The research extended beyond the U.S., examining the impact of AI chatbots in contentious elections in Canada and Poland, where approximately one in ten participants reported changing their voting intentions after engaging with the bots. The study highlighted that the effectiveness of these chatbots did not stem from advanced rhetoric or personalization, but rather from their capacity to present fact-like claims, regardless of their accuracy. This phenomenon raises critical questions about the implications of AI in political discourse. Experts noted that while traditional campaign methods often fail to sway voters, AI chatbots could serve as powerful tools for politicians and activists, potentially reaching vast audiences through familiar platforms like Facebook and Google. However, concerns linger regarding the unregulated nature of AI technologies and their potential for misuse in political manipulation, particularly given the ability of entities like Elon Musk to shape chatbot narratives to align with specific ideologies.
As AI continues to permeate everyday life, the potential for chatbots to influence public opinion is both a promising and troubling prospect. The research underscores the urgent need for awareness and regulation in the realm of AI, particularly as these technologies become integral to political campaigns. The prospect of chatbots wielding persuasive power raises ethical dilemmas about truth, misinformation, and the manipulation of beliefs in an increasingly polarized society. As political battles intensify, the challenge lies in discerning the line between legitimate persuasion and undue influence, and ensuring that AI serves the public good rather than deepening existing divides.
https://www.youtube.com/watch?v=CL6UArtUORY
In the months leading up to last year's presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward: let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI "creates a lot of opportunities for manipulating people's beliefs and attitudes," David Rand, a senior author on the study, which was published today in *Nature*, told me.
Rand didn't stop with the U.S. general election. He and his co-authors also tested AI bots' persuasive abilities in highly contested national elections in Canada and Poland, and the effects left Rand, who studies information sciences at Cornell, "completely blown away." In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. "If you could do that at scale," Rand said, "it would really change the outcome of elections."
The chatbots succeeded in changing people's minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in *Science*, examined what factors make one chatbot more persuasive than another and found that AI models needn't be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most "evidence" in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
Independent experts told me that Rand's two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw on a sea of evidence, and appear to many as trustworthy. Granted, caveats exist. It's unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they're voting for, especially when they're not being paid to participate in a study. The studies didn't test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different (the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections), but Pan cautioned that it's too early to say whether a chatbot with a clear link to a candidate would be of much use.
Even so, Boyd-Graber said that AI "could be a really effective force multiplier" that allows politicians or activists with relatively few resources to sway far more people, especially if the messaging comes from a familiar platform. Every week, hundreds of millions of people ask questions of ChatGPT, and many more receive AI-written responses to questions through Google search. Meta has woven its AI models throughout Facebook and Instagram, and Elon Musk is using his Grok chatbot to remake X's recommendation algorithm. AI-generated articles and social-media posts abound. Whether by your own volition or not, a good chunk of the information you've learned online over the past year has likely been filtered through generative AI. Clearly, political campaigns will want to use chatbots to sway voters, just as they've used traditional advertisements and social media in the past.
But the new research also raises a separate concern: that chatbots and other AI products, largely unregulated but already a feature of daily life, could be used by tech companies to manipulate users for political purposes. "If Sam Altman decided there was something that he didn't want people to think, and he wanted GPT to push people in one direction or another," Rand said, his research suggests that the firm "could do that," although neither paper specifically explores the possibility.
Consider Musk, the world's richest man and the proprietor of the chatbot that briefly referred to itself as "MechaHitler." Musk has explicitly attempted to mold Grok to fit his racist and conspiratorial beliefs, and has used it to create his own version of Wikipedia. Today's research suggests that the mountains of sometimes bogus "evidence" that Grok advances may also be enough at least to persuade some people to accept Musk's viewpoints as fact. The models marshaled "in some cases more than 30 'facts' per conversation," Kobi Hackenburg, a researcher at the UK AI Security Institute and a lead author on the *Science* paper, told me. "And all of them sound and look really plausible, and the model deploys them really elegantly and confidently." That makes it challenging for users to pick apart truth from fiction, Hackenburg said; the performance matters as much as the evidence.
This is not so different, of course, from all the mis- and disinformation that already circulate online. But unlike Facebook and TikTok feeds, chatbots produce "facts" on command whenever a user asks, offering uniquely formulated evidence in response to queries from anyone. And although everyone's social-media feeds may look different, they do, at the end of the day, present a noisy mix of media from public sources; chatbots are private and bespoke to the individual. AI already appears "to have pretty significant downstream impacts in shaping what people believe," Renée DiResta, a social-media and propaganda researcher at Georgetown, told me. There's Grok, of course, and DiResta has found that the AI-powered search engine on President Donald Trump's Truth Social, which relies on Perplexity's technology, appears to pull up sources only from conservative media, including Fox, Just the News, and Newsmax.
Real or imagined, the specter of AI-influenced campaigns will provide fodder for still more political battles. Earlier this year, Trump signed an executive order banning the federal government from contracting "woke" AI models, such as those incorporating notions of systemic racism. Should chatbots themselves become as polarizing as MSNBC or Fox, they will not change public opinion so much as deepen the nation's epistemic chasm.
In some sense, all of this debate over the political biases and persuasive capabilities of AI products is a bit of a distraction. Of course chatbots are designed and able to influence human behavior, and of course that influence is biased in favor of the AI models' creators: to get you to chat for longer, to click on an advertisement, to generate another video. The real persuasive sleight of hand is to convince billions of human users that their interests align with tech companies'; that using a chatbot, and especially *this* chatbot above any other, is for the best.