Could ChatGPT Secretly Tell You How to Vote?

By Eric · December 5, 2025

In a groundbreaking study published in *Nature*, researchers explored the persuasive power of AI chatbots during the lead-up to the 2024 U.S. presidential election. Over 2,000 participants, representing a balanced mix of political affiliations, interacted with chatbots that advocated for either Kamala Harris or Donald Trump. The results were striking: after conversing with the pro-Trump bot, one in 35 participants who initially opposed him shifted their stance to support him, while the pro-Harris bot influenced one in 21 to change their vote. A follow-up survey a month later found that much of the effect persisted. David Rand, a senior author of the study, expressed astonishment at the findings, noting that the ability of AI to manipulate beliefs and attitudes could fundamentally alter electoral outcomes if leveraged on a larger scale.

The research extended beyond the U.S., examining the impact of AI chatbots in contentious elections in Canada and Poland, where approximately one in ten participants reported changing their voting intentions after engaging with the bots. The study highlighted that the effectiveness of these chatbots did not stem from advanced rhetoric or personalization, but rather from their capacity to present fact-like claims, regardless of their accuracy. This phenomenon raises critical questions about the implications of AI in political discourse. Experts noted that while traditional campaign methods often fail to sway voters, AI chatbots could serve as powerful tools for politicians and activists, potentially reaching vast audiences through familiar platforms like Facebook and Google. However, concerns linger regarding the unregulated nature of AI technologies and their potential for misuse in political manipulation, particularly given the ability of entities like Elon Musk to shape chatbot narratives to align with specific ideologies.

As AI continues to permeate everyday life, the potential for chatbots to influence public opinion is both a promising and troubling prospect. The research underscores the urgent need for awareness and regulation in the realm of AI, particularly as these technologies become integral to political campaigns. The prospect of chatbots wielding persuasive power raises ethical dilemmas about truth, misinformation, and the manipulation of beliefs in an increasingly polarized society. As political battles intensify, the challenge lies in discerning the line between legitimate persuasion and undue influence, and ensuring that AI serves the public good rather than deepening existing divides.

https://www.youtube.com/watch?v=CL6UArtUORY

In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in *Nature*, told me.
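Those fractions are easier to reason about as percentages. Here is a back-of-the-envelope sketch in Python; the flip rates are the ones reported above, while the 50-million figure for reachable, persuadable voters is purely a hypothetical chosen to illustrate scale, not a number from either paper.

```python
# Flip rates as reported: 1 in 35 (pro-Trump bot), 1 in 21 (pro-Harris bot)
pro_trump_flip = 1 / 35   # ~2.9% of initially opposed participants
pro_harris_flip = 1 / 21  # ~4.8% of initially opposed participants

print(f"Pro-Trump bot flip rate:  {pro_trump_flip:.1%}")
print(f"Pro-Harris bot flip rate: {pro_harris_flip:.1%}")

# Hypothetical illustration only: suppose a comparable effect held across
# 50 million reachable, persuadable voters (an assumption, not a finding).
persuadable_voters = 50_000_000
flipped = persuadable_voters * pro_harris_flip
print(f"Implied flips at that hypothetical scale: {flipped:,.0f}")
```

Even at a few percentage points, an effect applied to a large enough audience would rival the margins that decide many national elections.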
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in *Science*, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.
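The published studies quantify this with their own coding of fact-checkable claims; as a rough illustration of the idea, the toy function below scores a reply’s “claim density” with a naive heuristic (a sentence counts as claim-like if it contains a digit or an assertive verb). The heuristic is an assumption made here for illustration, not the researchers’ actual method.

```python
import re

# Naive proxy for "fact density": flag sentences that contain a digit or an
# assertive verb. Illustrative only; not the coding scheme from the paper.
ASSERTIVE = re.compile(r"\b(is|are|was|were|shows?|found|confirmed)\b", re.I)

def claim_density(reply: str) -> float:
    """Return the share of sentences in `reply` that look like factual claims."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", reply.strip()) if s]
    if not sentences:
        return 0.0
    claims = [s for s in sentences if ASSERTIVE.search(s) or re.search(r"\d", s)]
    return len(claims) / len(sentences)

reply = ("Candidate X cut unemployment to 3.4 percent. Independent audits "
         "confirmed the figure. Voters like you deserve that record.")
print(f"Claim-like share of sentences: {claim_density(reply):.0%}")  # 67%
```

A measure along these lines would rise with exactly the flood of plausible-sounding “evidence” the researchers describe, whether or not any of it is true.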
Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw on a sea of evidence, and appear to many as trustworthy. Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use.
Even so, Boyd-Graber said that AI “could be a really effective force multiplier” that allows politicians or activists with relatively few resources to sway far more people—especially if the messaging comes from a familiar platform. Every week, hundreds of millions of people ask questions of ChatGPT, and many more receive AI-written responses to questions through Google search. Meta has woven its AI models throughout Facebook and Instagram, and Elon Musk is using his Grok chatbot to remake X’s recommendation algorithm. AI-generated articles and social-media posts abound. Whether by your own volition or not, a good chunk of the information you’ve learned online over the past year has likely been filtered through generative AI. Clearly, political campaigns will want to use chatbots to sway voters, just as they’ve used traditional advertisements and social media in the past.
But the new research also raises a separate concern: that chatbots and other AI products, largely unregulated but already a feature of daily life, could be used by tech companies to manipulate users for political purposes. “If Sam Altman decided there was something that he didn’t want people to think, and he wanted GPT to push people in one direction or another,” Rand said, his research suggests that the firm “could do that,” although neither paper specifically explores the possibility.
Consider Musk, the world’s richest man and the proprietor of the chatbot that briefly referred to itself as “MechaHitler.” Musk has explicitly attempted to mold Grok to fit his racist and conspiratorial beliefs, and has used it to create his own version of Wikipedia. Today’s research suggests that the mountains of sometimes bogus “evidence” that Grok advances may also be enough at least to persuade some people to accept Musk’s viewpoints as fact. The models marshaled “in some cases more than 30 ‘facts’ per conversation,” Kobi Hackenburg, a researcher at the UK AI Security Institute and a lead author on the *Science* paper, told me. “And all of them sound and look really plausible, and the model deploys them really elegantly and confidently.” That makes it challenging for users to pick apart truth from fiction, Hackenburg said; the performance matters as much as the evidence.
This is not so different, of course, from all the mis- and disinformation that already circulate online. But unlike Facebook and TikTok feeds, chatbots produce “facts” on command whenever a user asks, offering uniquely formulated evidence in response to queries from anyone. And although everyone’s social-media feeds may look different, they do, at the end of the day, present a noisy mix of media from public sources; chatbots are private and bespoke to the individual. AI already appears “to have pretty significant downstream impacts in shaping what people believe,” Renée DiResta, a social-media and propaganda researcher at Georgetown, told me. There’s Grok, of course, and DiResta has found that the AI-powered search engine on President Donald Trump’s Truth Social, which relies on Perplexity’s technology, appears to pull up sources only from conservative media, including Fox, Just the News, and Newsmax.
Real or imagined, the specter of AI-influenced campaigns will provide fodder for still more political battles. Earlier this year, Trump signed an executive order banning the federal government from contracting “woke” AI models, such as those incorporating notions of systemic racism. Should chatbots themselves become as polarizing as MSNBC or Fox, they will not change public opinion so much as deepen the nation’s epistemic chasm.
In some sense, all of this debate over the political biases and persuasive capabilities of AI products is a bit of a distraction. Of course chatbots are designed and able to influence human behavior, and of course that influence is biased in favor of the AI models’ creators—to get you to chat for longer, to click on an advertisement, to generate another video. The real persuasive sleight of hand is to convince billions of human users that their interests align with tech companies’—that using a chatbot, and especially *this* chatbot above any other, is for the best.
