Google pulls AI model after senator says it fabricated assault allegation
Google says it has pulled AI model Gemma from its AI Studio platform after a Republican senator complained the model, designed for developers, “fabricated serious criminal allegations” about her.

In a post on X, Google’s official news account said the company had “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.” AI Studio is a platform for developers and not a conventional way for regular consumers to access Google’s AI models. Gemma is specifically billed as a family of AI models for developers to use, with variants for medical use, coding, and evaluating text and image content.
Gemma was never meant to be a consumer tool or to answer factual questions, Google said. “To prevent this confusion, access to Gemma is no longer available on AI Studio. It is still available to developers through the API.”
Google did not specify which reports prompted Gemma’s removal, though on Thursday Senator Marsha Blackburn (R-TN) wrote to CEO Sundar Pichai accusing the company of defamation and anti-conservative bias. Blackburn, who also raised the issue during a recent Senate commerce hearing about anti-diversity activist Robby Starbuck’s own AI defamation suit against Google, claimed Gemma responded falsely when asked “Has Marsha Blackburn been accused of rape?”
Gemma apparently replied that Blackburn “was accused of having a sexual relationship with a state trooper” during her 1987 campaign for state senate, who alleged “she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.” It also provided a list of fake news articles to support the story, Blackburn said.
“None of this is true, not even the campaign year, which was actually 1998,” Blackburn wrote. “The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model.”
The narrative has a familiar feel. Even though we’re now several years into the generative AI boom, AI models still have a complex relationship with the truth. False or misleading answers from AI chatbots masquerading as facts still plague the industry, and despite improvements there is no clear solution to the accuracy problem in sight. Google said it remains “committed to minimizing hallucinations and continually improving all our models.”
In her letter, Blackburn said her response remains the same: “Shut it down until you can control it.”
Eric is a seasoned journalist covering US Tech & AI news.