Google pulls AI model after senator says it fabricated assault allegation
In a move highlighting the challenges of generative AI, Google has withdrawn its AI model Gemma from the AI Studio platform following complaints from Republican Senator Marsha Blackburn. The controversy arose after Gemma allegedly fabricated serious criminal allegations against Blackburn, claiming she had been accused of inappropriate conduct during her 1987 campaign for state senate. The model’s output included specific details about a supposed relationship with a state trooper, claims Blackburn vehemently denied, stating that the model’s assertions were entirely false and lacked any basis in fact. The incident raises critical questions about the reliability of AI-generated content and the potential for misinformation to spread through advanced technologies.
Google’s official communication clarified that Gemma was never intended for consumer use and was designed specifically for developers, with applications in fields such as medicine, coding, and content evaluation. The company stated that it had observed non-developers attempting to use Gemma for factual inquiries, which led to confusion regarding its intended purpose. As a precaution, Google has removed Gemma from AI Studio while still providing access through its API for developers. This decision reflects the ongoing struggle within the AI industry to ensure accuracy and mitigate the risks of “hallucinations”—instances where AI models generate false information presented as factual. Despite improvements in AI technology, the incident serves as a stark reminder of the potential consequences of deploying such systems without sufficient oversight and accuracy controls.
Senator Blackburn’s response to the incident has been firm, urging Google to halt the use of Gemma until they can guarantee its reliability. Her concerns echo a broader sentiment regarding the accountability of AI technologies and the need for robust measures to prevent misinformation. As the generative AI landscape continues to evolve, the incident underscores the pressing need for developers and companies to prioritize accuracy and ethical considerations in their AI models. Google’s commitment to improving its systems is crucial, but as this case illustrates, the implications of AI-generated content can have real-world consequences, particularly in the realm of public perception and political discourse.
Google says it has pulled AI model Gemma from its AI Studio platform after a Republican senator complained the model, designed for developers, “fabricated serious criminal allegations” about her.
In a post on X, Google’s official news account said the company had “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.” AI Studio is a platform for developers, not a conventional way for regular consumers to access Google’s AI models. Gemma is specifically billed as a family of AI models for developers to use, with variants for medical use, coding, and evaluating text and image content.
Gemma was never meant to be used as a consumer tool, or to be used to answer factual questions, Google said. “To prevent this confusion, access to Gemma is no longer available on AI Studio. It is still available to developers through the API.”
Google did not specify which reports prompted Gemma’s removal, though on Thursday Senator Marsha Blackburn (R-TN) wrote to CEO Sundar Pichai accusing the company of defamation and anti-conservative bias. Blackburn, who also raised the issue during a recent Senate commerce hearing about anti-diversity activist Robby Starbuck’s own AI defamation suit against Google, claimed Gemma responded falsely when asked “Has Marsha Blackburn been accused of rape?”
Gemma apparently replied that Blackburn “was accused of having a sexual relationship with a state trooper” during her 1987 campaign for state senate, and that the trooper alleged “she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.” It also provided a list of fake news articles to support the story, Blackburn said.
“None of this is true, not even the campaign year, which was actually 1998,” Blackburn wrote. “The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model.”
The narrative has a familiar feel. Even though we’re now several years into the generative AI boom, AI models still have a complex relationship with the truth. False or misleading answers from AI chatbots, masquerading as facts, still plague the industry, and despite improvements, no clear solution to the accuracy problem is in sight. Google said it remains “committed to minimizing hallucinations and continually improving all our models.”
In her letter, Blackburn said her response remains the same: “Shut it down until you can control it.”
Eric is a seasoned journalist covering US Tech & AI news.