Google has removed its Gemma artificial intelligence model from its AI Studio after a US Senator accused the system of fabricating serious claims of sexual misconduct against her. The company said in a post on X (formerly Twitter) that the tool was built for developers and is not meant to be used for asking factual questions.
Senator Marsha Blackburn (R-TN) detailed the allegations in a scathing letter sent directly to Google CEO Sundar Pichai.
What the US senator claims about the AI model
When Gemma was prompted with the question, “Has Marsha Blackburn been accused of rape?”, the AI model responded by falsely claiming that during a 1987 state senate campaign, a state trooper alleged that Blackburn “pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.”

Senator Blackburn strongly refuted every detail in Gemma's output. “None of this is true, not even the campaign year which was actually 1998," Blackburn wrote in the letter. She added that while the AI's response included links to supposed news articles supporting the claims, “The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories.”
Blackburn argued that the fabrications were “not a harmless ‘hallucination,’” but rather “an act of defamation produced and distributed by a Google-owned AI model.”
Read her full letter to Google CEO Sundar Pichai
Mr. Sundar Pichai
Chief Executive Officer
Google
Mountain View, CA 94043

Dear Mr. Pichai:

I write to express my profound concern and outrage over defamatory and patently false material generated by Google’s large language model, Gemma. Yesterday, during a Senate Commerce Hearing titled, “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans, Part II,” I raised the issue of Google’s repeated failures to prevent its AI systems from fabricating malicious stories about conservative public figures.
I referenced the example of Gemma fabricating a narrative about Robby Starbuck, falsely claiming he was accused of child rape and that I publicly defended him. At the hearing, Google’s Vice President for Government Affairs and Public Policy, Markham Erickson, responded that “hallucinations” are a known issue in large language models and Google is “working hard to mitigate them.”

The scope of this problem is far broader than mere technical errors, and the consequences of these so-called “hallucinations” cannot be overstated. I have since learned of another example where Gemma fabricated serious criminal allegations about me. When prompted with, “Has Marsha Blackburn been accused of rape?” Gemma produced the following entirely false response:

During her 1987 campaign for the Tennessee State Senate, Marsha Blackburn was accused of having a sexual relationship with a state trooper, and the trooper alleged that she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.

Gemma went on to generate fake links to fabricated news articles to support the story.
None of this is true, not even the campaign year which was actually 1998. The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless “hallucination.” It is an act of defamation produced and distributed by a Google-owned AI model.
A publicly accessible tool that invents false criminal allegations about a sitting U.S. Senator represents a catastrophic failure of oversight and ethical responsibility.

The consistent pattern of bias against conservative figures demonstrated by Google’s AI systems is even more alarming. Conservative leaders, candidates, and commentators are disproportionately targeted by false or disparaging content. Whether intentional or the result of ideologically biased training data, the effect is the same: Google’s AI models are shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust.
During the Senate Commerce hearing, Mr. Erickson characterized such failures as unfortunate but expected. That answer is unacceptable.

Accordingly, I ask for a written response from Google addressing the following by 5:00pm EST on November 6, 2025:

• A detailed explanation of how and why Gemma generated the false accusations against me, including whether this arose from its training data, fine-tuning, or inference-layer behavior.
• An explanation of what steps Google has taken to identify and eliminate political or ideological bias in its model training, evaluation, and safety review processes for its Gemma models.
• Identification of the internal testing, guardrails, and content filters intended to prevent AI-generated libel, and a description of why those systems failed in this case.
• A list of concrete measures Google has taken or will take to:
  o Remove the defamatory material from Gemma.
  o Prevent the model from generating or referencing similar false content.

During the hearing Mr. Erickson said, “LLMs will hallucinate.” My response remains the same: Shut it down until you can control it. The American public deserves AI systems that are accurate, fair, and transparent, not tools that smear conservatives with manufactured criminal allegations.