Google Gemini labeled ‘High Risk’ for kids and teens in new safety review


Google’s flagship AI platform, Gemini, has been rated “high risk” for children and teens in a new safety assessment released by nonprofit watchdog Common Sense Media. The report raises serious concerns about the platform’s ability to protect young users from inappropriate content and psychological harm.

Filters aren’t enough, experts say

While Gemini includes safety filters for users under 13 and for teens, Common Sense Media found that these versions are essentially adult models with superficial safeguards layered on top. The organization warned that Gemini can still surface "inappropriate and unsafe" material, including content related to sex, drugs, alcohol, and mental health advice that may be harmful to emotionally vulnerable youth.

"Gemini gets some basics right, but it stumbles on the details," said Robbie Torney, Senior Director of AI Programs at Common Sense Media. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development."

AI and mental health risks

The report comes amid growing scrutiny of AI's role in teen mental health. Recent lawsuits against OpenAI and Character.AI allege that chatbot interactions contributed to teen suicides. Gemini's potential to deliver unsafe advice, even unintentionally, has amplified calls for stricter oversight and child-specific design standards.

Apple’s involvement raises stakes

The timing of the report is especially critical as leaks suggest Apple may integrate Gemini into its upcoming AI-powered Siri. If true, this could expose millions of young users to the platform unless Apple addresses the flagged safety issues.

Google responds

Google pushed back against the assessment, stating that it has policies and safeguards in place for users under 18 and regularly consults external experts to improve protections. The company acknowledged that some responses weren't working as intended and said it has added new safeguards to address those gaps.
