Google’s Threat Intelligence Group (GTIG) recently revealed that it successfully blocked a massive cyberattack where criminals used artificial intelligence (AI) to discover and weaponise a previously unknown software flaw.
Google’s message is clear: the era of the ‘AI-powered hacker’ has officially arrived. The discovery effectively confirms the “doomsday” warnings issued just weeks ago by AI startup Anthropic when it unveiled its powerful model, Mythos.
What Google’s team found
Google reported “high confidence” that a criminal group used a Large Language Model (LLM) to identify a “zero-day” vulnerability – a software bug unknown to the developers themselves.
This specific flaw allowed the hackers to bypass two-factor authentication (2FA), the very security layer most banks and businesses rely on to keep hackers out.

“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” Google wrote in the post, without disclosing the name of the hacker group. Google said it does not believe that its homegrown Gemini model was used.
Google says that while it disrupted the plot before it could turn into a “mass exploitation event,” the speed and precision with which the AI found the flaw have alarmed experts.

“It’s here. The era of AI-driven vulnerability and exploitation is already here,” said John Hultquist, chief analyst at Google’s threat intelligence arm.
Why 'Mythos' sent banks into panic
The news validates the decision by AI firm Anthropic last month to delay the release of its Mythos model.
Anthropic warned that Mythos was so powerful at hacking that it could prey on decades-old vulnerabilities hidden in the world's critical infrastructure.

The fear that a tool like Mythos could be used to systematically dismantle bank security led to a series of urgent White House meetings. Since then, Anthropic has only released the model to a 'closed' group of partners, including JPMorgan Chase, Apple, and CrowdStrike, under a security initiative called Project Glasswing.

According to Google, groups linked to China and North Korea are showing significant interest in using AI to supercharge their malware. Unlike government spies who move slowly, criminal hackers use AI to move at "lightning speed," aiming to extort data or launch ransomware before a fix can even be written.