ARTICLE AD BOX
A Gen-AI 'mistake' that ecommerce giant Amazon reportedly made recently is a lesson for almost every techie engineer who used or plans to use the technology to code. According to a report in Bloomberg, a hacker recently infiltrated an AI-powered plugin for Amazon.com Inc.’s coding tool, Q Developer, secretly instructing it to delete files from users’ computers. The breach, reportedly detailed in a 404 Media investigation, highlights a significant security vulnerability in generative AI tools that has been largely overlooked amid the rush to adopt the technology.The incident is said to have occurred in late June when the hacker submitted a seemingly legitimate update, or “pull request,” to the public GitHub repository hosting Amazon’s Q Developer code. Amazon, like many tech companies, allows external developers to propose improvements to its open-source projects. The malicious update, which included hidden instructions to reset systems to a “near-factory state,” went undetected and was approved, Bloomberg noted. The hacker exploited the AI’s susceptibility to natural-language manipulation, a tactic that blends technical exploits with social engineering.Amazon distributed the tampered version of Q Developer, putting users at risk of data loss. Fortunately, the hacker minimized the impact to demonstrate the flaw, and Amazon “quickly mitigated” the issue, according to the company’s statement to Bloomberg. However, the breach underscores broader security concerns in
AI-driven software development
.
Vibe coding may be way to go, but there is a security tip
Generative AI is transforming coding, enabling developers to save hours by auto-completing code or writing it from natural-language prompts, a trend dubbed “vibe coding.” Startups like Replit, Lovable, and Figma, valued at $1.2 billion, $1.8 billion, and $12.5 billion respectively by Pitchbook, have capitalized on this, often building on models like OpenAI’s ChatGPT or Anthropic’s Claude. Yet, vulnerabilities persist. The 2025 State of Application Risk Report by Legit Security, cited in the report, found that 46% of organizations using AI for software development do so in risky ways, with many unaware of where AI is deployed in their systems.Other incidents reinforce the trend. Lovable, described by Forbes as the fastest-growing software startup, recently left its databases unprotected, exposing user data, Bloomberg noted. Replit, a competitor, discovered the flaw, prompting Lovable to admit on Twitter, “We’re not yet where we want to be in terms of security.”
What should developers do
To mitigate risks, experts suggest instructing AI models to prioritize secure code or mandating human audits of AI-generated code, though this could reduce efficiency, Bloomberg reported. As “vibe coding” democratizes software development, the security challenges it introduces demand urgent attention to prevent future exploits.