Anthropic’s biggest AI leak yet: How Claude Code spill exposed secrets and sparked chaos


Anthropic accidentally leaked the full source code of Claude Code, its flagship AI coding agent, on March 31. The code was exposed through a 59.8 MB JavaScript source map (.map) file bundled in the public npm package @anthropic-ai/claude-code, version 2.1.88.

The issue was first flagged by security researcher Chaofan Shou (@Fried_rice) on X, leading to rapid sharing of the leaked data. The file contained approximately 513,000 lines of unobfuscated TypeScript across 1,906 files, revealing the complete client-side agent harness. Within hours, the code was downloaded, mirrored on platforms like GitHub, and widely circulated among developers and researchers. The company blamed human error for the leak and said it was working on a fix. Despite efforts to remove it using legal notices, the code remains available across multiple online repositories.
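The exposure mechanism itself is mundane: the source map format is a JSON document whose sourcesContent array can embed every original source file verbatim, so shipping the .map alongside the bundle ships the sources too. A minimal sketch of the recovery, with sample map data invented here for illustration:

```javascript
// A source map (format v3) carrying original sources inline. The paths and
// contents below are made up; the structure matches the published format.
const sampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["src/main.tsx", "src/agent/harness.ts"],
  sourcesContent: [
    "// original TypeScript of main.tsx ...",
    "// original TypeScript of harness.ts ...",
  ],
  mappings: "AAAA",
});

// Recovering the originals is one JSON.parse away -- no deobfuscation needed.
const map = JSON.parse(sampleMap);
map.sources.forEach((path, i) => {
  console.log(`${path}: ${map.sourcesContent[i].length} chars recovered`);
});
```

This is why a single mis-published .map file can undo the obfuscation of an entire bundled package.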

Claude Code leak details

Developers who examined the leaked data said it revealed more than just clean engineering.

The code included several unreleased features that Anthropic had been quietly building behind compile-time feature flags. One, codenamed Kairos, appears to be an always-on background agent with memory consolidation—essentially a version of Claude that never fully switches off. Another is a full companion pet system called Buddy, complete with 18 species, rarity tiers, shiny variants, and stat distributions.

The leak also mentioned an Undercover Mode, described as auto-activating for Anthropic employees on public repos, which strips AI attribution from commits with no visible off switch.

The code also revealed advanced system features. Coordinator Mode turns Claude into a central system that manages multiple worker agents at the same time. Auto Mode uses an AI classifier to silently approve tool permissions, removing the usual confirmation prompts.

Beyond the hidden features, the leak gave outsiders a rare look at how a well-funded AI product actually gets built under pressure. The main user interface is a single React component of more than 5,005 lines, containing 68 state hooks, 43 effects, and JSX nesting that goes 22 levels deep. Engineers reading it noted a TODO comment sitting next to a disabled lint rule on line 4114. The entry point file, main.tsx, runs to 4,683 lines and handles everything from OAuth login to mobile device management.
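The Auto Mode description suggests a simple gating pattern: score each tool call, and only prompt the user when the score crosses a threshold. A purely hypothetical sketch of that pattern, with every name and threshold invented here (the leak describes an AI classifier; a keyword list stands in for one):

```javascript
// Hypothetical stand-in for the classifier: treat shell and write
// operations as risky, everything else as safe.
function classifyRisk(toolCall) {
  const risky = ["bash", "write_file", "delete"];
  return risky.includes(toolCall.tool) ? 0.9 : 0.1;
}

// Below the threshold the call is silently approved; above it, the
// usual confirmation prompt would be shown.
function shouldPrompt(toolCall, threshold = 0.5) {
  return classifyRisk(toolCall) >= threshold;
}

console.log(shouldPrompt({ tool: "read_file" })); // false -> auto-approved
console.log(shouldPrompt({ tool: "bash" }));      // true  -> ask the user
```

The design question such a gate raises is exactly what worried observers: a misclassified call gets approved with no human in the loop.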

Sixty-one separate files contain explicit comments about circular dependency workarounds. A type name used over 1,000 times across the codebase reads: AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS.

One standout detail: the word "duck" is encoded in hexadecimal—String.fromCharCode(0x64,0x75,0x63,0x6b)—because the string apparently collides with an internal model codename that Anthropic's CI pipeline scans for.

Rather than add a regex exception, every animal species in the pet system got hex-encoded.
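The trick is one line of standard JavaScript: build the string from character codes so the literal never appears in the source text a scanner greps.

```javascript
// Reconstruct "duck" from hex character codes, as quoted from the leak;
// the plain word never appears as a literal for a CI scanner to match.
const species = String.fromCharCode(0x64, 0x75, 0x63, 0x6b);
console.log(species); // → "duck"
```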

Anthropic blames human error for Claude Code leak

Earlier this month, Anthropic said that human error led to the leak of the source code for its AI agent, Claude Code. The company described the incident as a release error rather than a security breach: a packaging issue unintentionally exposed part of its internal code.

“No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again,” an Anthropic spokesperson said.

Claude Code creator Boris Cherny confirmed the cause on X (formerly known as Twitter): a manual deploy step that didn't get done. "Our deploy process has a few manual steps, and we didn't do one of the steps correctly," he wrote. Early speculation pointed to Bun, the JavaScript runtime Anthropic acquired, citing a known bug where source maps get inadvertently served.

Cherny shut that down quickly: unrelated, just developer error.

When someone on X asked whether the person responsible was "still breathing," Cherny didn't flinch. "Full trust," he wrote. "In this case the problem wasn't the person, it was infra that was error prone. Anyone could have made this same mistake by accident." No one was fired. The fix, somewhat counterintuitively, is to move faster: more automation, with Claude itself checking deployment results before anything goes out.

How the Claude Code leak could be a huge blow for Anthropic

According to a Wall Street Journal report, the leaked source code includes Anthropic's proprietary techniques, tools, and instructions for directing its AI models to act as coding agents. These are collectively referred to as a "harness," a term that reflects how they allow users to control and guide the models, much as a harness allows a rider to direct a horse.

As a result, Anthropic's competitors, along with many startups and developers, now have a clearer path to copying Claude Code's features without having to reverse-engineer them, a practice already common in the AI space.

The incident challenges Anthropic on two fronts: its image as a safety-focused AI company, and the exposure of sensitive internal technology at a time when competition for enterprise customers is intensifying. Claude Code has been gaining popularity among developers and played a key role in helping Anthropic secure a new funding round valuing the company at $380 billion, ahead of a possible public offering this year.

A significant part of Claude Code's appeal lies in how it connects the company's AI models and guides them to work in ways that help developers complete tasks, an approach known as "tooling" that practitioners consider as much craft as technical execution.

Anthropic’s Claude leak sends Chinese developers ‘partying’

According to a report in the South China Morning Post, Chinese developers have been actively exploring and using the leaked code since it became public. Several developers in China are said to be scrambling to download copies and poring over the files to learn every detail.

What reportedly makes Chinese developers so enthusiastic about Anthropic’s AI models is their advanced coding capabilities. On Chinese forums, many shared what they deemed the secret recipe for Claude Code: its architecture, agent design, and memory mechanisms, among other details.

One forum topic, titled the “Claude Code source code leak incident,” has drawn millions of views, with many local developers sharing what they had learned and suggesting how to make better use of the tool.

Though some industry experts note that the leaked file included only the code for Claude Code, not the model weights, others argue the data is still a treasure trove for developers. As Zhang Ruiwang, a Beijing-based IT system architect, told the South China Morning Post, “But the code batches are indeed a treasure for AI companies or developers, as they revealed all the key engineering decisions Anthropic made.”

What the Claude Code leak suggests about Anthropic’s roadmap

As previously mentioned, the leaked code references a system called Kairos, a persistent “daemon” that continues running even after the Claude Code terminal is closed. It uses prompts that appear occasionally to check whether new actions are needed, as well as a “PROACTIVE” flag for “surfacing something the user hasn’t asked for and needs to see now.”

Kairos is also linked to a file-based memory system designed to maintain continuity across sessions, helping the AI build “a complete picture of who the user is, how they’d like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you.”

The leaked code includes links to an AutoDream system that tracks this memory over time. When a user is idle or ends a session, Claude Code is told, “You are performing a dream—a reflective pass over your memory files.” This process involves scanning transcripts for “new information worth persisting,” removing “near-duplicates” and “contradictions,” and trimming outdated or overly detailed entries. It also directs the system to monitor “existing memories that drifted,” with the aim to “synthesize what you’ve learned recently into durable, well-organized memories so that future sessions can orient quickly.”

Another feature, called “Undercover mode,” appears to allow contributions to public open-source repositories without revealing that they originate from an AI system. The prompts tied to this mode emphasise protecting “internal model codenames, project names, or other Anthropic-internal information.” They also instruct that commits should “never include… the phrase ‘Claude Code’ or any mention that you are an AI,” and avoid attribution like “co-Authored-By lines or any other attribution.”

The codebase also includes a lighter feature called Buddy, described as a “separate watcher” that “sits beside the user’s input box and occasionally comments in a speech bubble.” These companions are small ASCII-style animations that can take on different shapes. Internal notes say the feature was supposed to be released in a small number of places first, then more widely.

Other features referenced in the leak include an UltraPlan mode that lets Claude “draft an advanced plan you can edit and approve,” with execution times ranging from 10 to 30 minutes. There is also mention of a Voice Mode for direct spoken interaction, a Bridge mode enabling remote sessions controlled from external devices, and a Coordinator tool designed to “orchestrate software engineering tasks across multiple workers” using parallel processes and WebSocket communication.


Engineer rewrites Claude Code using AI tools and shares it on Reddit

Just as Anthropic was working to take down the leaked code, a programmer used other AI tools to rewrite Claude Code’s instructions in a different programming language.

As per a post on Reddit, the programmer used AI tools to rewrite the instructions in Python. Here’s the post:

“On March 31, someone leaked the entire source code of Anthropic’s Claude Code through a sourcemap file in their npm package. A developer named realsigridjin quickly backed it up on GitHub. Anthropic hit back fast with DMCA takedowns and started deleting the repos. Instead of giving up, this guy did something wild. He took the whole thing and completely rewrote it in Python using AI tools. The new version has almost the same features, but because it’s a full rewrite in a different language, he claims it’s no longer copyright infringement. The rewrite only took a few hours. Now the Python version is still up and gaining stars quickly. A lot of people are saying this shows how hard it’s going to be to protect closed source code in the AI era. Just change the language and suddenly DMCA becomes much harder to enforce.”

By doing this, the programmer said, they aim to keep the information available without risking a takedown. The Python version has itself become popular on the platform.

Anthropic Claude Code leak not a first

The latest leak of Anthropic’s Claude Code is not an isolated incident. According to a previous Fortune report, a separate leak exposed nearly 3,000 files, including a draft blog post revealing a powerful upcoming model referred to internally as both "Mythos" and "Capybara."

Security researchers who reviewed the Claude Code leak also warned that it potentially allows competitors to reverse-engineer its agentic harness and that, even without proper access keys, certain internal Anthropic systems may remain reachable, raising concerns about nation-state exploitation of the company's most capable models.

Anthropic confirmed the incident but sought to limit the damage. A company spokesperson told Fortune no sensitive customer data or credentials were exposed, describing the incident as a release packaging issue caused by human error rather than a security breach, and adding that the company is rolling out measures to prevent a recurrence.
