Anthropic's Claude Code leak exposes Tamagotchi pet and always-on agent features
A source code leak in Anthropic's Claude Code 2.1.88 update exposed more than 512,000 lines of TypeScript, revealing unreleased features including a Tamagotchi-like pet interface and a KAIROS feature for background agent automation. Anthropic confirmed the leak was caused by a packaging error, not a security breach, and has since fixed the issue.
Anthropic's Claude Code Leak Exposes 512,000 Lines Including Unreleased Features
Anthropic's Claude Code 2.1.88 update accidentally included a source map file containing its full TypeScript codebase—more than 512,000 lines of code—which users quickly discovered and copied to public GitHub repositories. The leak has since amassed over 50,000 forks.
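This kind of exposure is possible because bundlers can embed the complete original sources in a map file's `sourcesContent` field (per the Source Map v3 format), so shipping the `.map` alongside the minified bundle effectively ships the codebase. A minimal sketch, using hypothetical file names and contents, of how anyone can recover originals from a map alone:

```typescript
// Hypothetical source map object, as a bundler might emit it.
// Real maps are JSON files loaded from disk; the key point is that
// `sourcesContent` holds the full, unminified original files.
const sourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/pet.ts", "src/agent.ts"], // hypothetical paths
  sourcesContent: [
    "export const pet = 'reacts to your coding';",
    "export const agent = 'always-on background agent';",
  ],
  mappings: "",
};

// Recover every original file directly from the map.
const recovered = sourceMap.sources.map((path, i) => ({
  path,
  code: sourceMap.sourcesContent[i],
}));

for (const { path, code } of recovered) {
  console.log(`${path}: ${code.length} chars`);
}
```

Excluding `.map` files from published packages (or stripping `sourcesContent` before release) is the usual safeguard this incident suggests was missed.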
What the Leak Revealed
Users analyzing the code have identified several unreleased features:
Tamagotchi-style pet interface: A virtual pet that "sits beside your input box and reacts to your coding," according to Reddit users who reviewed the code.
KAIROS feature: Described as an "always-on background agent" capable of executing tasks autonomously on a user's behalf without explicit prompts.
Internal architecture details: The leak exposed Anthropic's memory architecture, AI bot instructions, and comments from developers. One engineer's note acknowledged that "memoization here increases complexity by a lot, and im not sure it really improves performance."
Anthropic's Response
Anthropic spokesperson Christopher Nulty stated: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
The company fixed the issue shortly after discovery, but the code had already been mirrored publicly.
Security and Operational Impact
Arun Chandrasekhar, an AI analyst at Gartner, told The Verge that while the leak poses "risks such as providing bad actors with possible outlets to bypass guardrails," its long-term impact may be limited. He framed it as "a call for action for Anthropic to invest more in processes and tools for better operational maturity."
The leak does not appear to include API keys, credentials, or customer data—distinguishing it from a full security breach. However, it provides potential attackers with detailed knowledge of Claude Code's internals, including security controls and implementation details.
Context: Claude Code's Evolution
Anthropic launched Claude Code in February 2025 as an AI-powered coding assistant. The tool gained significant traction after Anthropic added agentic capabilities, allowing it to perform tasks autonomously. The company also released Cowork, a platform that integrates Claude Code with computer control capabilities.
What This Means
The leak accelerates public visibility of Anthropic's upcoming features by months, potentially influencing competitor roadmaps and user expectations. The Tamagotchi pet and KAIROS agent features suggest Anthropic is moving toward more interactive and autonomous coding assistance. However, the incident highlights operational vulnerabilities in deployment processes at major AI labs—a concern that extends beyond Anthropic to the entire industry. Organizations handling sensitive AI development will likely scrutinize their build and release pipelines more carefully.