product update

Anthropic identifies three bugs causing Claude Code quality degradation over two months

TL;DR

Anthropic confirmed that widespread complaints about Claude Code quality degradation were caused by three separate bugs in the coding assistant's harness, not the underlying models. One critical bug caused Claude to clear its thinking context every turn in sessions that had been idle for over an hour, making it appear forgetful and repetitive.

Anthropic has published a postmortem confirming that user complaints about Claude Code quality issues over the past two months were caused by bugs in the coding assistant's infrastructure, not the underlying AI models.

According to the company's analysis, three separate issues in the Claude Code harness, the software layer that sits between users and the underlying models, created "complex but material problems which directly affected users."

Critical session bug affected idle workflows

The most significant bug stemmed from a March 26 update intended to improve performance. According to Anthropic: "We shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive."
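The failure mode described above can be illustrated with a minimal sketch. All names and structure here are hypothetical, since the real harness is not public; the point is only the difference between a trim flag that is reset after firing once (the intended behavior) and one that is never reset (the bug):

```python
IDLE_THRESHOLD_S = 3600  # one hour of idle time before older thinking is trimmed

class Session:
    """Toy session that trims older 'thinking' context after a long idle gap.

    With fixed=False, the resumed_from_idle flag is never cleared, so the
    trim repeats on every subsequent turn, mirroring the reported bug.
    """

    def __init__(self, fixed=True):
        self.thinking = []            # accumulated thinking blocks
        self.resumed_from_idle = False
        self.fixed = fixed

    def turn(self, idle_seconds, thought):
        if idle_seconds > IDLE_THRESHOLD_S:
            self.resumed_from_idle = True
        if self.resumed_from_idle:
            self.thinking.clear()     # drop older thinking to reduce latency
            if self.fixed:
                self.resumed_from_idle = False  # intended: trim only once
        self.thinking.append(thought)

buggy = Session(fixed=False)
buggy.turn(7200, "a")   # resumed after 2h idle: a trim here is expected
buggy.turn(5, "b")      # bug: trims again although only 5s have passed
buggy.turn(5, "c")      # buggy.thinking is now just ["c"]

good = Session(fixed=True)
good.turn(7200, "a")
good.turn(5, "b")
good.turn(5, "c")       # good.thinking keeps ["a", "b", "c"]
```

A session stuck in the buggy path retains only its most recent turn, which matches the "forgetful and repetitive" behavior users reported.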

This bug particularly impacted developers who maintain long-running sessions. Simon Willison, who flagged the postmortem, noted he frequently leaves Claude Code sessions idle for an hour or longer and currently maintains 11 such sessions simultaneously. He estimates spending more time in these "stale" sessions than in recently started ones.

Harness complexity revealed

The incident highlights the technical challenges of building reliable AI coding assistants beyond model performance. All three identified bugs occurred in the harness layer that connects users to the underlying Claude models, rather than in the models themselves.

Anthropic has not disclosed specific details about the other two bugs or when fixes were deployed. The company also has not provided information about compensation or service credits for affected users during the two-month period.

What this means

This case demonstrates that AI coding assistant quality depends on more than model capabilities alone. The infrastructure managing context, session state, and user interactions introduces its own failure modes that can significantly degrade user experience. For teams building agentic systems, these "harness bugs" represent a distinct category of technical debt separate from model improvements. The fact that these issues persisted for two months before public acknowledgment also raises questions about monitoring and quality assurance processes for production AI systems.

Related Articles

changelog

Anthropic Python SDK v0.97.0 Adds CMA Memory Feature in Public Beta

Anthropic has released version 0.97.0 of its Python SDK, introducing CMA Memory as a public beta feature. The update includes bug fixes for API spec errors, restored missing features, and performance improvements for multipart file requests.

changelog

Anthropic reverts three system changes that degraded Claude Code performance in March and April

Anthropic confirmed three separate system changes in March and April degraded Claude Code, Claude Agent SDK, and Claude Cowork performance. The company reduced default reasoning effort from high to medium on March 4, introduced a caching bug on March 26 that cleared session data with every turn, and added restrictive word limits on April 16 that caused a 3% performance drop.

product update

Anthropic adds 15 lifestyle app integrations to Claude, including Spotify, Instacart, and Uber

Anthropic has expanded Claude's integration directory to include 15 lifestyle services including Spotify, Instacart, AllTrails, Uber, and Booking.com. The update shifts Claude's third-party connectivity from professional and educational tools to personal use cases, with apps now appearing dynamically within conversations.

product update

Anthropic adds personal app connectors to Claude for Spotify, Uber Eats, TurboTax

Anthropic has released app connectors allowing Claude to integrate directly with personal services including Spotify, Uber, Uber Eats, Instacart, TurboTax, Audible, AllTrails, and TripAdvisor. The connectors are available now across all Claude plans, with mobile support in beta.
