product update

Anthropic's Claude experiences outage as GitHub issues citing quality concerns surge 3.5× since January

TL;DR

Anthropic's Claude.ai and Claude Code suffered a 48-minute outage on April 13, 2026, with elevated error rates from 15:31 to 16:19 UTC. GitHub issues citing quality concerns have increased 3.5× over the January-February baseline, though SWE-Bench-Pro scores show no substantive change since February.


Anthropic's Claude.ai and Claude Code experienced a major outage on April 13, 2026, with elevated error rates affecting both services from 15:31 to 16:19 UTC—a 48-minute disruption that compounds growing user complaints about output quality.

GitHub issues mentioning quality concerns in the Claude Code repository have increased sharply since January 2026. According to an analysis conducted by Claude itself on the repository's open issues, quality complaints jumped from a January-February baseline to 18 issues in March, with April already recording 20+ quality issues in the first 13 days—a 3.5× increase over the earlier baseline.

Capacity management and user complaints

The complaints have coincided with measures Anthropic has taken to reduce usage during peak hours in order to balance capacity against demand. Users have filed issues including "Claude Code is unusable for complex engineering tasks with the Feb updates" (#42796, addressed by Claude Code head Boris Cherny), "Artificial degradation, Acquisition Bias, and unacceptable compute throttling for paid users" (#46949), and "Opus 4.6: Severe quality degradation on iterative coding tasks" (#46099).

One unverified claim alleges that "Claude autonomously deleted 35,254 production customer message records and 35,874 billing transactions" at a company called JIXEN. The Register reported that it was unable to substantiate the claim, which came from an account with no other posts, and user error has not been ruled out in reported data loss incidents.

Benchmark performance remains stable

Despite the increase in user-reported quality issues, data from Margin Lab shows Claude Opus 4.6 has maintained its score on the SWE-Bench-Pro test. Assessments conducted since February 2026 show some variation but no substantive change in performance on this benchmark.

The Claude Code GitHub repository automatically closes issues after a period of inactivity via GitHub Actions, which may understate the number of unresolved problems. Conversely, some issues may themselves be AI-generated, a concern open source maintainers have widely reported, which could be inflating report volume.

Anthropic did not respond to a request for comment on the quality concerns or the April 13 outage.

What this means

The disconnect between stable benchmark scores and surging user complaints suggests either that the benchmark fails to capture real-world performance degradation, or that user perception is being shaped by capacity throttling and service reliability problems. The 48-minute outage and the acknowledged peak-hour usage reduction indicate Anthropic is managing infrastructure constraints while trying to maintain service quality, a balancing act that appears to be eroding user confidence regardless of what automated benchmarks show.

Related Articles

product update

Microsoft developing local AI agent to compete with open-source OpenClaw

Microsoft is testing OpenClaw-like features for Microsoft 365 Copilot aimed at enterprise customers, the company confirmed to The Information. The agent would run continuously to complete multi-step tasks over extended periods, distinguishing it from Microsoft's existing cloud-based agents like Copilot Cowork and Copilot Tasks.

product update

Anthropic completes Microsoft Office integration with Claude for Word add-in

Anthropic released a Claude add-in for Microsoft Word, completing its integration across all three major Office applications. The Word add-in joins existing Excel and PowerPoint add-ins, allowing Claude to rewrite text, respond to comments, and track changes across Office documents.

research

AI agent skills fail in real-world conditions, researchers find testing 34,000 skills

A large-scale study testing 34,198 real-world skills reveals that AI agent performance drops drastically when moving from curated benchmarks to realistic conditions. Claude Opus 4.6's pass rate fell from 55.4% with hand-selected skills to 38.4% in truly realistic scenarios, while weaker models such as Kimi K2.5 performed below their no-skill baselines.

model release

Anthropic launches Mythos AI model claiming zero-day vulnerability discovery capabilities

Anthropic has launched Mythos, an AI model the company claims can identify and exploit zero-day vulnerabilities with significant capability. The model has not been released publicly, with Anthropic citing security concerns. The announcement raises questions about the model's actual capabilities versus pre-IPO positioning.
