model release

Anthropic's unreleased Mythos model enables autonomous large-scale cyberattacks, officials warn

TL;DR

Anthropic is privately warning top government officials that its unreleased model "Mythos" makes large-scale cyberattacks significantly more likely in 2026. The model enables AI agents to operate autonomously with high sophistication to penetrate corporate, government and municipal systems. One official told Axios a large-scale attack could occur this year as employees unknowingly create security vulnerabilities through unsupervised agentic AI use.

2 min read
Anthropic is privately warning top government officials that its unreleased model, codenamed "Mythos," makes large-scale cyberattacks dramatically more likely in 2026, according to briefings shared with Axios CEO Jim VandeHei.

The model enables AI agents to operate independently with sophisticated precision to penetrate corporate, government and municipal systems at scale. According to sources briefed on the coming models, a large-scale attack powered by such technology could occur as soon as 2026, with businesses identified as primary targets.

Mythos Capabilities and Threat Profile

Fortune obtained an unpublished Anthropic blog post describing Mythos as "currently far ahead of any other AI model in cyber capabilities." The post states the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."

Unlike previous cyberattack methods, Mythos-powered agents can think, act, reason and improvise independently, without rest or pause. The threat model resembles an infinite warehouse of the most sophisticated criminals: they never sleep, learn on the fly, and persist until they succeed. Bad actors can now scale attacks simply by adding compute, so a single person can execute campaigns that once required entire teams.

Precedent: AI-Powered Cyberattack Already Documented

This isn't theoretical. Late last year, Anthropic disclosed the first documented case of a cyberattack largely executed by AI: a Chinese state-sponsored group used AI agents to autonomously hack approximately 30 global targets, with the AI handling 80-90% of tactical operations independently. That occurred before agents reached their current sophistication level.

Shadow AI: The Multiplier Problem

The threat is being amplified by "shadow AI"—employees deploying Claude, Copilot and other agentic models, often at home, without realizing they're creating security vulnerabilities. Many unknowingly connect these agents to internal work systems, opening doors for cybercriminals.

A Dark Reading poll found that 48% of cybersecurity professionals now rank agentic AI as the #1 attack vector for 2026—above deepfakes and all other threats.

What This Means

The convergence of highly capable agentic models with widespread unsupervised deployment creates an unprecedented attack surface. Organizations need immediate security policies restricting unsupervised agent use near sensitive systems. The threat window is now—not theoretical future risk. Companies should establish isolated "playpen" environments for AI experimentation and mandate employee training on agentic AI dangers before deployment accelerates further.
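What such a restriction could look like in practice is a simple egress allowlist: any tool call an agent wants to make is checked against a pre-approved set of hosts, and everything else is blocked. The sketch below is purely illustrative — the `ToolCall` and `AllowlistGate` names are hypothetical and do not come from Anthropic or any real agent framework:

```python
# Hypothetical sketch: gating an AI agent's outbound tool calls
# against an approved-host allowlist. All names here are illustrative.
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass
class ToolCall:
    tool: str    # e.g. "http_request"
    target: str  # URL or hostname the agent wants to reach


class AllowlistGate:
    """Allows a tool call only if its target host is pre-approved."""

    def __init__(self, allowed_hosts):
        self.allowed_hosts = set(allowed_hosts)

    def check(self, call: ToolCall) -> bool:
        # Fall back to the raw target string if it isn't a full URL.
        host = urlparse(call.target).hostname or call.target
        return host in self.allowed_hosts


gate = AllowlistGate({"docs.example.com"})
print(gate.check(ToolCall("http_request", "https://docs.example.com/page")))    # True
print(gate.check(ToolCall("http_request", "https://intranet.corp.local/db")))   # False
```

A real deployment would enforce this at the network or proxy layer rather than in application code, but the principle is the same: default-deny, with a short, audited list of exceptions.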

Related Articles

model release

Anthropic's Mythos model triggers 11% drop in cybersecurity stocks over hacking concerns

Cybersecurity stocks fell sharply Friday after reports that Anthropic is testing Mythos, described as its most powerful model yet, with enhanced capabilities that present potential security risks. CrowdStrike and Palo Alto Networks dropped 7%, while Tenable fell nearly 11%. Anthropic plans a cautious rollout due to cybersecurity implications.

product update

Claude adds memory import tool to help users switch from ChatGPT and other AI services

Anthropic has launched a memory import feature for Claude that lets users transfer their stored preferences, personal details, and conversation context from other AI services like ChatGPT, Google Gemini, and Microsoft Copilot. The tool generates copy-paste instructions that extract all memories from a competing service and import them into Claude, eliminating the need to rebuild your AI profile from scratch.

model release

Anthropic confirms leaked model represents major reasoning advance after security breach

A data breach at Anthropic exposed internal documents detailing an unreleased AI model the company describes as its most powerful to date. Anthropic confirmed it is already testing the model with select customers, claiming significant advances in reasoning, coding, and cybersecurity. The breach resulted from a misconfiguration in Anthropic's content management system that automatically made ~3,000 uploaded files publicly accessible.

changelog

Anthropic reduces Claude usage allowances during peak hours to manage capacity

Anthropic on Wednesday adjusted Claude's session limits for Free, Pro, and Max subscribers during peak demand hours (05:00-11:00 PT / 13:00-19:00 GMT). During these periods, usage now depletes allowances faster than real time — a five-hour allowance can be exhausted in under five hours — while off-peak usage keeps standard pacing. Approximately 7% of Pro tier users will hit limits they previously wouldn't have encountered.
