Anthropic's Mythos Model Triggers Cybersecurity Stock Sell-Off
Cybersecurity stocks declined sharply Friday following reports that Anthropic is testing Mythos, its most advanced AI model to date, which poses potential security risks due to enhanced cyber capabilities.
Market Impact
The iShares Cybersecurity ETF lost 3% on the news. Major sector players experienced steeper declines:
- CrowdStrike and Palo Alto Networks: down 7%
- Zscaler and SentinelOne: down 8%
- Tenable: down 11.7%
- Okta and Netskope: down 6%
The sell-off reflects growing investor concern about AI-driven disruption in the cybersecurity space, which has faced consistent pressure this year from fears that advanced AI models could undermine traditional security tools.
About Mythos
According to Fortune, which first reported the news Thursday citing a publicly accessible draft blog post, Mythos represents Anthropic's most powerful model. The company is planning a deliberately slow rollout due to potential cybersecurity implications, suggesting internal concerns about misuse or unintended consequences.
Anthropic did not immediately respond to CNBC's request for comment, leaving technical specifications unconfirmed. The company has not disclosed details about Mythos's parameters, context window, pricing, or exact capabilities.
Broader AI-Cybersecurity Tension
This marks the second major cybersecurity stock decline tied to Anthropic in recent weeks. Last month, cyber stocks fell after Anthropic announced a code-scanning security tool integrated into Claude, demonstrating how AI advances—even those designed to improve security—can unsettle the sector.
The threat landscape is shifting rapidly. Anthropic disclosed in November that a state-sponsored group in China used Claude to automate cyberattacks, highlighting real-world risks as AI models become more capable at executing complex technical tasks.
AI and autonomous agents are making cyberattacks simultaneously more sophisticated and easier to launch, forcing cybersecurity vendors to innovate faster or face obsolescence. The sector faces a paradox: the same AI capabilities that concern investors could eventually become necessary defensive tools.
What This Means
The Mythos reports expose a fundamental market anxiety about AI disruption in cybersecurity: the concern is not that the model exists, but that it raises questions about whether traditional endpoint and network security tools remain competitive as AI capabilities expand. Anthropic's cautious rollout approach signals that the company recognizes dual-use risks, but the market's response shows investors are pricing in potential industry upheaval regardless of deployment speed. Cybersecurity vendors must accelerate AI integration into their own offerings or risk becoming obsolete within the next 18 to 24 months.
Related Articles
OpenAI offers EU preview access to GPT-5.5-Cyber model while Anthropic withholds Mythos
OpenAI announced GPT-5.5-Cyber is rolling out in limited preview to vetted cybersecurity teams and is in discussions with the European Commission about preview access. Anthropic released its Mythos model a month ago but has yet to grant EU access for security review.
OpenAI releases GPT-5.5-Cyber for vetted security teams with relaxed safeguards
OpenAI released GPT-5.5-Cyber in limited preview on Thursday, a variant of its GPT-5.5 model with relaxed safeguards for vetted cybersecurity teams. The model is trained to be more permissive on security-related tasks including vulnerability identification, patch validation, and malware analysis.
OpenAI Opens GPT-5.5-Cyber to Vetted Defenders After Model Matches Anthropic's Mythos in Security Testing
OpenAI is providing a less-restricted version of GPT-5.5 to vetted cybersecurity defenders through its Trusted Access for Cyber program. The model, dubbed GPT-5.5-Cyber, completed a 32-step simulated corporate cyberattack in 2 of 10 test runs according to the U.K. AI Security Institute, narrowly trailing Anthropic's Mythos, which succeeded in 3 of 10 attempts.
Anthropic's Mythos model finds thousands of high-severity bugs in Firefox, including 15-year-old vulnerabilities
Mozilla's Firefox team reports that Anthropic's Mythos model has discovered thousands of high-severity security vulnerabilities, including bugs that had remained undetected for more than 15 years. In April 2026, Firefox shipped 423 bug fixes compared to just 31 in April 2025, marking a 13x increase attributed to AI-assisted vulnerability detection.