model release

Anthropic restricts Claude Mythos access, exposing Europe's lack of AI safety infrastructure

TL;DR

Anthropic is restricting access to Claude Mythos Preview, a model the company claims can find security vulnerabilities better than most humans, to 52 technology partners. While the UK's AI Security Institute has already tested Mythos and published findings, most European cybersecurity agencies have limited or no access, revealing a structural gap in Europe's AI safety evaluation capacity.


Anthropic restricts Claude Mythos Preview to select partners

Anthropic announced it will limit access to Claude Mythos Preview to a select group of technology partners, citing unprecedented cybersecurity risks. Under a program called "Project Glasswing," the company granted access to 12 US tech companies including Apple, Microsoft, and Amazon, plus 40 additional unnamed organizations.

The company claims the model can find security vulnerabilities better than most humans and could increase the likelihood of large-scale AI-powered cyberattacks.

Europe's limited visibility into the model

According to POLITICO, officials from eight national European cybersecurity agencies reported minimal contact with Anthropic regarding Mythos. Only Germany's BSI confirmed opening talks with Anthropic without receiving direct testing access. BSI chief Claudia Plattner called it an urgent question whether such tools would become available on the open market, with profound implications for national and European security.

The EU's cybersecurity agency ENISA declined to comment on whether it is in contact with Anthropic. The EU Commission's AI Office maintains dialogue with Anthropic as part of the EU Code of Practice for AI models, but whether Mythos is part of those conversations or whether the office has received access remains unanswered.

UK's AISI already tested and assessed the model

The UK's AI Security Institute (AISI) tested Mythos and published its assessment on Monday. UK AI Minister Kanishka Narayan confirmed the institute had taken action based on its findings.

According to the AISI assessment, "Mythos Preview represents a significant leap over previous frontier models in a landscape where cyber capabilities were already advancing rapidly." The institute noted it couldn't confirm with certainty whether Mythos could successfully attack well-defended systems.

Structural gap in European AI safety capacity

The UK's AISI, founded in 2023 with 100 million pounds in public funding, employs more than 100 technical staff and has tested at least 16 models, including three frontier models before public launch. It has recruited high-profile researchers from OpenAI and Google DeepMind.

The EU AI Office has more than 125 staff but faced major hiring difficulties as recently as fall 2024, according to Transformer News. Rigid pay structures limit its ability to attract private-sector talent, with hiring processes dragging on for months. Individual EU member states have built parallel structures: France launched INESIA in early 2025, and Spain monitors AI deployment through AESIA.

Industry and expert reactions

Daniel Privitera, founder of Berlin-based AI nonprofit KIRA, told POLITICO that "Mythos offers an early taste of how critical access to frontier AI capabilities will be in the years ahead. Europe currently has no plan for securing that access."

AI pioneer Yoshua Bengio called it "deeply concerning that tech companies, not regulators, are deciding how to handle these risks." Former European Parliament member Marietje Schaake, who helped shape the EU's Code of Practice for AI developers, said "now would be a good time to agree on disclosure rules and oversight mechanisms."

Regulatory implications

Under the EU AI Act, providers like Anthropic must address cyber risks posed by their models. The Cyber Resilience Act sets mandatory cybersecurity requirements for products with digital components sold in the EU market.

According to EU guidelines, internal use of an AI model counts as placing it on the market if that use is essential to providing a product or service in the EU. EU Commission digital spokesperson Thomas Regnier said the Commission is examining possible implications under EU legislation.

What this means

The Mythos situation reveals that Europe's AI safety challenge isn't primarily about regulation—Anthropic signed the EU Code of Practice along with other major AI companies. The core problem is Europe's lack of technical infrastructure and talent to conduct independent safety evaluations at the frontier level. The UK's AISI demonstrates what's possible with focused funding, competitive salaries, and streamlined hiring, giving it credibility to secure early access. Europe's fragmented approach—with the underfunded EU AI Office and separate national institutes—leaves the region unable to assess cutting-edge models that could pose cybersecurity risks. This structural weakness becomes more critical as AI capabilities advance and potentially adversarial actors like DeepSeek enter the frontier model space.

Related Articles

benchmark

Claude Mythos achieves 73% success rate on expert-level hacking challenges, completes full network takeover in 3 of 10 attempts

The UK's AI Safety Institute reports Claude Mythos Preview achieved a 73% success rate on expert-level capture-the-flag cybersecurity challenges and became the first AI model to complete a full 32-step simulated corporate network takeover, succeeding in 3 out of 10 attempts. The testing occurred in environments without active security monitoring or defenders.

model release

Anthropic launches Mythos AI model claiming zero-day vulnerability discovery capabilities

Anthropic has launched Mythos, an AI model the company claims can identify and exploit zero-day vulnerabilities with significant capability. The model has not been released publicly, with Anthropic citing security concerns. The announcement raises questions about the model's actual capabilities versus pre-IPO positioning.

model release

Trump officials encourage banks to test Anthropic's Mythos model for security vulnerabilities

U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned bank executives this week and encouraged them to test Anthropic's newly announced Mythos model for detecting security vulnerabilities. According to Bloomberg, major banks including Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are already testing the model alongside JPMorgan Chase, despite Anthropic's stated plan to limit initial access.

model release

White House officials questioned tech CEOs on AI security ahead of Anthropic's Mythos release

Vice President JD Vance and Treasury Secretary Scott Bessent held a call with leading tech CEOs including Anthropic's Dario Amodei, OpenAI's Sam Altman, and Google's Sundar Pichai to discuss AI model security and cyber attack response. The meeting occurred one week before Anthropic released its Mythos model, which has major cybersecurity implications and raised concerns at the Federal Reserve and among top U.S. banks.
