OpenAI offers EU preview access to GPT-5.5-Cyber model while Anthropic withholds Mythos
OpenAI announced GPT-5.5-Cyber is rolling out in limited preview to vetted cybersecurity teams and is in discussions with the European Commission about preview access. Anthropic released its Mythos model a month ago but has yet to grant EU access for security review.
OpenAI announced Thursday that GPT-5.5-Cyber, a variant of its latest AI model, is rolling out in a limited preview to vetted cybersecurity teams. The company is now in discussions with the European Commission about providing preview access to the model for security review.
"We welcome OpenAI's transparency and intent to give [the] commission access to [the] new model," commission spokesperson Thomas Regnier said at a press briefing Monday. He confirmed an exchange had taken place between OpenAI and the EU, with further discussions planned this week around access to the model.
Anthropic holds back on Mythos access
Anthropic released its Mythos model a month before OpenAI's GPT-5.5-Cyber announcement. According to Regnier, Mythos prompted "a wave of fears around cyberattacks on critical software." Despite this, Anthropic has not granted the EU preview access to review the model.
The EU is in discussions with Anthropic about access, but Regnier noted these talks are at a "different stage" than those with OpenAI. Neither OpenAI nor Anthropic has commented publicly on the matter.
Limited rollout for cybersecurity review
GPT-5.5-Cyber is a specialized variant of OpenAI's latest model line, adapted for cybersecurity applications. The limited preview restricts access to vetted cybersecurity teams, suggesting heightened concern about potential misuse.
The European Commission's request for preview access reflects growing regulatory scrutiny of AI models with potential security implications. "This will allow us to follow deployment of the model very closely, and address security concerns," Regnier said.
What this means
The divergent approaches from OpenAI and Anthropic reveal different strategies for handling regulatory engagement on sensitive AI capabilities. OpenAI's proactive engagement with EU regulators contrasts with Anthropic's month-long delay in providing access, despite Mythos apparently raising more immediate security concerns. This dynamic could influence future AI regulation in Europe and set precedents for how AI labs coordinate with governments on models with dual-use potential. The EU's emphasis on preview access suggests regulators are moving toward more active oversight before models reach wide deployment.
Related Articles
OpenAI Opens GPT-5.5-Cyber to Vetted Defenders After Model Matches Anthropic's Mythos in Security Testing
OpenAI is providing a less-restricted version of GPT-5.5 to vetted cybersecurity defenders through its Trusted Access for Cyber program. The model, dubbed GPT-5.5-Cyber, completed a 32-step simulated corporate cyberattack in 2 out of 10 test runs, according to the U.K. AI Security Institute, narrowly trailing Anthropic's Mythos, which succeeded in 3 out of 10 attempts.
OpenAI releases GPT-5.5-Cyber for vetted security teams with relaxed safeguards
OpenAI released GPT-5.5-Cyber in limited preview on Thursday, a variant of its GPT-5.5 model with relaxed safeguards for vetted cybersecurity teams. The model is trained to be more permissive on security-related tasks including vulnerability identification, patch validation, and malware analysis.
Anthropic's Mythos model finds thousands of high-severity bugs in Firefox, including 15-year-old vulnerabilities
Mozilla's Firefox team reports that Anthropic's Mythos model has discovered thousands of high-severity security vulnerabilities, including bugs that had gone undetected for more than 15 years. In April 2026, Firefox shipped 423 bug fixes, up from just 31 in April 2025, a roughly 13x increase attributed to AI-assisted vulnerability detection.
OpenAI launches Trusted Contact feature allowing ChatGPT to alert designated friends during suicide risk
OpenAI has launched Trusted Contact for ChatGPT, allowing users 18+ to designate one adult contact who can be notified if the company's trained human review team detects serious self-harm risk. The feature comes after over 1 million of ChatGPT's 800 million weekly users expressed suicidal thoughts in conversations, and follows a 2025 wrongful death lawsuit.