Breaking

Google Gemma 4 Runs Locally on Edge Devices, Creating Enterprise Security Blind Spot

Google released Gemma 4, an open-weights model family that runs directly on edge devices with multi-step planning and autonomous workflow capabilities. The Apache 2.0-licensed model bypasses traditional cloud security controls by executing entirely on local hardware, creating a governance blind spot for enterprise security teams.

April 13, 2026

Latest News

model release · Anthropic

Anthropic launches Mythos AI model claiming zero-day vulnerability discovery capabilities

Anthropic has launched Mythos, an AI model the company claims can identify and exploit zero-day vulnerabilities. The model has not been released publicly, with Anthropic citing security concerns. The announcement raises questions about whether the claims reflect the model's actual capabilities or pre-IPO positioning.

2 min read · via go.theregister.com
model release · Anthropic

Trump officials encourage banks to test Anthropic's Mythos model for security vulnerabilities

U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned bank executives this week and encouraged them to test Anthropic's newly announced Mythos model for detecting security vulnerabilities. According to Bloomberg, major banks including Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are already testing the model alongside JPMorgan Chase, despite Anthropic's stated plan to limit initial access.

2 min read · via techcrunch.com
analysis

Enterprise AI gap widens as open-weight models mature into production-ready alternatives

Open-weight models from Google, Alibaba, Microsoft, and Nvidia have crossed a threshold from research projects to enterprise-grade systems. The shift reflects a growing divide: frontier models from OpenAI and Anthropic are too expensive and pose data security risks for most enterprises, while open alternatives now deliver sufficient capability at a fraction of the cost.

research · Anthropic

AI agent skills fail in real-world conditions, study of 34,000 skills finds

A large-scale study testing 34,198 real-world skills reveals that AI agent performance drops drastically when moving from curated benchmarks to realistic conditions. Claude Opus 4.6 saw pass rates fall from 55.4% with hand-selected skills to 38.4% in truly realistic scenarios, while weaker models like Kimi K2.5 actually perform below their no-skill baseline.

3 min read · via the-decoder.com
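For scale, the reported pass rates work out to the following decline (figures from the study as summarized above; the arithmetic itself is only illustrative):

```python
# Claude Opus 4.6 pass rates reported in the study, in percent.
curated = 55.4    # with hand-selected skills
realistic = 38.4  # under realistic conditions

absolute_drop = curated - realistic      # 17.0 percentage points
relative_drop = absolute_drop / curated  # roughly a 30.7% relative decline

print(f"{absolute_drop:.1f} pp absolute, {relative_drop:.1%} relative")
```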
model release · Arcee AI

Arcee AI releases Trinity-Large-Thinking, open reasoning model matching Claude Opus on agent tasks

Arcee AI has released Trinity-Large-Thinking, a 400-billion-parameter open-weight reasoning model with a mixture-of-experts architecture that activates only 13 billion parameters per token. The model matches Claude Opus 4.6 on agent benchmarks like Tau2 and PinchBench but lags on general reasoning tasks. The company spent approximately $20 million—roughly half its total venture capital—to train the model on 2,048 Nvidia B300 GPUs over 33 days.

3 min read · via the-decoder.com
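The reported figures imply an unusually sparse activation pattern and a modest effective compute price. A quick back-of-the-envelope check, using only the numbers from the article (the arithmetic is illustrative, not Arcee's accounting):

```python
# Figures reported for Trinity-Large-Thinking.
TOTAL_PARAMS = 400e9   # total parameters
ACTIVE_PARAMS = 13e9   # parameters activated per token (MoE routing)
TRAINING_COST = 20e6   # reported training spend, USD
GPUS = 2048            # Nvidia B300 GPUs
DAYS = 33              # training duration

# Fraction of the model touched per token under the reported MoE setup.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS  # 0.0325, i.e. 3.25%

# Implied blended cost per GPU-hour.
gpu_hours = GPUS * DAYS * 24                    # 1,622,016 GPU-hours
cost_per_gpu_hour = TRAINING_COST / gpu_hours   # ~ $12.33 per GPU-hour

print(f"{active_fraction:.2%} of parameters active per token")
print(f"~${cost_per_gpu_hour:.2f} per GPU-hour")
```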
model release

MiniMax releases M2.7, a 229B parameter model with self-evolving capabilities and agent teams

MiniMax has released MiniMax-M2.7, a 229-billion-parameter model that, according to the company, participated in its own evolution during development. The model achieves a 66.6% medal rate on MLE Bench Lite and 56.22% on SWE-Pro benchmarks, with native support for multi-agent collaboration and complex tool orchestration.

model release

Google releases Gemma 4, open-source on-device AI with agentic tool use for phones

Google released Gemma 4, an open-source multimodal model that runs entirely on smartphones without sending data to the cloud. The E2B and E4B variants require just 6GB and 8GB of RAM respectively and can autonomously use tools like Wikipedia, maps, and QR code generators through built-in agent skills. The model is available free via the Google AI Edge Gallery app for Android and iOS.

3 min read · via the-decoder.com
benchmark

AI models guess instead of asking for help, ProactiveBench study shows

Researchers introduced ProactiveBench, a benchmark testing whether multimodal language models ask for help when visual information is missing. Out of 22 models tested—including GPT-4.1, GPT-5.2, and o4-mini—almost none proactively request clarification, instead hallucinating or refusing to respond. A reinforcement learning approach showed models can be trained to ask for help, improving performance from 17.5% to 37-38%, though significant gaps remain.

product update · Anthropic

Anthropic adds Ultraplan to Claude Code, moving task planning to the cloud

Anthropic has launched Ultraplan, a new feature for Claude Code that offloads programming task planning to the cloud. The feature enables developers to initiate planning jobs from the terminal while the planning executes in the browser, supporting inline comments, emoji reactions, and revision requests on individual plan sections.

model release

Liquid AI releases LFM2.5-VL-450M, improved 450M-parameter vision-language model with multilingual support

Liquid AI has released LFM2.5-VL-450M, a refreshed 450M-parameter vision-language model built on an updated LFM2.5-350M backbone. The model features a 32,768-token context window, supports 9 languages, handles native 512×512 pixel images, and adds bounding box prediction and function calling capabilities. Performance improvements span both vision and language benchmarks compared to its predecessor.
