Breaking

Google releases Gemma 4, open-source on-device AI with agentic tool use for phones

Google released Gemma 4, an open-source multimodal model that runs entirely on smartphones without sending data to the cloud. The E2B and E4B variants require just 6GB and 8GB of RAM respectively and can autonomously use tools like Wikipedia, maps, and QR code generators through built-in agent skills. The model is available free via the Google AI Edge Gallery app for Android and iOS.

April 11, 2026

Latest News

benchmark

AI models guess instead of asking for help, ProactiveBench study shows

Researchers introduced ProactiveBench, a benchmark testing whether multimodal language models ask for help when visual information is missing. Out of 22 models tested—including GPT-4.1, GPT-5.2, and o4-mini—almost none proactively request clarification, instead hallucinating or refusing to respond. A reinforcement learning approach showed models can be trained to ask for help, improving performance from 17.5% to 37-38%, though significant gaps remain.

product update · Anthropic

Anthropic adds Ultraplan to Claude Code, moving task planning to the cloud

Anthropic has launched Ultraplan, a new feature for Claude Code that offloads programming task planning to the cloud. Developers can initiate a planning job from the terminal while the plan is generated remotely and reviewed in the browser, with support for inline comments, emoji reactions, and revision requests on individual plan sections.

model release

Liquid AI releases LFM2.5-VL-450M, improved 450M-parameter vision-language model with multilingual support

Liquid AI has released LFM2.5-VL-450M, a refreshed 450M-parameter vision-language model built on an updated LFM2.5-350M backbone. The model features a 32,768-token context window, supports 9 languages, handles native 512×512 pixel images, and adds bounding box prediction and function calling capabilities. Performance improvements span both vision and language benchmarks compared to its predecessor.

model release · Anthropic

White House officials questioned tech CEOs on AI security ahead of Anthropic's Mythos release

Vice President JD Vance and Treasury Secretary Scott Bessent held a call with leading tech CEOs including Anthropic's Dario Amodei, OpenAI's Sam Altman, and Google's Sundar Pichai to discuss AI model security and cyber attack response. The meeting occurred one week before Anthropic released its Mythos model, which has major cybersecurity implications and raised concerns at the Federal Reserve and among top U.S. banks.

3 min read · via cnbc.com
model release · Tencent

Tencent releases HY-Embodied-0.5, a 2B-parameter vision-language model for robot control

Tencent has released HY-Embodied-0.5, a family of foundation models designed specifically for embodied AI and robotic control. The suite includes an MoT (Mixture-of-Transformers) variant that activates only 2.2B parameters during inference, and a 32B model that claims frontier-level performance comparable to Gemini 3.0 Pro, trained on over 200 billion tokens of embodied-specific data.

product update · Microsoft

Microsoft removes Copilot branding from Windows 11 apps after user backlash

Microsoft has begun removing Copilot branding and buttons from Windows 11 applications including Notepad and Snipping Tool, replacing them with generic icons like a pen symbol. The underlying AI-powered features remain functional but are now labeled as "writing tools" rather than Copilot. This follows user complaints about forced Copilot integration and inconsistent experiences across apps.

2 min read · via engadget.com
product update · Microsoft

Microsoft removes Copilot buttons from Windows 11 apps, keeps AI features

Microsoft is removing Copilot buttons from Windows 11 apps including Notepad, Snipping Tool, Photos, and Widgets as part of a broader effort to reduce "unnecessary Copilot entry points." The underlying AI features remain intact, with Notepad's Copilot button replaced by a "writing tools" menu that retains the same AI-powered functionality.

2 min read · via theverge.com
model release

Meta launches proprietary Muse Spark, abandoning open-source strategy after $14.3B rebuild

Meta launched Muse Spark on April 8, 2026, a natively multimodal reasoning model with tool-use and visual chain-of-thought capabilities. Unlike the Llama family, it is entirely proprietary, with no open weights. The model scores 52 on AI Index v4.0 and excels on health benchmarks, marking Meta's departure from its open-source identity.
