Google embeds Gemini across Android as multimodal agent before Apple's WWDC AI reveal
Google is rolling out Gemini-powered features across Android that enable the AI to understand screen context and complete multi-step tasks across apps, marking a shift from traditional assistant interactions to agentic capabilities. The updates will launch on Samsung Galaxy and Google Pixel phones this summer before expanding to watches, cars, and laptops.
Google is deploying Gemini as an operating-layer AI agent across Android devices, enabling multi-app task completion and screen-aware assistance ahead of its I/O developer conference next week and Apple's expected AI announcements at WWDC.
Core capabilities
According to Google, Gemini Intelligence on Android will:
- Read and understand on-screen content across apps
- Complete multi-step tasks like pulling guest lists from Gmail, generating menus, and populating Instacart shopping carts
- Operate across phones, watches, cars, glasses, and laptops
- Request user approval before completing transactions
Sameer Samat, who oversees Google's Android ecosystem, told CNBC the company is "transitioning from an operating system to an intelligence system," with Gemini serving as the AI foundation. He emphasized that "the human is always in the loop" for agent-driven actions.
Rollout timeline
The app automation features will deploy in phases:
- Summer 2026: Samsung Galaxy and Google Pixel phones
- Later 2026: Expansion to watches, cars, glasses, and laptops
Google is also redesigning Android Auto around Gemini, bringing the assistant to more than 250 million vehicles alongside what the company calls its biggest Google Maps update in a decade.
Competitive context
The announcement comes four months after Apple signed a deal with Google to power portions of Apple Intelligence with Gemini. Apple is expected to demonstrate an upgraded version of Apple Intelligence at WWDC, putting pressure on both companies to show that their AI integration strategies can deliver in practice.
Google's approach focuses on cross-app coordination and contextual understanding, while Apple has historically emphasized privacy, hardware integration, and user experience control.
Market response
Alphabet's stock has risen more than 140% over the past year as investors respond to its AI strategy, compared to Apple's roughly 40% gain over the same period.
What this means
Google is attempting to establish Gemini as the default AI layer for Android before Apple can demonstrate its own vision for on-device intelligence at WWDC. The "human in the loop" requirement suggests Google is taking a measured approach to agentic AI, demanding explicit approval for any action that spends money or makes a commitment on the user's behalf. The rollout timeline, which starts with flagship Samsung and Pixel devices, indicates Google is prioritizing demonstration over immediate broad availability. Whether users will trust an AI to build shopping carts and book reservations remains untested at scale.
Related Articles
Google launches Gemini Intelligence for Android, enabling multi-app task automation
Google announced Gemini Intelligence at I/O 2026, a system-level AI layer that automates multi-step tasks across Android apps. Rolling out first to Samsung Galaxy and Pixel phones this summer, it enables the OS to understand screen context and execute complex workflows without manual app-switching.
Google announces Googlebooks laptop platform with Gemini AI integration, launching fall 2026
Google previewed Googlebooks, a new laptop platform combining Android and ChromeOS with Gemini AI at its core. The platform features AI capabilities like Magic Pointer for contextual assistance and seamless Android phone integration. Hardware partners include Acer, Asus, Dell, HP, and Lenovo, with devices launching fall 2026.
Xcode 26.5 adds message queuing and clarifying questions for AI coding assistants
Apple released Xcode 26.5 with two new Coding Intelligence features: the ability to queue multiple messages to AI coding assistants without waiting for responses, and agent support for asking clarifying questions before executing tasks. The update builds on agentic coding capabilities introduced in Xcode 26.3, which allowed developers to integrate tools like OpenAI Codex and Anthropic's Claude directly into the IDE.
Notion launches Developer Platform with custom code execution, agent orchestration, and database sync
Notion has launched a Developer Platform that allows teams to run custom code in cloud-based Workers, sync external databases, and orchestrate both internal and external AI agents. The platform, free through August, supports integration with Claude Code, Cursor, Codex, and Decagon, and uses Model Context Protocol for agent connectivity.