Google names upcoming Gemini AI agent 'Spark,' adds autonomous task execution to mobile app
Google is preparing to launch Gemini Spark, an autonomous AI agent that will operate within the Gemini mobile app. According to code found in Google app beta version 17.23, Spark can access connected apps, personal data, and location to execute tasks like managing inboxes and scheduling meetings, though Google warns it may occasionally act without permission.
Google is preparing to launch Gemini Spark, an autonomous AI agent that will operate within the Gemini mobile app, according to code discovered in Google app beta version 17.23, released today.
The agent, previously referred to internally as "Gemini Agent," will appear as a separate section in the Gemini app's navigation drawer with a two-tab layout split between "Chat" and "Agent." Users will be able to create, schedule, and monitor active tasks through a dedicated interface.
Data access and capabilities
Gemini Spark will access user information from connected apps, chat history, tasks, logged-in websites, Personal Intelligence data, and location data. According to disclosures in the app code, the agent "can share necessary info with third parties" to complete tasks, including names, contact information, files, preferences, and potentially sensitive information.
Confirmed capabilities include:
- Email management: summarizing newsletters, archiving messages, and unsubscribing from email lists
- Meeting preparation: generating briefs with relevant information before scheduled meetings
- News curation: creating custom news digests and tracking story developments over time
Experimental status and warnings
Google labels Gemini Spark as "experimental" in the code strings. The company warns that "while it is designed to ask for your permission before taking sensitive actions, it may do things like share your info or make purchases without asking." Users are advised to supervise the agent and avoid relying on it for medical, legal, financial, or professional advice.
The feature is already appearing for some beta testers ahead of Google I/O 2026, though no official release date has been announced.
What this means
Gemini Spark represents Google's entry into autonomous AI agents that can execute multi-step tasks across apps without continuous user input. The disclosure that it may occasionally act without permission highlights the control challenges facing agent systems as they gain more autonomy. The timing ahead of I/O 2026 suggests Google plans a formal announcement at its developer conference, likely competing with similar agent features from OpenAI and Anthropic.