Google Gemini task automation now works on phones, taking 9 minutes to order dinner
Google has launched task automation for Gemini on the Pixel 10 Pro and Galaxy S26 Ultra, allowing the AI to autonomously use apps for food delivery and rideshare services. The feature works but is slow, taking roughly nine minutes to complete an order, and remains limited in beta to a small set of apps. Despite those limits, it is the first practical demonstration of an AI assistant actually controlling a phone outside of staged demos.
Google has rolled out task automation for Gemini on Pixel 10 Pro and Galaxy S26 Ultra, marking the first real-world deployment of an AI assistant that can actually use apps on your phone. The feature is currently in beta and limited to food delivery and rideshare services like Uber and DoorDash.
Performance: Functional but Slow
In testing, Gemini took roughly nine minutes to complete a dinner order through Uber Eats. The AI steps through the interface while on-screen text narrates its actions: "Selecting a second portion of Chicken Teriyaki for the combo." Users can watch the automation in real time or let it run in the background.
The automation sometimes struggles with interface elements, at one point searching for a side item that was plainly visible on screen before finding it, but it generally completes tasks with high accuracy. When failures occur, they typically happen within the first two minutes, when an app demands explicit permissions such as location access.
How It Actually Works
Gemini is designed to complete tasks only up to the final confirmation step, requiring user approval before submitting orders. This safeguard has held up so far: in test runs, Gemini never submitted an order on its own.
The system's most impressive capability is using calendar and email access to schedule transportation intelligently. Given the vague prompt "schedule an Uber to the airport in time for tomorrow's flight," Gemini pulled the calendar entry, identified the 1:45 PM departure time, and suggested a pickup window of 11:30 to 11:45 AM for an airport near the user's home. The ride was scheduled in three minutes with minimal user input.
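The back-of-the-envelope arithmetic behind such a suggestion can be sketched as follows. The airport buffer and drive-time values here are assumptions chosen for illustration, not figures Google has published:

```python
from datetime import datetime, timedelta

# Flight departure pulled from the calendar (date is arbitrary).
flight = datetime(2026, 2, 3, 13, 45)  # 1:45 PM

# Assumed margins -- illustrative only.
airport_buffer = timedelta(hours=1, minutes=45)  # check-in and security
ride_duration = timedelta(minutes=15)            # home-to-airport drive
window_width = timedelta(minutes=15)             # slack for the pickup

latest_pickup = flight - airport_buffer - ride_duration
earliest_pickup = latest_pickup - window_width

print(f"Suggested pickup: {earliest_pickup:%I:%M} to {latest_pickup:%I:%M %p}")
# Suggested pickup: 11:30 to 11:45 AM
```

With these assumed margins, the arithmetic lands on the same 11:30 to 11:45 AM window the article describes; the interesting part is not the subtraction but that Gemini gathered the inputs (flight time, home location) from the user's own data.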
Limitations and Design Implications
Gemini's struggle with human-centric interfaces highlights a fundamental mismatch: apps are designed for human eyes and habits, not for AI reasoning. That cuts both ways, though. The AI won't be swayed by promotional banners or glossy food photography; it treats everything on screen as interface elements to parse.
Google's head of Android, Sameer Samat, confirmed that Gemini relies on this screen-reading "reasoning approach" because developers haven't yet adopted more robust alternatives such as the Model Context Protocol (MCP) or Android app functions. The current iteration looks like a bridge solution: it demonstrates feasibility while giving developers an incentive to build AI-optimized interfaces.
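To make the contrast concrete, here is a minimal sketch of what a structured alternative could look like: a hypothetical MCP-style tool declaration that a delivery app might expose to an agent. The tool name and every field below are invented for illustration; only the general shape, a named tool with a JSON Schema describing its input, follows the MCP convention:

```python
# Hypothetical tool declaration in the MCP style. An app exposing this
# lets an agent place an order through one structured call instead of
# visually locating and tapping buttons. All names are illustrative.
order_tool = {
    "name": "place_order",
    "description": "Order items from a restaurant for delivery.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "restaurant_id": {"type": "string"},
            "items": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "item_id": {"type": "string"},
                        "quantity": {"type": "integer", "minimum": 1},
                    },
                    "required": ["item_id"],
                },
            },
            # Mirrors the safeguard described above: the agent stops
            # before final submission unless the user approves.
            "confirm_before_submit": {"type": "boolean", "default": True},
        },
        "required": ["restaurant_id", "items"],
    },
}
```

An agent calling such a tool passes arguments directly rather than hunting for on-screen elements, which is why structured interfaces should be both faster and more reliable than the nine-minute visual navigation described here.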
What This Means
Gemini's task automation represents genuine progress in practical AI application control, despite its speed limitations. The nine-minute order time is acceptable for background execution while users attend to other tasks. The real significance lies not in current performance, but in establishing proof-of-concept for autonomous app navigation. This will likely accelerate industry adoption of standardized AI integration methods (MCP, app functions) that could ultimately make this process faster and more reliable. The feature remains too slow for time-sensitive tasks but demonstrates a viable path toward AI assistants that actually do work on your phone—not just talk about it in marketing demos.
Related Articles
Google tests Remy AI agent internally, designed to act autonomously across Gemini services
Google is testing Remy, an AI personal agent for Gemini that can take actions on users' behalf across Google services, according to Business Insider. The tool is currently in employee-only testing with no confirmed public release date.
Google preps Gemini agent for macOS to control computers and organize files, challenging Claude Cowork
Google is developing a Gemini agent for macOS that will control computers, organize files, and integrate with Google Workspace apps. Code analysis reveals features including file conversion to Google Sheets, folder organization, batch file renaming, and meeting follow-up automation.
Google Home upgrades to Gemini 3.1 for multi-step voice commands
Google has upgraded its Home smart assistant to Gemini 3.1, enabling users to combine multiple tasks in a single voice command. The update follows last month's improvements to natural language understanding and comes after reports of accuracy issues in the smart home platform.
Google Docs adds persistent instructions for Gemini, capped at 1,000 per account
Google Docs now allows users to set persistent instructions for Gemini that apply across all documents. The feature, available to Google AI Plus subscribers in the US, supports up to 1,000 active instructions per account for controlling tone, style, and formatting.