Google Gemini task automation now works on phones, taking 9 minutes to order dinner
Google has launched task automation for Gemini on Pixel 10 Pro and Galaxy S26 Ultra, allowing the AI to autonomously use apps for food delivery and rideshare services. The feature works but is slow—taking approximately nine minutes to complete an order—and remains limited to a small beta subset of apps. Despite performance limitations, it represents the first practical demonstration of an AI assistant actually controlling a phone outside of controlled demos.
Google has rolled out task automation for Gemini on Pixel 10 Pro and Galaxy S26 Ultra, marking the first real-world deployment of an AI assistant that can actually use apps on your phone. The feature is currently in beta and limited to food delivery and rideshare services like Uber and DoorDash.
Performance: Functional but Slow
In testing, Gemini took approximately nine minutes to complete a dinner order through Uber Eats. The AI guides itself through the interface with on-screen text narrating its actions: "Selecting a second portion of Chicken Teriyaki for the combo." Users can watch the automation in real time or let it run in the background.
The automation sometimes struggles with interface elements; in one case it searched for a side-menu item it had initially overlooked, even though the item was visible on screen. It nonetheless completes most tasks accurately. When failures occur, they typically happen within the first two minutes, usually when an app requires explicit permissions such as location access.
How It Actually Works
Gemini is designed to complete tasks up to the final confirmation step, requiring user approval before submitting orders. This safeguard has proven effective; testers report Gemini has never autonomously completed an order in test runs.
The system's most impressive capability: using calendar and email access to intelligently schedule transportation. When given a vague prompt to "schedule an Uber to the airport in time for tomorrow's flight," Gemini accessed calendar entries, identified the 1:45 PM departure time, and suggested appropriate departure times (11:30-11:45 AM) for an airport near the user's home. The ride was scheduled in three minutes with minimal input.
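The backward-from-the-flight reasoning above is a simple subtraction over times. A minimal sketch of that calculation, where the 30-minute drive and 90-minute airport buffer are assumed values chosen only to reproduce the window reported in testing (Gemini's actual heuristics are not public):

```python
from datetime import datetime, timedelta

def suggest_pickup_window(flight_departure: datetime,
                          travel_minutes: int = 30,
                          airport_buffer_minutes: int = 90) -> tuple[datetime, datetime]:
    """Work backward from the flight time: arrive at the airport with a
    buffer for check-in and security, then subtract the drive time.
    Returns a 15-minute pickup window (earliest, latest)."""
    arrive_by = flight_departure - timedelta(minutes=airport_buffer_minutes)
    latest_pickup = arrive_by - timedelta(minutes=travel_minutes)
    earliest_pickup = latest_pickup - timedelta(minutes=15)
    return earliest_pickup, latest_pickup

# The 1:45 PM departure pulled from the calendar entry (date is illustrative)
flight = datetime(2026, 2, 20, 13, 45)
earliest, latest = suggest_pickup_window(flight)
print(earliest.strftime("%I:%M %p"), "-", latest.strftime("%I:%M %p"))
# prints "11:30 AM - 11:45 AM"
```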
Limitations and Design Implications
Gemini's struggle with human-centric interfaces highlights a fundamental problem: apps are designed for human eyes and behavior patterns, not AI reasoning. The AI won't be swayed by promotional banners or high-quality food photography—it processes everything as interface elements to parse.
Google's head of Android, Sameer Samat, confirmed that Gemini uses this "reasoning approach" because developers haven't yet adopted more robust alternatives like the Model Context Protocol (MCP) or Android app functions. The current iteration appears to be a bridge solution: it demonstrates feasibility while giving developers an incentive to build AI-optimized interfaces.
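To make the contrast concrete: under MCP, an app exposes named tools with JSON Schema input definitions, so an agent sends a small structured call instead of parsing screens. The sketch below follows MCP's tool-descriptor shape (`name`, `description`, `inputSchema`), but the tool itself, its fields, and the restaurant ID are hypothetical, not a real delivery-app API:

```python
import json

# Hypothetical MCP-style tool descriptor a delivery app could expose.
# The descriptor shape follows the MCP spec; the tool and fields are invented.
ADD_TO_CART_TOOL = {
    "name": "add_item_to_cart",
    "description": "Add a menu item (with optional combo choices) to the cart.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "restaurant_id": {"type": "string"},
            "item_name": {"type": "string"},
            "quantity": {"type": "integer", "minimum": 1},
            "combo_choices": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["restaurant_id", "item_name"],
    },
}

# An agent's call becomes one structured payload rather than minutes of taps.
call = {
    "tool": ADD_TO_CART_TOOL["name"],
    "arguments": {
        "restaurant_id": "teriyaki-house-42",  # illustrative ID
        "item_name": "Chicken Teriyaki",
        "quantity": 2,
    },
}
print(json.dumps(call, indent=2))
```

The point of the structured path is that "missing a visible menu item" stops being a failure mode: the item either exists in the catalog or the call returns a typed error.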
What This Means
Gemini's task automation represents genuine progress in practical AI application control, despite its speed limitations. The nine-minute order time is acceptable for background execution while users attend to other tasks. The real significance lies not in current performance, but in establishing proof-of-concept for autonomous app navigation. This will likely accelerate industry adoption of standardized AI integration methods (MCP, app functions) that could ultimately make this process faster and more reliable. The feature remains too slow for time-sensitive tasks but demonstrates a viable path toward AI assistants that actually do work on your phone—not just talk about it in marketing demos.