Google Gemini task automation now works on phones, taking 9 minutes to order dinner

TL;DR

Google has launched task automation for Gemini on the Pixel 10 Pro and Galaxy S26 Ultra, allowing the AI to autonomously use apps for food delivery and rideshare services. The feature works but is slow, taking approximately nine minutes to complete an order, and is limited to a small set of apps during the beta. Despite those limitations, it is the first practical demonstration of an AI assistant actually controlling a phone outside of controlled demos.

Google has rolled out task automation for Gemini on the Pixel 10 Pro and Galaxy S26 Ultra, marking the first real-world deployment of an AI assistant that can actually use apps on your phone. The feature is currently in beta and limited to food delivery and rideshare apps such as DoorDash and Uber.

Performance: Functional but Slow

In testing, Gemini took approximately nine minutes to complete a dinner order through Uber Eats. The AI steps through the interface with on-screen text narrating each action: "Selecting a second portion of Chicken Teriyaki for the combo." Users can watch the automation in real time or let it run in the background.

The automation sometimes struggles with interface elements (at one point it searched for a side item that was already visible on screen before finding it) but generally completes tasks accurately. When failures occur, they typically happen within the first two minutes, usually when an app demands explicit permissions such as location access.

How It Actually Works

Gemini is designed to carry tasks up to the final confirmation step, requiring user approval before an order is submitted. The safeguard has held: testers report that Gemini never completed an order on its own.
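
Gemini's internals aren't public, but the described behavior amounts to a hard human-in-the-loop gate in the agent loop: routine UI steps run autonomously, while the terminal "submit" action is always handed back to the user. A minimal sketch in Kotlin, with hypothetical names throughout:

```kotlin
// Minimal sketch of a human-in-the-loop confirmation gate. All names here
// are hypothetical; Gemini's actual agent loop is not public.
sealed interface AgentAction {
    data class UiStep(val description: String) : AgentAction
    data class SubmitOrder(val summary: String) : AgentAction
}

fun runTask(actions: List<AgentAction>, confirm: (String) -> Boolean) {
    for (action in actions) {
        when (action) {
            // Ordinary UI steps (tapping, scrolling, typing) run autonomously.
            is AgentAction.UiStep -> println("Executing: ${action.description}")
            // The terminal submit action is never taken by the agent itself.
            is AgentAction.SubmitOrder -> {
                if (confirm(action.summary)) println("Order placed with user approval.")
                else println("Order cancelled by user.")
                return
            }
        }
    }
}

fun main() {
    val steps = listOf(
        AgentAction.UiStep("Open Uber Eats and search for teriyaki"),
        AgentAction.UiStep("Select a second portion of Chicken Teriyaki for the combo"),
        AgentAction.SubmitOrder("1x Chicken Teriyaki combo"),
    )
    // Stand-in for the real confirmation dialog shown to the user.
    runTask(steps) { summary -> println("Confirm? $summary"); true }
}
```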

The system's most impressive capability is using calendar and email access to schedule transportation intelligently. Given the vague prompt "schedule an Uber to the airport in time for tomorrow's flight," Gemini read the user's calendar, identified the 1:45 PM departure, and suggested a pickup window of 11:30-11:45 AM for an airport near the user's home. The ride was scheduled in three minutes with minimal input.
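
The arithmetic behind that suggestion is simple once the flight time is extracted from the calendar. A sketch of the buffer math; the individual buffers are assumptions for illustration, since the article reports only the resulting window:

```kotlin
import java.time.Duration
import java.time.LocalTime

// The 1:45 PM departure comes from the user's calendar; the buffers below
// are assumed -- only the resulting 11:30-11:45 AM window is reported.
fun main() {
    val flightDeparture = LocalTime.of(13, 45)
    val airportBuffer = Duration.ofMinutes(90)  // check-in plus security (assumed)
    val driveTime = Duration.ofMinutes(30)      // home to a nearby airport (assumed)

    val latestPickup = flightDeparture.minus(airportBuffer).minus(driveTime)  // 11:45
    val earliestPickup = latestPickup.minusMinutes(15)                        // 11:30

    println("Suggested pickup window: $earliestPickup-$latestPickup")
}
```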

Limitations and Design Implications

Gemini's struggle with human-centric interfaces highlights a fundamental problem: apps are designed for human eyes and behavior patterns, not AI reasoning. The AI won't be swayed by promotional banners or high-quality food photography—it processes everything as interface elements to parse.
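
As a rough illustration of what "interface elements to parse" means in practice (Gemini's actual pipeline is not public), an agent reading an Android screen would see something like the flattened accessibility tree below, where a promo banner is just another node:

```kotlin
import android.view.accessibility.AccessibilityNodeInfo

// Rough illustration only: an agent reasoning over a screen sees a tree of
// accessibility nodes, not photography or promotions. This walk collects
// the actionable elements it could operate on.
fun actionableElements(root: AccessibilityNodeInfo): List<String> {
    val found = mutableListOf<String>()
    fun walk(node: AccessibilityNodeInfo?) {
        if (node == null) return
        if (node.isClickable || node.isEditable) {
            val label = node.text ?: node.contentDescription ?: "(unlabeled)"
            found += "${node.className}: $label"
        }
        for (i in 0 until node.childCount) walk(node.getChild(i))
    }
    walk(root)
    return found
}
```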

Sameer Samat, Google's head of Android, confirmed that Gemini uses this "reasoning approach" because developers haven't yet adopted more robust alternatives such as the Model Context Protocol (MCP) or Android app functions. The current iteration looks like a bridge solution: it demonstrates feasibility while giving developers an incentive to build AI-optimized interfaces.
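
App functions are the more interesting alternative for Android developers: instead of an agent tapping through screens, an app exposes typed capabilities an assistant can invoke directly. A sketch based on the androidx.appfunctions developer preview; the order schema here is hypothetical:

```kotlin
import androidx.appfunctions.AppFunction
import androidx.appfunctions.AppFunctionContext
import androidx.appfunctions.AppFunctionSerializable

// Hypothetical schema for a food-delivery app; the annotations come from
// the androidx.appfunctions developer preview.
@AppFunctionSerializable
class OrderResult(val orderId: String, val etaMinutes: Int)

class OrderFunctions {
    // Exposed as a typed capability: an assistant calls this directly
    // instead of navigating the app's UI screen by screen.
    @AppFunction
    fun reorderLastMeal(appFunctionContext: AppFunctionContext): OrderResult {
        // Real app logic would look up the user's last order; stubbed here.
        return OrderResult(orderId = "demo-123", etaMinutes = 35)
    }
}
```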

What This Means

Gemini's task automation represents genuine progress toward practical AI control of applications, despite its speed. A nine-minute order is acceptable when it runs in the background while the user attends to other tasks. The real significance lies not in current performance but in establishing a proof of concept for autonomous app navigation, one likely to accelerate industry adoption of standardized AI integration methods (MCP, app functions) that could ultimately make the process faster and more reliable. The feature remains too slow for time-sensitive tasks, but it demonstrates a viable path toward AI assistants that actually do work on your phone, not just talk about it in marketing demos.
