product update

Google launches Search Live globally with real-time camera and voice search

TL;DR

Google is expanding Search Live globally to users in more than 200 countries, enabling real-time voice and camera search through the Google app and Lens. The feature, powered by Gemini 3.1 Flash Live, a new multilingual audio and video model, lets users point their phone camera at objects and ask questions, receiving instant spoken responses.

2 min read

Google Expands Search Live to 200+ Countries

Google is rolling out its Search Live feature globally, making real-time voice and camera-based search available to users in more than 200 countries. The feature is now accessible through the Google app on both Android and iOS, as well as through Google Lens.

How Search Live Works

Search Live enables two primary interaction modes:

Voice Search: Users can ask questions aloud and receive spoken answers paired with relevant web links.

Camera Search: With the camera active, users can point their phone at physical objects and ask contextual questions. Google cites assembling furniture as an example use case—a user could point their camera at a shelf and ask for assembly instructions.
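The two modes differ only in whether a camera frame accompanies the spoken question. As a minimal conceptual sketch (Google has not published a Search Live API, so every name below is hypothetical), a client-side query might be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LiveQuery:
    """A single Search Live request: spoken text, optionally with a camera frame."""
    spoken_text: str
    camera_frame: Optional[bytes] = None  # raw image bytes when camera mode is active

    @property
    def mode(self) -> str:
        # Camera search is voice search plus visual context from the live frame.
        return "camera" if self.camera_frame is not None else "voice"

# Voice search: a question alone, answered with speech plus web links.
voice_q = LiveQuery(spoken_text="What's the tallest mountain in Europe?")

# Camera search: the same question shape, grounded in what the lens sees.
camera_q = LiveQuery(
    spoken_text="How do I assemble this shelf?",
    camera_frame=b"\x89PNG...",  # placeholder bytes standing in for a video frame
)

print(voice_q.mode)   # voice
print(camera_q.mode)  # camera
```

The point of the sketch is that camera search is a superset of voice search: the question is the same, but a visual frame supplies the context.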

Powered by Gemini 3.1 Flash Live

The feature runs on Google's new Gemini 3.1 Flash Live model, a multilingual audio and video model designed to enable more natural conversational interactions. The model processes audio and visual input in real time, so users no longer need to photograph an object and wait for processing: searches occur while the camera is actively pointed at the subject.
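Google has not disclosed how this is implemented; as a rough illustration of the difference between a snapshot workflow and streaming recognition, the loop below feeds frames continuously and yields answers as they are produced (`camera_frames` and `answer_stream` are stand-ins, not real APIs):

```python
from typing import Iterable, Iterator

def camera_frames(n: int) -> Iterator[str]:
    """Stand-in for a live camera feed: yields one frame label per tick."""
    for i in range(n):
        yield f"frame-{i}"

def answer_stream(frames: Iterable[str], question: str) -> Iterator[str]:
    """Hypothetical streaming model: responds per frame as it arrives.

    In a snapshot workflow, the user would photograph the object, upload the
    image, and wait for a single response. A live model instead consumes
    each frame as the camera produces it.
    """
    for frame in frames:
        yield f"answer for {question!r} given {frame}"

# Responses arrive while the camera is still pointed at the subject.
for partial in answer_stream(camera_frames(3), "how do I assemble this shelf?"):
    print(partial)
```

The contrast the sketch captures is latency-shaped: a snapshot pipeline has one round trip per photo, while a streaming pipeline keeps answering as the scene changes.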

Specific details about the model's capabilities, latency, or technical specifications have not yet been disclosed by Google.

Integration and Accessibility

Search Live is integrated into AI Mode in the Google app, positioning it as a core search interface rather than a standalone feature. The global rollout marks a significant expansion from the feature's previously limited availability, though Google has not specified whether all regions receive identical functionality or whether certain features vary by location.

What This Means

Google is attempting to shift mobile search from text-based queries to multimodal interactions in which the device's camera and microphone serve as primary input methods. This represents a strategic response to AI-powered search competition and aligns with Google's broader effort to integrate Gemini capabilities across its product suite. The real-time requirement suggests significant advances in inference speed: Search Live must respond while the camera remains pointed at an object, not after a static image is captured. Availability across 200+ countries signals Google's confidence in Gemini 3.1 Flash Live's multilingual capabilities, though performance variations across languages and regions remain unknown.

Related Articles

product update

Google embeds Gemini across Android as multimodal agent before Apple's WWDC AI reveal

Google is rolling out Gemini-powered features across Android that enable the AI to understand screen context and complete multi-step tasks across apps, marking a shift from traditional assistant interactions to agentic capabilities. The updates will launch on Samsung Galaxy and Google Pixel phones this summer before expanding to watches, cars, and laptops.

product update

Google launches Gemini Intelligence for Android, enabling multi-app task automation

Google announced Gemini Intelligence at I/O 2026, a system-level AI layer that automates multi-step tasks across Android apps. Rolling out first to Samsung Galaxy and Pixel phones this summer, it enables the OS to understand screen context and execute complex workflows without manual app-switching.

product update

Google announces Googlebooks laptop platform with Gemini AI integration, launching fall 2026

Google previewed Googlebooks, a new laptop platform combining Android and ChromeOS with Gemini AI at its core. The platform features AI capabilities like Magic Pointer for contextual assistance and seamless Android phone integration. Hardware partners include Acer, Asus, Dell, HP, and Lenovo, with devices launching fall 2026.

product update

Microsoft Edge adds Copilot feature to analyze content across all open browser tabs

Microsoft is updating Edge to let Copilot read and analyze content across all open browser tabs simultaneously. The update includes AI-generated podcasts from tabs, study mode with quizzes, and long-term conversation memory.
