CNN —
The very act of “Googling” something – typing in keywords and sifting through links, images and snippets of information – is nearly behind us, according to Google.
The search giant laid out its vision for the future of searching the web on Tuesday, introducing a flurry of updates that aim to shift Google’s ubiquitous search engine from being a box for processing keywords to a system of “digital agents” that can crawl the web and answer questions based on a person’s real-world surroundings, tastes and preferences. Google’s AI push comes as publishers – particularly independent ones – have already raised concerns about how the prominence of AI-generated answers could threaten their businesses.
The announcements, made during the company’s annual developer conference, underscore that Google’s most important business is facing more competition than ever. Chatbots like ChatGPT and AI-fueled search engines such as Perplexity present an alternative way to find information and get things done – two tasks firmly at the center of Google’s core business. The newly announced tools can be seen as an effort by Google to prove that its nearly 30-year-old search engine isn’t losing relevance in the AI era.
“What all this progress tells me is that we are now entering a new phase of the AI platform shift, where decades of research are now becoming reality for people, businesses and communities all over the world,” Sundar Pichai, CEO of Google and its parent company Alphabet, said in a press briefing ahead of the conference.
Google is broadening AI Mode, previously only available to those who signed up to test early features through its Labs program, to all US users through the Google app. It’s a step beyond AI Overviews, the AI-generated answers consumers see at the top of results.
The primary difference between AI Mode and a standard Google search is how it processes queries. Rather than treating a query as a single question, AI Mode breaks it down into subtopics and generates additional searches based on those subtopics to produce a more specific answer. Google says AI Mode will soon draw on a person’s search history to further personalize answers, and users will also be able to link it to other Google apps, like Gmail.
Beyond how it processes questions, AI Mode is expected to offer two key new ways of searching: one that Google claims will handle tasks on a user’s behalf, and another that lets users show Google their surroundings using their phone’s camera. Although AI Mode is now generally available in the US, these two features will still require users to sign up for Labs.
Google’s Project Mariner technology, which the company announced as a research prototype last year, will be able to accomplish certain tasks on a person’s behalf and answer questions that usually require multiple steps, the company claims. For example, one could ask a question such as “Find two affordable tickets for this Sunday’s Reds game in the lower level,” and Google will search for tickets, analyze options and pricing, fill out forms autonomously and then pull up tickets that match the user’s criteria.
It will initially be available for buying tickets, making restaurant reservations and booking local appointments through services such as Ticketmaster, StubHub, Resy and Vagaro, and will come to the Labs section of the Google app in the coming months.
AI Mode in the Google app is also getting a new feature that lets users ask questions about the world around them. Visual search isn’t new to Google; the company’s Lens tool already lets users ask questions about photos they’ve snapped.
But this mode takes that idea a step further by showing Google what a person is seeing in real time. The idea is to make it easier for Google to answer questions about complex tasks that are difficult to describe – such as whether the specific bolt in the toolbox is the right size for the bike frame being fixed – just by pointing a phone at it and asking.
Google previously brought this visual search functionality to its Gemini assistant on Android, which it’s now expanding to the iPhone. But Tuesday’s announcement shows Google sees it as being key to the future of its search engine as well.
Some of the new search capabilities overlap with those available in Google’s Gemini assistant, potentially causing confusion among consumers. Robby Stein, vice president of product for Google search, told CNN that search is tailored for learning, while Gemini is meant to be a helper for tasks like generating code and writing business plans in addition to answering questions.
Google’s search engine has been the primary gateway to information online for nearly three decades. But that position is being challenged more than ever by the proliferation of AI services from companies like OpenAI and Perplexity, as well as fellow tech stalwarts Apple, Amazon and Microsoft – all of which have upgraded or are in the process of upgrading their virtual assistants with advanced AI capabilities. OpenAI, Google’s chief rival in the AI assistant space, has launched its own search engine.
The pressure on Google became evident earlier this month when Eddy Cue, Apple’s senior vice president of services, revealed in courtroom testimony that Google searches in Apple’s Safari browser had declined in April for the first time since 2002, Bloomberg reported. Google has pushed back on that claim, saying it has seen “overall query growth in search,” including queries coming from Apple devices. Market research firm Gartner estimated last year that search engine volume would drop 25% by 2026 as consumers gravitate toward AI tools.
But Pichai, on a call with reporters, said the updates reflect the new ways people are using his company’s search engine.
“When I look ahead, you’ve got glimpses of a proactive world, an agentic world,” he said. “All of this will keep getting better.”