Google continues to take steps in a long march towards pervasive visual search. The latest came yesterday at its annual “Made by Google” event, where Google Lens was included in an AI-themed procession of product announcements.
Specifically, Lens will be available natively on the new Pixel 3. Previously, it was available inside the camera apps of the Pixel 2 and a handful of phones that Google partnered with directly. This native integration makes Lens more accessible, which is key to conditioning user behavior.
“Lens is now able to work instantly in the camera and on-device for some of the most common actions,” said Google VP of product management Brian Rakowski. “Point it at a take-out menu and Lens will pull out the phone number to call. Point it at a movie poster to click through to the URL.”
This is Google’s standard move of leading by example in its own flagship hardware. More importantly, the focus is narrowing to practical and commerce-oriented searches. That includes product/style use cases similar to those demonstrated recently by Facebook and Snapchat.
“So now, when your friend is wearing a cool new pair of sunglasses, or you see some shoes you like in a magazine, you can use Lens to find them online and browse similar styles,” said Rakowski from the stage. This style example is becoming a commonly cited and sexy AR use case.
Product-oriented visual search also aligns with Google’s mission and core revenue model. It brings that intrinsic search need to a new, and in some cases more intuitive, visual format. As we like to say, the camera is the new search box and physical items are the new search “terms.”
“Being able to search the world around you is the next logical step in organizing the world’s information and making it more useful for people,” said Rakowski on stage. “And we’ve been doing that in some fun new ways, like with Style Search in Lens.”
Visual search aligns not only with Google’s mission and revenue model, but also with its capabilities. It relies heavily on computer vision and object recognition, where Google has a head start thanks to its vast image database, knowledge graph and overall search chops.
“This is only possible by combining Pixel’s visual core with our years of work in search and computer vision,” said Rakowski on stage. “It’s one more way we’re integrating our AI, software, and hardware to create the best end-to-end camera experience.”
For Google, the remaining question is monetization dynamics: how will visual search, and the user behavior that develops around it, open up to paid results? That brings in search unit economics such as click-through rates, cost per click and the interplay of organic vs. paid results.
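Those unit economics combine in a simple way. As a rough sketch (all figures below are hypothetical, purely to show how impressions, CTR and CPC interact):

```python
# Illustrative sketch of search ad unit economics.
# Revenue for a batch of paid impressions = impressions x CTR x CPC.
# All numbers here are hypothetical, not Google's actual figures.

def ad_revenue(impressions: int, ctr: float, cpc: float) -> float:
    """Expected revenue from a batch of paid ad impressions."""
    clicks = impressions * ctr  # click-through rate converts impressions to clicks
    return clicks * cpc         # advertiser pays per click

# Hypothetical scenario: 1,000 impressions at a 2% CTR and $1.50 CPC.
# High-intent formats like search tend to command higher CTR and CPC
# than display, which is the premise behind visual search's ad value.
print(round(ad_revenue(1_000, 0.02, 1.50), 2))
```

The point of the arithmetic: because visual search queries are high-intent (a user pointing a camera at shoes is close to a purchase decision), both CTR and CPC would plausibly sit at the higher end, which is what makes the medium attractive despite its complexity.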
ARtillry Intelligence recently projected AR advertising to grow to $2.6 billion by 2022. That’s mostly social AR lenses today, but will shift to visual search like Google Lens over time. Though more complex, visual search will be a valuable, high-intent ad medium — just like search is today.
But the first step is conditioning user behavior, and Google is well-positioned relative to others investing in different flavors of visual search. Those companies so far include social players like Facebook and Snapchat, and commerce-oriented companies like Amazon and Pinterest.
It’s also notable how each of these companies is approaching visual search slightly differently. Those approaches are a function of the respective motivations and capabilities of each player — Google for search, Amazon for commerce, Facebook for social interaction, and so on…
We examine each of these players and their AR motivations in the latest ARtillry Intelligence Briefing (preview here). We’ll also continue tracking these moves, which seem to be accelerating among tech giants paving the AR way. But it will still be a long road to visual search ubiquity.
Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Our disclosure and ethics policy can be seen here.