Google’s upcoming Google Lens will let smartphone cameras understand what they see and take action

At Google’s I/O developer conference, CEO Sundar Pichai announced a new technology called Google Lens. The product leverages Google’s computer vision and AI technology to bring smarts directly to your phone’s camera. As the company explains, the smartphone camera won’t just see what you see — it will also understand what you see, so it can help you take action.

A few things Lens can do:

  • Tell you what species a flower is just by viewing the flower through your phone’s camera;
  • Read a complicated Wi-Fi password through your phone’s camera and automatically log you into the network;
  • Offer you reviews and other information about the restaurant or retail store across the street, just by pointing your camera at the physical place.


In another example, Pichai showed how Lens could handle a common task — connecting you to a home’s Wi-Fi network by snapping a photo of the sticker on the router.

In that case, Google Lens could identify that it’s looking at a network’s name and password, then offer you the option to tap a button and connect automatically.


A third example was a photo of a business’s storefront — and Google Lens could pull up the name, rating and other business listing information in a card that appeared over the photo.

The technology essentially turns the camera from a passive tool that captures the world around you into one that lets you interact with what’s in your camera’s viewfinder.

Later, during a Google Home demonstration, the company showed how Lens would be integrated into Google Assistant. Through a new button in the Assistant app, users will be able to launch Lens and insert a photo into the conversation with the Assistant, where it can process the data the photo contains.

To show how this could work, Google’s Scott Huffman holds his camera up to a concert marquee for a Stone Foxes show and Google Assistant pulls up info on ticket sales. “Add this to my calendar,” he says — and it does.

The integration of Lens into Assistant can also help with translations.

Huffman demonstrates this by holding up his camera to a sign in Japanese, tapping the Lens icon and saying “What does this say?” Google Assistant then translates the text.


Lens was a favorite of several Google I/O attendees for its clear utility. It’s the kind of feature that could make the apps that contain it more uniquely useful. One developer commented to me it was “the first time AI is more than a gimmick.”

Typing a question into Assistant, for example, can feel like just using Google Search in a separate window. Add this new computer vision capability, though, and you have something a browser search box can’t do.

Lens brings Google’s use of AI into the physical world. It effectively acts as a search box, and shows Google adapting to the shift among younger users toward visual media. That same preference has made the social network Snap a magnet for young people, who prefer to communicate with pictures rather than text.

Lens affirms a consistency of focus for Google. Here is augmented reality at work doing exactly what people know Google can do, which is retrieve information from the web.

But in considering how this new visual search option may play out, compare it to voice search, which for now often returns read-outs of whatever would appear at the top of search, sometimes resulting in answers that are inaccurate, offensive or lacking in context.

Google also cleverly incorporated Lens into one of the company’s most-used apps, Photos, which has gained half a billion users in the two years since its launch. The incorporation could help Google become more essential to mobile users by making its mobile apps more essential. That means Google will have a place on users’ phones even if its own hardware like the Pixel phone fails to catch on.

Pichai said in his founders’ letter a year ago that part of the shift to being an AI-first company meant computing would become less device-centric. Lens is an example of that shift playing out on mobile.

The technology behind Lens is essentially nothing new, and that also tells us something about where Google is going. This is not to say that Google is done inventing new technologies, but rather that the company has a lot of existing capabilities it is still assembling into useful products.
