Microsoft announced that it will enhance its products with visual search functions similar to Google Lens.
The feature relies on computer vision algorithms trained on large image datasets so that they can identify specific details in a photograph. Thanks to this deep-learning capability, it can recognize very fine-grained data, for example the breed of a dog, or identify a particular flower.
According to the Microsoft team, the visual search function will be integrated into the Bing apps, giving users the ability to find related content starting from an image. This saves them a couple of clicks and spares them from having to type keywords to refine their search.
For example, a user can take a picture of a flower and run Bing’s visual search from their mobile device to get more details (its name, how to grow it, its origin, etc.).
The process is simple: the user just takes a picture, or picks an existing one, to try the feature out. If an image contains several elements, the user can select the specific object to search for.
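Under the hood, image-based search clients of this kind typically upload the photo to a server as a multipart/form-data POST and receive structured matches back. The sketch below, using only the Python standard library, shows how such an upload request could be assembled; the endpoint URL and the "image" field name are placeholders for illustration, not Bing’s actual API.

```python
import io
import mimetypes
import uuid


def build_visual_search_request(image_bytes: bytes, filename: str,
                                endpoint: str = "https://example.invalid/visualsearch"):
    """Assemble a multipart/form-data POST for an image-based search.

    NOTE: the endpoint and field name are hypothetical; a real client
    would use the search provider's documented visual-search API.
    Returns (url, headers, body) ready to hand to an HTTP client.
    """
    boundary = uuid.uuid4().hex
    # Guess the MIME type from the filename, falling back to a generic type.
    content_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"

    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="image"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    body.write(image_bytes)                      # raw photo bytes
    body.write(f"\r\n--{boundary}--\r\n".encode())  # closing boundary

    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return endpoint, headers, body.getvalue()


url, headers, payload = build_visual_search_request(b"\xff\xd8fake-jpeg", "flower.jpg")
```

The returned tuple could then be sent with `urllib.request` or any HTTP library; the server side would run the classifier and respond with the recognized objects and related content.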
At the moment, these intelligent visual search features are available only to U.S. users using Bing’s iOS and Android apps and Microsoft Edge for Android.