A new tool published by Google analyzes your images. Its machine learning algorithm tells you what it thinks the image is about and what it might be relevant for.
This tool demonstrates Google’s AI and Machine Learning algorithms for understanding images. It’s a part of Google’s Cloud Vision products.
Does Cloud Vision Tool Reflect Google’s Algorithm?
Most tools and search commands that Google offers have historically not reflected the algorithm that Google uses for rankings. So it’s likely this tool does not offer a glimpse into how Google ranks images.
However, it is a great tool for understanding how Google’s AI and Machine Learning algorithms can understand your image.
This information can be used to improve your image so that it accurately reflects the topic of your web page.
What is the Google Image Tool?
The tool is a way to demo Google’s Cloud Vision API. Cloud Vision API is a cloud service that can allow you to add image analysis features to apps and websites.
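As a sketch of how an application might call that API: Cloud Vision's REST endpoint (`POST https://vision.googleapis.com/v1/images:annotate`) accepts a JSON body with a base64-encoded image and a list of the feature types that correspond to the demo tool's tabs. The image bytes below are a placeholder, but the endpoint and feature-type names come from Cloud Vision's documented request format.

```python
import base64
import json

# Feature types correspond to the tabs in the drag-and-drop demo:
# Faces, Objects, Labels, Web Entities, Safe Search, and Text (OCR).
FEATURES = [
    {"type": "FACE_DETECTION"},
    {"type": "OBJECT_LOCALIZATION"},
    {"type": "LABEL_DETECTION", "maxResults": 10},
    {"type": "WEB_DETECTION"},
    {"type": "SAFE_SEARCH_DETECTION"},
    {"type": "TEXT_DETECTION"},
]

def build_annotate_request(image_bytes: bytes) -> dict:
    """Build the JSON body for POST https://vision.googleapis.com/v1/images:annotate."""
    return {
        "requests": [
            {
                # The API expects the raw image bytes base64-encoded.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": FEATURES,
            }
        ]
    }

# Placeholder bytes stand in for a real image file.
body = build_annotate_request(b"placeholder image bytes")
print(json.dumps(body)[:60])
```

Sending this body (with an API key or OAuth credentials) returns one annotation object per requested feature, which is the same data the demo tool renders in its tabs.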
The tool itself allows you to upload an image and it tells you how Google’s machine learning algorithm interprets it.
These are seven ways Google’s image analysis tool classifies uploaded images:
Faces
The “faces” tab provides an analysis of the emotion expressed by the image. The accuracy of this result is questionable.
As you can see below, John Mueller is clearly smiling in the image, but Google’s image analysis tool didn’t catch it.
Objects
The “objects” tab shows what objects are in the image, like glasses, person, etc. This works very well.
Labels
The “labels” tab shows details about the image that Google recognizes, like ears and mouth, but also conceptual aspects like portrait and photography.
Web Entities
This tab shows descriptive words that are associated with the image across the web. In the case of the John Mueller image, a Taiwanese site had copied the image from Search Engine Journal, which resulted in Chinese-related entities being assigned to the image.
In the above image, there are references to DuckDuckGo and Google China. This is because a Taiwanese site copied the original image and used it in an article that had DuckDuckGo as part of its topic. That web content is reflected in this part of the image analysis.
If Google uses the web to understand what a particular image means, I don’t believe that scraper sites can influence the meaning. Google uses more than one criterion for understanding an image.
I find the Web Entities tab to be of particular interest. It shows how Google itself is interpreting what the image means by what is published online with that image.
Safe Search
Safe Search shows how the image rates for potentially unsafe content. The descriptions of potentially unsafe images are as follows:
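In the API response itself, these ratings come back as a `safeSearchAnnotation` with one likelihood value per category (adult, spoof, medical, violence, racy). The likelihood enum values below are from the API's documented response format; the helper function and the example annotation values are my own illustration.

```python
# Likelihood values the API can return, ordered from least to most likely.
LIKELIHOOD_ORDER = [
    "UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def flag_unsafe(safe_search: dict, threshold: str = "LIKELY") -> list:
    """Return the categories whose likelihood meets or exceeds the threshold."""
    cutoff = LIKELIHOOD_ORDER.index(threshold)
    return [
        category
        for category, likelihood in safe_search.items()
        if LIKELIHOOD_ORDER.index(likelihood) >= cutoff
    ]

# Example shaped like the API's safeSearchAnnotation field
# (the likelihood values here are made up for illustration).
annotation = {
    "adult": "VERY_UNLIKELY",
    "spoof": "POSSIBLE",
    "medical": "UNLIKELY",
    "violence": "LIKELY",
    "racy": "VERY_UNLIKELY",
}
print(flag_unsafe(annotation))  # → ['violence']
```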
Here’s another example:
This example demonstrates Google’s ability to read text:
Google uses image captions, alt text, file name and the text surrounding the image in order to understand the image and use it for ranking purposes.
Google hasn’t revealed if they use text on images for ranking purposes. As you can see above, Google has the ability (through Optical Character Recognition), to read words in images.
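When text detection is requested, the API's response includes a `textAnnotations` list whose first entry holds the entire recognized string, with the remaining entries breaking it into individual words. A minimal sketch of pulling the full text out, using a mock response (the text content is made up for illustration):

```python
def extract_text(response: dict) -> str:
    """Pull the full detected text from a Vision API response.
    The first entry in textAnnotations holds the entire recognized string;
    the remaining entries are individual words with bounding boxes."""
    annotations = response.get("textAnnotations", [])
    return annotations[0]["description"] if annotations else ""

# Mock response shaped like the API's output (contents are made up).
mock_response = {
    "textAnnotations": [
        {"description": "Search Engine Journal"},
        {"description": "Search"},
        {"description": "Engine"},
        {"description": "Journal"},
    ]
}
print(extract_text(mock_response))  # → Search Engine Journal
```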
This is an interesting diagnostic tool for getting an idea of how Google might understand your images. It can also hint at whether you need to optimize an image better.
Upload an image and see what Google thinks about your image:
https://cloud.google.com/vision/docs/drag-and-drop
Images by Shutterstock, Modified by Author
Screenshots by Author, Modified by Author