It has been 23 years since Google first went online, and now the world’s most-used search engine is introducing a radical change in the way it works. Earlier this year, at its annual I/O developer conference, the company unveiled the Multitask Unified Model, or MUM: an algorithm that can simultaneously search for information across a wide range of formats, including text, images, and videos, and draw insights and connections between topics, concepts, and ideas. MUM’s first practical applications will arrive shortly, starting with Google Lens.
Google Lens is the image recognition technology that lets you use your smartphone camera for a variety of tasks, such as real-time translation, identifying plants and animals, copying and pasting text from photos, finding objects similar to the one framed by the camera, getting help with math problems, and more.
Google will leverage MUM’s capabilities to update Google Lens so that users can add text to visual searches and ask questions about what they see; by combining images and text, it will be possible to get more precise and more useful results. For example, say a part of our bicycle breaks and we need to know how to fix it: it will no longer be necessary to search for “how to repair the hub”; we can just frame the damaged component and type “how to repair”. If you already know the name of the part, this changes little, but not everyone knows the exact term, and this is where image recognition comes into play. The novelty is that the answer will come not only as text but also as images and videos; in the latter case, the result will point to the exact moment in the video where the solution to our problem appears.
This feature will be available first on YouTube (which, remember, is owned by Mountain View), and later on other video platforms. At the moment, Liz Reid, vice president of Google Search, explains to Italian Tech, “MUM does not derive information from the content of a video’s individual frames, but from audio transcripts automatically generated using voice recognition”.
The new algorithm is a thousand times more powerful than its predecessor, which was called BERT: the neural network it is built on has a thousand nodes, designed to mimic the synapses of the human brain, with each node acting as a decision point through which the algorithm routes the user’s search. “MUM knows a lot more about the world,” Reid adds, so it can provide more useful and more precise results. Language differences are not a barrier either: MUM currently supports 75 languages.
But Google goes even further, relying heavily on inspiration, as if the “I’m Feeling Lucky” button that disappeared from the search engine’s homepage a few years ago had been incorporated into the algorithm. Searches can be narrowed or expanded, a bit like zooming with a camera lens. And on a topic like acrylic paints, you can really learn everything: from their chemical composition to the artists who used them most often, from how to remove stains from clothes to advice on color combinations.
Users will be able to perform reverse image searches while browsing in the Google iOS app or the Chrome desktop browser. Selecting an image will bring up similar images found online, which could help shoppers find where to buy the items seen in photos and ultimately lead them to Google Shopping (and therefore, in the future, divert traffic that might otherwise go to Amazon).