Google Branches Out To ‘Multisearch’ Where You Can Input Text + Images At Once

Screenshot via Google

Almost everyone can agree that Google’s search tool is a powerful asset in daily life. Now, the tech giant is taking things further by letting users make queries visually. Specifically, the feature, called ‘multisearch’, lets you enter both images and text to prompt more precise results.

Picture this: You’re at a café and have just spotted a hanging lamp that you really like. You proceed to take a photo and upload it to Google to search for something similar.

When Google returns the results, it shows you the very same lamp. It looks great, but you wonder how it would look in your living room. I’d prefer it in bronze instead of white, you think to yourself. So you enter “bronze” in the search box, and there it is.

The new multisearch feature, first teased in September last year, allows users to explore objects in front of them and narrow searches down to color, brand, or visual qualities.

Image via Google

For now, it runs as a beta option for US-based users of the Google mobile app. Google is expected to launch the tool officially in the coming months, once testing concludes.

To use multisearch, Android or iOS users first need to tap the camera icon at the end of the search bar and upload an image. They can then add text by tapping the plus sign.

Video via Google

Some applications the search engine has cited for multisearch include capturing a photo of a dining set and adding “coffee table” to find a matching table, as well as snapping a photo of your rosemary plant and requesting “care instructions.”

Multisearch is powered by Google’s new Search artificial intelligence technology, the Multitask Unified Model—or MUM for short. MUM multitasks (as mothers do), simultaneously recognizing natural language and dissecting pictures for relevant areas.
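
Google hasn’t published MUM or the multisearch API, so the mechanics are only described at a high level. As a rough illustration of the underlying idea, the sketch below combines an image with a text refinement using the open-source CLIP model (via the sentence-transformers library) and ranks a small, made-up catalogue by similarity. The file name, the catalogue entries, and the simple embedding average are assumptions for demonstration, not Google’s method.

```python
# Illustrative sketch only: MUM and the multisearch API are not public.
# This uses the open-source CLIP model (sentence-transformers) to show how an
# image query can be refined with text in a shared embedding space.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedding space

# Embed the photo of the lamp and the text refinement ("bronze").
image_emb = model.encode(Image.open("lamp.jpg"), convert_to_tensor=True)  # hypothetical file
text_emb = model.encode("bronze", convert_to_tensor=True)

# A naive composed query: average the two embeddings so results should match
# both the visual appearance and the textual constraint.
query_emb = (image_emb + text_emb) / 2

# Hypothetical product catalogue to rank against.
catalogue = [
    "bronze hanging pendant lamp",
    "white hanging pendant lamp",
    "bronze floor lamp",
    "wooden coffee table",
]
catalogue_emb = model.encode(catalogue, convert_to_tensor=True)

# Rank catalogue items by cosine similarity to the composed query.
scores = util.cos_sim(query_emb, catalogue_emb)[0]
for item, score in sorted(zip(catalogue, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {item}")
```

Averaging embeddings is only a simple baseline for this kind of composed image-and-text retrieval; a production system like Google’s presumably goes well beyond it.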

Although the AI model is trained to understand 75 languages, the beta version currently only accommodates searches in English. Liz Reid, vice president of Google Search, also tells CNN Business that multisearch will be optimized for shopping-based searches before expanding support in the future. Clearly, the function only scratches the surface of its potential.

“In the future, [MUM] can expand to more modalities like video and audio,” Google shared in an explainer published last May.

Image via Google

[via http://www.designtaxi.com/news/418339/Google-Branches-Out-To-Multisearch-Where-You-Can-Input-Text-Images-At-Once/]
