
How Google’s project to make robots more human-like works

by admin

A robot wanders around an office kitchen. It looks around, seemingly analyzing its surroundings. At one point it opens a drawer and takes out a sponge, grabs it, and carries it to a table where a group of people are waiting for it.

That robot is not like the others. It is the result of a Google experiment that, in collaboration with Everyday Robots, another Alphabet subsidiary, trained it to better understand the needs of human beings. What you see in the video is not that surprising in itself, but it becomes so when you consider the original request: “Will you help me clean up this mess?”.



Artificial intelligence and robots: PaLM-SayCan

Robots have been widely used around the world for years, especially in industrial contexts. In 2021 alone, demand grew by 15% in Europe and by 50% in Italy. In these settings, the devices in use are what are called closed machines: machines programmed to perform predetermined, often repetitive tasks based on very precise inputs.

The robot in the video is something else. It uses PaLM, one of Google’s most advanced large language models: a conversational artificial intelligence that allows it to understand and process textual requests from human beings. In short, the robot has an AI inside that, trained on billions of texts, has learned how human beings speak, has understood how they reason, and is therefore able to respond to their needs.


The point is that this is not enough. It may be enough, for example, for a chatbot (like Meta’s BlenderBot) or for a virtual assistant. When it comes to robots, there is one more element to take into consideration: corporeality, existence within a three-dimensional space. For this reason, Google has combined PaLM with the characteristics of the Everyday Robots machines, in particular their ability to analyze the environment and learn from experience. From this mix comes PaLM-SayCan, with which Mountain View intends to “improve the overall performance of the robot and the ability to perform more complex and abstract tasks by drawing on the knowledge of the world encoded in the linguistic model”.

It sounds complicated, but it isn’t. Imagine, for a moment, that we have one of Google’s robots and ask it something like “I want to take a break, can you bring me something to drink and a snack?”. The human request first triggers a linguistic elaboration, that is, an understanding of the question and its context. Then comes a solution: the machine decides that the most suitable answer to that request is water and an apple.

At that point, however, the surrounding environment enters the scene. PaLM-SayCan matches the linguistic interpretation with what is called the affordance score, a score based on the likelihood of successfully performing that action in that environment. From the combination of these two evaluations a credible and feasible solution emerges, which allows the robot to respond to the human request (Google has made a mini-site available where you can experiment with these small everyday scenarios).
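The combination described above can be sketched in a few lines: each candidate action receives a language score (how relevant the model judges it to the request) and an affordance score (how feasible it is in the current scene), and the robot picks the action with the best product of the two. The function name and the numbers below are invented for illustration; this is a minimal sketch of the idea, not Google’s implementation.

```python
# Hypothetical sketch of PaLM-SayCan-style action selection.
# "language" = how relevant the LLM deems the action to the request;
# "affordance" = how likely the robot is to succeed at it here and now.
# All values below are made up for illustration.

def pick_action(candidates):
    """Return the candidate with the highest combined score,
    where combined score = language relevance x feasibility."""
    return max(candidates, key=lambda a: a["language"] * a["affordance"])

# Request: "I want to take a break, can you bring me
# something to drink and a snack?"
candidates = [
    {"action": "find a water bottle", "language": 0.60, "affordance": 0.90},
    {"action": "find an apple",       "language": 0.55, "affordance": 0.85},
    {"action": "mop the floor",       "language": 0.05, "affordance": 0.70},
]

best = pick_action(candidates)
print(best["action"])  # the water bottle: both relevant and feasible
```

Note how “mop the floor” is highly feasible but irrelevant to the request, so the multiplication filters it out; a relevant but infeasible action would be filtered out the same way.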




What’s missing

The road to an almost human interaction with robots, however, is still long, as Google explains: “Whether it is moving around crowded offices or understanding everyday phrases, we still have many challenges to solve in robotics. For now, these robots are getting better at fetching snacks for Googlers in our kitchens.”

The challenge, in short, is bringing the model into ever wider contexts. For the moment, Mountain View describes it as a research project, with no plans for commercialization. It will take some time before a robot can actually bring us a snack.
