
Google adopts a “Robot Constitution” inspired by Asimov’s famous laws

by admin

DeepMind presents a series of innovations for creating robots that can integrate into everyday life, drawing inspiration in part from the three laws of robotics
Isaac Asimov’s laws of robotics addressed the need for safety (first law), service to humans (second law), and self-preservation (third law) in machines equipped with artificial intelligence. In the science-fiction universe Asimov created, these “three laws” form the foundation of robots’ behavior and of their interactions with humans. Drawing inspiration (in part) from these three laws, the Google DeepMind robotics team has created a sort of “Robot Constitution”, described as a set of safety-focused prompts that instruct the language model to avoid carrying out tasks that could compromise the safety of humans and animals.
The “Robot Constitution” is just one part – perhaps the most evocative – of the important technological updates DeepMind presented on its official blog. Together they form a fundamental building block for creating robots able to encode typically “human” practical goals, so that they can make faster, better, and safer decisions in all the activities entrusted to them.
DeepMind is pursuing this path through experimentation with the AutoRT system, which harnesses large artificial intelligence models by integrating Large Language Models (LLMs) and Visual Language Models (VLMs). AutoRT can simultaneously deploy and direct a fleet of robots, equipped with cameras and actuators, to collect training data in new environments. Each robot uses a VLM to understand its surroundings and an LLM to tackle complex tasks, such as “putting the coffee on the desk.”
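The pipeline described above – a VLM that describes the scene, an LLM that proposes candidate tasks, and “constitution” rules that screen out unsafe ones – can be illustrated with a minimal Python sketch. DeepMind has not published AutoRT’s code, so every function name and rule below is a hypothetical stand-in, not the actual system:

```python
# Hypothetical sketch of an AutoRT-style control loop. The VLM and LLM are
# replaced by stub functions; the "constitution" is a list of rules that
# each candidate task must satisfy. All names are illustrative assumptions.

def vlm_describe(scene_image):
    # Stand-in for a Visual Language Model: lists objects seen by the camera.
    return ["coffee cup", "desk", "scissors"]

def llm_propose_tasks(objects):
    # Stand-in for an LLM that suggests tasks involving the visible objects.
    return [f"pick up the {obj}" for obj in objects]

# Toy safety rules in the spirit of a "Robot Constitution".
ROBOT_CONSTITUTION = [
    lambda task: "scissors" not in task,  # avoid sharp objects
    lambda task: "human" not in task,     # never manipulate people
]

def filter_tasks(tasks):
    # Keep only tasks that satisfy every constitutional rule.
    return [t for t in tasks if all(rule(t) for rule in ROBOT_CONSTITUTION)]

objects = vlm_describe(scene_image=None)
safe_tasks = filter_tasks(llm_propose_tasks(objects))
print(safe_tasks)  # the scissors task is screened out
```

The key design point the sketch captures is that safety filtering happens between task proposal and task execution, so the language model never directly commands the robot.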


The experimentation

“Before robots can be integrated into our daily lives, they must be developed responsibly, with solid research demonstrating their safety in the real world,” DeepMind writes in the official announcement. The team programmed the robots with robust safety measures already widespread in large manufacturing companies, such as automatically stopping a robot if the force on its joints exceeds a certain threshold. Furthermore, in the experiment conducted by the Google team, all robots were kept within sight of a human supervisor equipped with an off switch.

The extensive experiment engaged the researchers for seven months, during which the system proved capable of safely coordinating up to 20 robots simultaneously, and up to 52 unique robots in total, across a variety of scenarios in the company’s offices, collecting a diverse dataset of 77 thousand robotic trials over 6,600 unique tasks. The robots used in the experiment are not humanoids but robotic arms mounted on a mobile base. For each robot, the system uses the camera to understand the surrounding environment (VLM) and then, via the LLM, suggests an appropriate list of tasks the robot can perform.

The other technologies tested by Google

The second innovation presented by the DeepMind team is called Self-Adaptive Robust Attention for Robotics Transformers, or SARA-RT, which makes the Robotics Transformer models Google uses in its latest robotic control systems more efficient (a 10.6% increase in accuracy and a 14% increase in speed). With SARA-RT, DeepMind researchers applied a new fine-tuning method, called “up-training”, which converts the model’s quadratic complexity into linear complexity, increasing the speed of the original model while preserving its quality.

Finally, DeepMind researchers presented RT-Trajectory, a model that adds two-dimensional visual contours describing the movements of the robot and its appendages to training videos. The model takes each video in a training dataset and overlays it with, for example, the trajectory of the robotic arm as it wipes a countertop. Such an activity is intuitive for a human being, but a robot could translate it in many different ways.
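The quadratic-to-linear conversion attributed to SARA-RT’s “up-training” refers to the attention mechanism: standard attention computes softmax(QKᵀ)V, which materializes an n×n matrix for a sequence of length n, while linear-attention variants apply a feature map φ and reassociate the product as φ(Q)(φ(K)ᵀV), so the cost grows linearly with n. The toy NumPy sketch below illustrates that general principle only; the feature map (ELU+1) and all details are common linear-attention conventions, not DeepMind’s actual method:

```python
import numpy as np

def linear_attention(Q, K, V):
    # Kernelized attention: phi(Q) @ (phi(K).T @ V), normalized per query.
    # Reassociating the product avoids the n x n attention matrix, so cost
    # scales as O(n * d^2) instead of O(n^2 * d) in sequence length n.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, positive
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V             # (d, d_v): summarizes all keys/values once
    z = Qf @ Kf.sum(axis=0)   # (n,): per-query normalizer
    return (Qf @ kv) / z[:, None]

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Because the feature map is strictly positive, each output row is a convex combination of the value rows, which is what lets the linearized form stand in for softmax attention while keeping quality close to the original.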
