
The future of robots according to Intel

by admin

Despite the extraordinary results obtained over the last ten years, image recognition based on deep learning (the method now practically synonymous with artificial intelligence) has several limitations: the enormous amount of data required for training, the inability to recognize that an object seen from different viewpoints is still the same object, "catastrophic forgetting" (which prevents a neural network from learning something new without erasing what it learned before), and the enormous energy consumption of the training phase.

These limits make deep learning unsuitable for situations that require rapid adaptation to changing contexts and great flexibility, as in the case of robots that must interact with humans in healthcare, elderly care or warehouses.

This is where the new approach developed at Intel Labs comes in: also based on neural networks, it is called "interactive and continuous learning". Presented in an academic paper and developed in collaboration with the Italian Institute of Technology (IIT) and the Technical University of Munich, the new method, tested on a simulation of the iCub robot created by the IIT in Genoa, allows artificial intelligences to learn in a way much closer to how we humans learn to recognize and memorize new objects.


In the simulated environment, the robot actively perceives objects by moving a video camera that serves as its eye. "If we place a series of new objects that it does not yet know in front of the robot, iCub is able to interact visually with them, one at a time," Intel Labs researcher and paper author Yulia Sandamirskaya explains to Italian Tech. "When it does not recognize an object, it turns to the user for an explanation. The user then gives the object a name, the robot picks it up through speech recognition, and it activates the neuron in charge of storing the new type of object."

Learning therefore takes place at the robot's own request, when it is faced with unknown objects, and updates only a specific neuron of the neural network, thus avoiding the aforementioned catastrophic forgetting. "Using 3D objects, it also becomes possible to teach it to recognize a particular object from different angles and distances," Sandamirskaya continues. "Obviously, the robot learns to recognize only that particular object and not, for example, every coffee cup in the world. The task here is simpler than traditional image classification, which allows us to use less data and smaller neural networks."
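The idea of adding a dedicated neuron per object, instead of retraining the whole network, can be illustrated with a toy sketch. This is not Intel's actual architecture; it is a minimal prototype-based learner in which the `PrototypeLearner` class, the similarity threshold and the feature vectors are all illustrative assumptions, but it shows why learning by addition avoids catastrophic forgetting:

```python
import numpy as np

class PrototypeLearner:
    """Toy continual learner: one stored prototype ("neuron") per object.

    Learning a new object only adds a prototype; existing prototypes are
    never modified, so previously learned objects cannot be forgotten.
    """

    def __init__(self, threshold=0.8):
        self.threshold = threshold   # minimum cosine similarity to "recognize"
        self.prototypes = {}         # label -> stored feature vector

    def recognize(self, features):
        """Return the best-matching label, or None if nothing is close enough."""
        best_label, best_sim = None, self.threshold
        for label, proto in self.prototypes.items():
            sim = np.dot(features, proto) / (
                np.linalg.norm(features) * np.linalg.norm(proto))
            if sim > best_sim:
                best_label, best_sim = label, sim
        return best_label

    def learn(self, features, label):
        """Store a new prototype for an unknown object (one new 'neuron')."""
        self.prototypes[label] = np.asarray(features, dtype=float)

# Interactive loop: the robot asks for a name only when recognition fails.
learner = PrototypeLearner()
cup = np.array([1.0, 0.1, 0.0])
if learner.recognize(cup) is None:        # unknown object -> ask the user
    learner.learn(cup, "cup")             # the user supplies the label
assert learner.recognize(cup) == "cup"    # now recognized
```

Learning a second object later leaves the "cup" prototype untouched, which is the whole point of the per-neuron update described above.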

The robot can thus learn on the spot and gain greater flexibility. These results are made possible by the architecture of Intel's experimental processors: neuromorphic chips (called Loihi) whose structure mimics that of brain cells, in order to run algorithms that can handle the uncertainties of the natural world. Loihi is made up of over 130,000 artificial neurons, which exchange information through a so-called "spiking neural network".
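In a spiking neural network, neurons communicate through discrete spike events rather than the dense numeric activations of classic deep learning. A minimal sketch of the principle, using a single leaky integrate-and-fire neuron (the threshold and leak values here are illustrative assumptions, not Loihi's actual parameters):

```python
def lif_spikes(input_current, v_thresh=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks toward zero, accumulates input, and emits
    a binary spike (resetting to zero) whenever it crosses the threshold,
    so information travels as sparse spike events.
    """
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i           # leak the potential, then integrate input
        if v >= v_thresh:
            spikes.append(1)       # fire a spike
            v = 0.0                # reset the membrane potential
        else:
            spikes.append(0)       # stay silent
    return spikes

print(lif_spikes([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # -> [0, 0, 1, 0, 0, 1]
```

Because a silent neuron consumes essentially no energy on neuromorphic hardware, this sparse, event-driven signaling is what underlies the low power consumption discussed below.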


The neuromorphic processor allows for extremely low energy consumption (up to 175 times lower than a classic neural network), while achieving accuracy comparable to, and in some cases higher than, other image-recognition methods running on CPUs or GPUs. Energy consumption is obviously of crucial importance if we imagine a future in which robots become real collaborators of human beings and spread more and more. "The move to neuromorphic chips represents a step forward as significant as the one from CPUs to GPUs, the graphics processors that have proven particularly well suited to training neural networks but still require enormous amounts of power," Sandamirskaya continues.

Beyond sustainability, a striking element is the robot's ability to interact with its environment in real time, a fundamental step forward. "Our world is so complex that I don't think we will ever be able to capture it in all its complexity in a database. So if we want robots to be able to act in unconstrained environments, they must at least be able to adapt to change," Sandamirskaya adds.

But is it really necessary that these robots, like iCub and many others, be anthropomorphic? Wouldn't they be better off with the form most suited to the specific job they were designed for? "There may indeed be better-than-human solutions for some specific tasks. However, the human form offers two advantages. First, it can be adapted to any type of task, because it is versatile: the human arm, for example, is multifunctional and a great solution in many cases. The other advantage is that, if the robot has to work in an environment designed for human beings, it is useful for it to have a human form."


At this point, it is inevitable to ask how long it will take for this scenario to become reality. "It is difficult to make predictions, and some further discovery will be needed," Sandamirskaya concludes. "But it could also happen very quickly, as with the iPhone: after years and years of attempts, the 2007 presentation suddenly changed everything. We need someone to identify the right architecture and the best way to combine all the components robots need, including in terms of safety, reliability and more. By now we have a lot of experience with deep learning, we have a better understanding of its limitations, and knowledge of robotics is much greater. I think the sector is ready."
