Conscious deep learning is still a utopia

by admin

Until now, all deep learning algorithms have had one thing in common: they can do only one thing at a time. In technical jargon, these algorithms are in fact “artificial narrow intelligence” (ANI): limited artificial intelligences, capable perhaps of defeating the world chess champion, but which, in order to learn to play checkers, must erase every notion of the first game and start training all over again.

Unable to retain what they learned while training for one specific task – recognizing images, translating a language, recommending the next film on Netflix and so on – deep learning algorithms also cannot draw on previous knowledge that could be useful for future tasks (somewhat as we exploit what we learned riding a bicycle when we start driving a moped).

Gato by DeepMind: is it real intelligence?

This limit of current artificial intelligences has always been considered one of the main obstacles on the road to “strong” artificial intelligence, that is, an AI capable of competing with that of the human being. So it was, at least, until we were faced with two very high-profile statements. The first came from an Alphabet (Google) engineer, according to whom the neural network behind the LaMDA language model is “sentient” (a claim following which he was suspended). The second comes from DeepMind, the artificial intelligence research laboratory owned by Alphabet (Google), which recently presented Gato: a single neural network model capable of playing old Atari video games, recognizing images, stacking bricks, guiding a real robotic arm and much more, smoothly switching between tasks.

All in all, Gato is able to perform 604 different tasks. “Game over!”, triumphantly declared Nando de Freitas, one of DeepMind’s principal researchers, announcing that this deep learning system represents the first step towards the conquest of AGI (artificial general intelligence): general artificial intelligences capable, like the human being, of carrying out tasks across a wide spectrum of areas that are also very different from one another.

At the basis of de Freitas’s declarations is the idea that – once the generalist path has been taken, thanks to Gato – it is only a question of creating ever larger models and feeding them more and more data. The rest will come by itself, until human-level artificial intelligence is reached. Although it is undeniable that Gato can move from one task to another without having to erase previous knowledge, some of the limitations of this model – highlighted by Melissa Heikkilä in the MIT Technology Review – can only cool the enthusiasm.


First of all, Gato is able to carry out various tasks, but with significantly lower performance than models that do only one thing. Even more important, however, is another aspect: “Gato may well be a generalist, in the sense that it can do several things at the same time, but we are still at a planetary distance from a ‘general’ AI that can adapt to new tasks, different from those for which the model was trained,” writes Heikkilä. In short, Gato cannot learn functions for which it has not received specific training.

The limits of deep learning

Among the many skeptics of Gato’s achievements – and of the possibility of achieving true artificial intelligence through deep learning – the most critical is probably Gary Marcus, a neuroscientist at New York University and founder of the startup Robust AI. As he already demonstrated in his analysis of GPT-3 – another deep learning model surrounded by enormous expectations – these tools are indeed capable of impressive results (such as writing an article for the Guardian, with careful cutting and stitching by a human editor), but in many cases they make such gross errors of logic that, explains Marcus, “if a human being made them you would think them anything but intelligent.”

For example, when given a text asking how to get a table that was too wide through a door, GPT-3 recommended cutting the door in half horizontally and removing the top part. A terrible solution to the problem. And there are hundreds of absurd examples of this kind.

It is inevitable that this happens, since these are tools capable of finding statistical correlations in an ocean of data (correlations that would be invisible to humans), but without any idea of what they are doing or what the cause-and-effect relationships are. Instead of addressing these limitations, Gary Marcus wrote in a lengthy essay published by Nautilus, the constant and excessive announcements about the potential of deep learning have caused this field to lurch “from one passing trend to another, decade after decade, always promising the moon and only occasionally keeping its promises”.

Among the various failures, one of the most striking was that of Watson, the artificial intelligence system launched in 2011 by IBM that was supposed to replace doctors and revolutionize healthcare, and which instead completely failed in its objective, to the point of being sold off in pieces.

Back in 2016, a father of deep learning such as Geoff Hinton had announced that “radiologists today are like the coyote that has already run off the cliff but has not yet noticed that it is falling. It is quite clear that within five years deep learning will perform much better.” Five years have passed, and no radiologist has yet been replaced by artificial intelligence (which obviously does not rule out that it could happen in the future).

One could also cite the promise of autonomous cars, perpetually around the corner yet still very far from materializing, the failures of predictive policing, and more. “Deep learning, which is basically a technique for pattern recognition,” writes Gary Marcus, “works best when all we need are coarse, ready-to-use results, where the stakes are low and perfection of the results is optional”.

An example is the image recognition that allows us to find all the photos on our smartphone in which there are dogs: a fairly accurate tool, and one that causes no damage of any kind even when it makes mistakes. If, on the other hand, a Tesla fails to recognize a stop sign because the sign is held by a person in the middle of the road – and not mounted on a pole at the roadside, as the artificial intelligence would statistically expect – the risks become unacceptable. The same goes for the experimental chatbots that were supposed to intervene promptly to offer psychological support to people on the verge of suicide, and which instead ended up advising a (fake) patient to end it all.

While the computational power and the amount of data used to train these tools continue to increase, the qualitative leap needed to live up to the expectations placed on artificial intelligence may not be possible. At least, not as long as only deep learning is used: a very powerful statistical tool, but one incapable of real understanding.

A hybrid artificial intelligence

So what needs to be done? According to Gary Marcus (and other experts), a promising avenue could be to create hybrid models combining deep learning and symbolic artificial intelligence. The latter is “old-fashioned” artificial intelligence, to which human programmers provide all the rules necessary to complete a task, instead of letting it learn by itself by analyzing a flood of data. For example, a symbolic system plays chess using the rules and combinations provided by a programmer; a deep learning system instead starts from scratch and plays millions of games, learning to play chess without ever having been explicitly taught the rules.
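To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (real chess engines and deep learning systems are of course vastly more complex, and the training data below is hypothetical): the symbolic side encodes the rook’s movement rule directly, while the “learning” side is only shown a handful of labeled example moves and must infer the pattern statistically.

```python
from collections import Counter

# --- Symbolic approach: a programmer hand-codes the rule ---
def rook_move_is_legal(src, dst):
    """A rook moves along a rank or a file: exactly one
    coordinate changes between source and destination."""
    (r1, c1), (r2, c2) = src, dst
    return (r1 == r2) != (c1 == c2)

# --- Statistical approach: infer the pattern from examples ---
def learn_from_examples(examples):
    """Never told the rule; only sees moves labeled legal/illegal.
    Counts which movement patterns co-occur with legality and
    predicts by majority vote -- a caricature of learning to play
    without ever being taught the rules."""
    counts = Counter()
    for (src, dst), legal in examples:
        pattern = (src[0] != dst[0], src[1] != dst[1])
        counts[(pattern, legal)] += 1

    def predict(src, dst):
        pattern = (src[0] != dst[0], src[1] != dst[1])
        return counts[(pattern, True)] > counts[(pattern, False)]

    return predict

# Hypothetical training data: a few labeled rook moves
examples = [
    (((0, 0), (0, 5)), True),   # along a rank: legal
    (((3, 2), (7, 2)), True),   # along a file: legal
    (((1, 1), (1, 4)), True),
    (((0, 0), (3, 3)), False),  # diagonal: illegal for a rook
    (((2, 2), (2, 2)), False),  # no move at all
]
predict = learn_from_examples(examples)
```

The symbolic version is correct by construction; the statistical one is only as good as its examples, and shown a movement pattern it has never seen, it has no real grounds for an answer. That asymmetry, in miniature, is the trade-off the hybrid approach tries to resolve.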


Both systems have their pros and cons, but one of the main advantages of symbolic artificial intelligence, Marcus writes, lies “in the fact that a large part of the knowledge of the world – from recipes to history to technology – is already available in symbolic form. Trying to build an AGI without using this knowledge, and instead learning everything from scratch (as deep learning systems aim to do), seems an excessive and senseless burden”.

Indeed, one of the most extraordinary recent achievements of artificial intelligence – AlphaFold2, the DeepMind system able to predict the structure of proteins – employs just such a hybrid model. And after all, why should we use only one artificial intelligence method? “The mind doesn’t work in one way,” write cognitive scientists Chaz Firestone and Brian Scholl. “On the contrary, the mind is made up of different parts that function differently. Seeing a color requires a different mechanism than planning a vacation, moving a limb or feeling an emotion.” The same could also be true for artificial intelligences, which should therefore be rounded out by combining algorithms and models of different types.

As Kai-Fu Lee wrote in AI 2041 (being translated for Luiss University Press), “we have not yet understood many of the challenges posed by artificial intelligence, or at least we have not made much progress on them. For example, we don’t know how to model creativity, strategic thinking, reasoning, counterfactual thinking, emotions and consciousness”.

These challenges will probably require many more discoveries, concludes Kai-Fu Lee, at least as significant as that of deep learning. In short, the road to true artificial intelligence is still very long.
