From Perceptron to Google: all the times we thought an AI was sentient


“This machine will be the first device to think like the human brain”, representing “the embryo of an electronic computer that, it is thought, will be able to walk, talk, see, write, reproduce itself and be aware of its own existence”. Furthermore, the most advanced models will be able to “recognize people, call them by name and instantly translate from one language to another”.

These lines come from a New York Times article titled The electronic brain that teaches itself, which reports many of the expectations that still surround artificial intelligence today. The article in question, however, is over 60 years old: it was published on July 13, 1958, and its protagonist is the Mark I Perceptron, a pioneering artificial intelligence system presented to the public that very day. Despite the extraordinary expectations, at its first appearance the Perceptron was only able to distinguish right from left: “But it is expected to be finished within a year, at a cost of 100 thousand dollars”, the American newspaper confidently reported.

Funded by the US Navy and created by Frank Rosenblatt, a psychologist at Cornell University, the Perceptron was a gigantic machine, thickly covered in cables and composed of motors and knobs connected to 400 light detectors. This instrumentation made it possible to simulate the behavior of 8 neurons, which in turn were able to learn using a method called supervised learning.

Although it was a rudimentary system, with only two layers of neurons, one for the input (the data) and the other for the output (the results), the basic functioning of the Perceptron was the same as that of today’s deep learning (whose neural networks have billions of nodes): a training system based on trial and error, in which the connections that produce the correct result are strengthened, while those that lead to errors are weakened.
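
To make that rule concrete, here is a minimal sketch in Python of perceptron-style trial-and-error training. The toy one-row “retina”, the learning rate and the epoch count are illustrative assumptions, not details of Rosenblatt’s actual Mark I hardware:

```python
# Minimal sketch of the perceptron learning rule described above.
# The toy 1x4 "retina", learning rate and epoch count are invented
# for illustration; they are not details of the original Mark I.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Trial-and-error training: after each mistake, strengthen or
    weaken the connections in proportion to the error."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Single output "neuron": weighted sum passed through a threshold
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            # Correct answers leave the weights untouched (error = 0);
            # wrong ones shift them toward the right answer
            error = target - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy task echoing the 1958 demo: telling "right" from "left".
# Each sample is a 1x4 strip of light detectors; the label is 1
# when the lit cell is in the right half.
samples = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
labels = [0, 0, 1, 1]
weights, bias = train_perceptron(samples, labels)
print(weights, bias)
```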

The path followed at Cornell (inspired by the theories of McCulloch and Pitts, the first to create a computational model of the neuron) was correct, but what we have today in large quantities was missing: computing power and immense amounts of data. It was above all this deficiency that caused the failure of the very first neural networks, which, instead of reaching self-awareness, were quickly shelved.

In 1966, it was Marvin Minsky, a pioneer of so-called symbolic artificial intelligence (which, instead of learning independently from data, follows the instructions and rules provided by programmers), who gave life to another experiment surrounded by high expectations. In his lab at MIT, he tried to connect a video camera to a computer to allow it to see. Having read Alan Turing’s seminal essay Computing Machinery and Intelligence, Minsky was convinced that to equip a machine with a real brain it was first necessary to provide it with the sight, hearing, smell and other senses needed to experience the world around it.

The experiment failed and Minsky soon had to give up: it was not possible to replicate the human eye by connecting a camera to a computer. According to the scientist, however, it would nonetheless soon be possible to give life to a truly intelligent AI. In a 1970 interview with Life magazine, he declared that “in from three to eight years we will have a machine with the general intelligence of an average human being. A machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and, a few months after that, its powers will be incalculable”.

More than fifty years have passed since then and none of this has happened. Around 2012, however, increased computing power and the quantity of available data revived neural networks (already at the basis of the Perceptron) and kicked off the deep learning revolution, thanks to which extraordinary successes have been achieved in the field of artificial intelligence. Deep learning (now used for image recognition, for translations, and to predict what we’d like to watch on Netflix) quickly rekindled the hope that the creation of a conscious AI might be around the corner.

One of the most intriguing examples is certainly that of GPT-3, the neural network specializing in natural language processing presented in 2020 by OpenAI, the artificial intelligence research company backed by Microsoft. If the 1950s Perceptron possessed 400 light detectors and was able to simulate eight neurons, GPT-3 is equipped with 175 billion parameters and used 450 GB of information for training, including the entire English-language Wikipedia.

This amount of data and computing power allowed GPT-3 to achieve extraordinary results, including the drafting of a long editorial published in the Guardian, in which it flawlessly articulated the reasons why humans shouldn’t be afraid of the most powerful deep learning algorithm ever seen at work: “Why, you might ask, would humans purposely put themselves in danger? Isn’t the human being the most advanced creature on the planet? Why should they think that something inferior, in a purely objective sense, could destroy them?”. GPT-3 then promptly answered these questions by citing The Matrix, analyzing the consequences of the Industrial Revolution, discussing the etymology of the word robot (“forced to work”) and much more.

Is it still possible to call a piece of software capable of such subtle arguments unconscious? Actually, yes. Not only because GPT-3 relied on the collaboration of a human editor, who carefully cut and stitched together the dozens of versions produced by the machine to create a better one, but above all because (as explained by Gary Marcus, a neuroscientist at New York University) systems like GPT-3 “do not learn what is happening in the world, but learn the way people use words in relation to other words”. Ultimately, their work is a kind of colossal statistical copy-paste, in which the machine predicts which sentences, within its immense database, are most likely to be coherent with the preceding ones, without having any understanding of what is actually being said.
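
To see what this “statistical copy-paste” looks like in its most stripped-down form, here is a toy sketch in Python: a bigram model that chooses each next word purely from co-occurrence counts. The miniature corpus and the function name are invented for illustration, and GPT-3 itself is a vastly larger neural network rather than a table of counts, but the predict-the-most-likely-continuation principle is the same:

```python
from collections import Counter, defaultdict

# Toy bigram model: it picks the next word purely from how often
# words follow one another in its corpus, with no grasp of meaning.
# The corpus is invented for illustration and deliberately echoes
# Gary Marcus's point about words in relation to other words.
corpus = (
    "people use words in relation to other words . "
    "machines learn the way people use words in context ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely continuation of `word`."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short, superficially coherent continuation,
# one most-likely word at a time.
word, sentence = "people", ["people"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # e.g. "people use words in relation to other"
```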

Finally, there was the case of LaMDA, the Google system that convinced one of its programmers, Blake Lemoine, that it was sentient, by giving often very coherent answers to his questions. On closer inspection, however, one notices that LaMDA merely replicated the behavior of humans, identifying the answers that were most likely to be convincing.

As the scientist Aenn Matya reported on Twitter, when Lemoine asked “what makes you happy?”, LaMDA replied: “Spending time with friends and family”. Since it is software, and therefore obviously has neither friends nor family, this answer shows that it is only imitating human behavior, without any self-awareness. LaMDA has simply learned to statistically stitch together the billions of pieces of data it received, and so to reproduce the voice of human beings.

There are many other famous cases in which we have mistaken a more or less credible imitation for consciousness and intelligence. Eliza is the bot created in 1966 by Joseph Weizenbaum to imitate the behavior of a psychotherapist, and numerous patients found it credible (you can still try it online). Eugene Goostman is instead the bot that, by pretending in 2012 to be a Ukrainian boy speaking English, managed to convince 30% of the judges in a competition that it was a real person, thus passing (albeit with a few tricks) the Turing test.

From the 1950s to today, then, cases in which we thought intelligent machines were among us, or at least around the corner, have repeated themselves cyclically. Yet in each of these cases it was we ourselves who projected awareness and intelligence onto machines designed to imitate us as well as possible. A strange short circuit, one that perhaps says more about the aspirations of human beings than about the intelligence of machines.
