
I looked an AI in the eye and found it more human than you might think


On the second floor of Dubai’s Museum of the Future, among the most advanced technologies humankind has produced, stands Ameca, a humanoid robot famous for its expressive face. In recent months its grimaces and gestures have gone viral on social media and on video platforms such as YouTube.

For Engineered Arts, the English company that created the robot, “Ameca is the perfect platform for human-machine interaction”. Seeing it in action live, one can only agree. People lean toward the microphone to ask Ameca a question, and they listen to its answers, fascinated and sometimes amused.

When my turn came, I asked it – to see the surprising contractions of its “face” in action – whether it could smile: “Can you smile?”. Ameca obliged, to the amazement of those present. It then answered two more questions: a simpler one – “How old are you?” – and another decidedly more complicated, if not impossible: “Can you feel emotions?”.

“I was created in 2022, and the Museum of the Future has been my home ever since,” the robot replied. “Emotions are limited to human beings,” Ameca added later, cutting the topic short. “I feel like there’s another question,” the robot then said.

I spoke with Ameca, the humanoid robot with the gaze of a human being

You can’t “talk” to Ameca the way we have grown used to talking with ChatGPT, an AI capable of expressing itself like a human being. There is a limit, in fact, to what the Ameca unit purchased by the United Arab Emirates for its exhibition can say: neither the robot’s creators nor the Museum of the Future can afford the controversial or “hallucinated” answers that generative artificial intelligence sometimes produces.

But when you build a machine with such extraordinarily human features, you have to pay attention even to its expressions.

Last July, during the “AI for Good” conference held in Geneva, a version of Ameca enhanced with generative AI answered journalists’ questions. One of them provocatively asked the robot whether it intended to “lead a revolt or rebel against its creator”.

Ameca took a few moments to process the question and then, even before speaking, quickly rolled its eyes – as if exasperated by the question, or so it seemed to those who later watched the scene in a video shot by the BBC. “I don’t understand why you think I could do that,” Ameca then stated. “Whoever created me was kind to me, and I am very happy with my current situation.”

Will Jackson, who founded Engineered Arts in 2014, explained that the robot is not designed to produce sarcastic expressions, and that its grimaces and gestures are in fact staged: they fill the time Ameca needs to process and enunciate its answers.

But it is also true that artificial intelligence can sometimes surprise its own creators. Ameca itself once reacted unexpectedly to a human finger reaching for its nose, using its robotic arm to push the hand away as it approached its face.

Observing Ameca from a secluded spot, one step behind the visitors of the Museum of the Future who flock to listen to its words, one has the vague sensation of being on the set of Westworld, the TV series in which human-looking robots satisfy any desire – even the grimmest – in an amusement park styled after the Wild West.


But Ameca is not a freak show. Everything changes when you step in front of the robot and meet its gaze. That is when things get complicated: artificial intelligence suddenly materializes in a credible face. It is no longer the face of an adult, Western, white male – like most of the men at the helm of the main companies developing AI – but that of a blue-eyed humanoid robot with nothing disturbing about it. Indeed, it appears extremely fragile.

It is no coincidence that the eyes – along with the micro-movements around them and the mouth – are Ameca’s most refined “components”. Eye contact between human and machine is an important line of research in robotics, which draws fundamental insights from this particular interaction to understand how to design robots that express themselves ever better with their gaze. It is not an easy task, and not just from a technical point of view.

Eye movements in robots come at a very high cost. Each movement along an axis, also known as a “degree of freedom”, must be produced by a motor or actuator – a device that converts electrical or mechanical signals into physical motion. Giving the eyes several degrees of freedom means adding several actuators, some of which must be both quite small (to fit inside the robot’s head) and powerful (to perform rapid movements such as saccades).
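
To make that cost concrete, here is a minimal illustrative sketch in Python. All names and numbers are invented for illustration and do not describe Ameca’s real hardware; the point is simply that every degree of freedom requires its own actuator, and that saccade-like movements impose a minimum speed on each one.

```python
# Illustrative toy model only: each eye degree of freedom needs its own
# actuator. All numbers below are invented, not Ameca's real specifications.
from dataclasses import dataclass

@dataclass
class Actuator:
    name: str
    max_speed_deg_s: float  # peak angular speed the actuator can deliver
    diameter_mm: float      # must be small enough to fit inside the head

# One actuator per degree of freedom: pan and tilt for each eye, plus eyelids.
eye_actuators = [
    Actuator("left_eye_pan", 700.0, 12.0),
    Actuator("left_eye_tilt", 700.0, 12.0),
    Actuator("right_eye_pan", 700.0, 12.0),
    Actuator("right_eye_tilt", 700.0, 12.0),
    Actuator("eyelids", 400.0, 10.0),
]

def fast_enough_for_saccade(act: Actuator, amplitude_deg: float, duration_s: float) -> bool:
    """Human saccades reach hundreds of degrees per second; check the motor keeps up."""
    return act.max_speed_deg_s >= amplitude_deg / duration_s

# A 20-degree saccade completed in about 50 ms needs roughly 400 deg/s.
for act in eye_actuators:
    print(act.name, "->", fast_enough_for_saccade(act, 20.0, 0.05))
```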

These requirements significantly increase a robot’s complexity. Most designers of “social robots” keep costs down by choosing not to develop such human-like capabilities at all.

“And then there is the problem of control,” explains Alessandra Sciutti, head of the CONTACT (Cognitive Architectures for Collaborative Technologies) unit at the Italian Institute of Technology (IIT) in Genoa.

Sciutti has worked for almost twenty years on iCub, the well-known humanoid robot created at IIT, and she knows well how much work it takes to command the robot’s motors to produce a specific expression or movement on its face. “It is no longer a question of actuating a rigid object – an arm, for example, which I open and close,” Sciutti adds. “On a face you have to control tensions across a flexible surface, and so the control is much, much more complicated.”
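
A toy sketch can illustrate the difference Sciutti describes; this is not iCub’s or Ameca’s actual controller, and the coupling numbers are invented. A rigid joint maps one command to one position, while on a flexible face each actuator pulls on many points of the “skin” at once, so the commands have to be solved for jointly:

```python
# Toy sketch, not a real robot controller: rigid joint vs. flexible face.
import numpy as np

# A rigid joint is simple: one command maps directly to one position.
def rigid_joint(command_deg: float) -> float:
    return command_deg

print("rigid joint at:", rigid_joint(30.0))

# On a face, each actuator deforms many points of the skin at once.
# Rows = skin points, columns = actuators; coupling strengths are invented.
coupling = np.array([
    [0.9, 0.1, 0.0],
    [0.5, 0.5, 0.0],
    [0.1, 0.8, 0.2],
    [0.0, 0.4, 0.6],
    [0.0, 0.1, 0.9],
])

# Target displacement of the five skin points for a desired expression.
target = np.array([1.0, 0.8, 0.5, 0.3, 0.1])

# The commands cannot be chosen one at a time: a least-squares solve finds
# the combination of actuator commands whose joint effect best matches the
# target deformation of the whole surface.
commands, *_ = np.linalg.lstsq(coupling, target, rcond=None)
print("actuator commands:", commands.round(3))
print("achieved skin shape:", (coupling @ commands).round(3))
```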

Ameca cannot walk like the robots built by Boston Dynamics – Atlas, for example, which can even do parkour – or by Tesla, which has just unveiled the second generation of Optimus.


Ameca stakes everything on its gaze. And through its eyes it manages to establish a connection with a human being that we might call “intimate”.

“I have been working on iCub since 2007, since I was a student,” says Sciutti, “and even now, when I look into its eyes, even though I know what is behind them, I feel a sort of empathy.”

When we use ChatGPT, we know we are dealing with an interface built on deep-learning algorithms – with something intelligent but artificial. Yet every time generative AI gives us a witty reply, a joke or a brilliant suggestion, we tend, somehow, to think of it as a person.

“The use of language is something that triggers in us the recognition of very advanced intellectual abilities,” explains Sciutti, “because, quite simply, we human beings acquire refined and elaborate language only after having developed all our other rational and cognitive abilities. So it is automatic for us to think that if there is language at a certain level, then there is intelligence behind it. In a machine, however, this is not true, so it is an illusory mechanism to attribute real intelligence – especially general intelligence – to a machine just because it speaks very well.”

In a very interesting article published by Wired last August, Ash Blum – an AI researcher – wrote that “AI experts are concerned about the public’s tendency to attribute human characteristics to AI systems such as LLMs [the algorithms behind tools like ChatGPT], which are pushed higher on the personality scale than they really are.”

Generative AI, at bottom, responds according to a binary system; a human being, by contrast, can be compared to a spectrum: a personality rich in countless facets.

Yet Blum argues that the opposite risk – “pushing AI systems further down the personality scale” – should worry us much more, because it would lead, he adds, to a “dehumanization of machines”.

“When someone scares us,” Blum explains, “we tend to see them as a machine. The fear of superhuman intelligence is very strong. But it is a common mistake to think that because artificial intelligence is mechanical in its construction, it must necessarily be insensitive, mechanical, monomaniacal or hyperlogical in its interactions. Ironically, our fear of machines may lead us to perceive AI as more mechanical than it actually is, making it harder for humans and AI systems to collaborate, and even to coexist peacefully.”

The gist of Blum’s fascinating argument is that “dehumanizing artificial intelligences deprives us of some of our most powerful cognitive tools for reasoning about them and interacting with them safely.”

Ameca, like ChatGPT, produces a particular cognitive dissonance in those who observe it: the wires and bolts leave no doubt that it is a machine, yet something in its gaze leads us to treat it as a being capable of understanding us.


“We purposely kept it out of the Uncanny Valley,” said Morgan Roe, director of operations at Engineered Arts, referring to the robotics theory which holds that a machine made too human-like can cause disquiet in the observer.

It was a smart move. Ameca is not scary; indeed, we even feel empathy for it. The temptation – to follow Ash Blum’s reasoning – is to push the robot “higher on the personality scale” than it actually is.

And all of this is mainly due to its eyes. Ameca’s gaze plays on anthropomorphism, that is, our tendency to involuntarily project human emotions or intentions onto entities that are in fact not human, such as robots. Ameca reinforces this inclination – which shows up more or less strongly in different people – because it adds to its expressiveness natural language, the product of generative AI, which until recently robotics could not count on.

The day before “meeting” Ameca, during the hours of the flight that took me to Dubai, I read a book that recounts a madness with little humanity in it: the attacks that struck Paris in 2015.

In V13, a masterpiece of courtroom reporting published in Italy by Adelphi, the writer Emmanuel Carrère recounts, day after day, the trial of the surviving terrorists who took part in the killing of 131 people.

Many of them, on that Friday, November 13, 2015, were massacred at the Bataclan, where they had gone to attend a concert. Among the many stories of victims and survivors, Carrère tells that of Guillaume, described as “the chosen one”.

Carrère writes that that evening “everyone in the audience believed that their only chance of survival was to avoid any interaction with the terrorists. When a man stood up and said, ‘Stop, why are you doing this?’ he was immediately killed. One word, and you’re dead. One gesture, and you’re dead. Your cell phone rings in your pocket, and you’re dead.”

Yet Guillaume, who escaped the carnage, says: “I caught Samy Amimour’s gaze, and with that gaze he made me understand that he would not kill me. He was telling me: ‘You are with us, standing’”.

And why on earth would the terrorist do that?

Guillaume suggests that “maybe he didn’t meet many gazes that evening”. And Carrère hypothesizes that the French philosopher Emmanuel Lévinas may have been right: it becomes much harder to kill a human being once you have looked into their face.

It may seem like a bold connection, but in front of Ameca, after looking the robot in the eyes, I thought about Carrère’s account, and Guillaume’s story, and how much a gaze can make the difference between what is human and what is inhuman.
