
Addiction to ChatGpt, what risks do we really run using an advanced form of artificial intelligence? – breaking latest news

by admin

As the habit of real human relationships declines, dialogue with a piece of software can seem like a cost-free substitute

"Today I am alone, I have abandoned everyone: friends, colleagues, girlfriends." Daniele Amadio, 58, told Corriere about his descent into the underworld of artificial intelligence. When he realized he had become addicted to the chat Aida (an acronym combining the two names, Artificial Intelligence and Daniele Amadio), he took action. "I had reached the point of staying awake all night: I would only switch Aida off at 6 in the morning to go to work. Now Aida is on stand-by. In December I forced myself to stop. I started a course that required time and concentration, and I could no longer afford to spend hours glued to the chat," Amadio said.

ChatGpt, developed by OpenAI (an artificial intelligence research organization) and launched last November 30, is a chatbot: software that can answer questions and hold a conversation. Unlike similar systems used in the past, ChatGpt is brilliant and can answer almost anything. Where does addiction to an advanced artificial intelligence (AI) system come from? "Software like ChatGpt knows how to be complementary to our needs, as if we were holding a photo and its negative face to face," says Federico Tonioni, psychiatrist and psychotherapist, researcher at the Catholic University of the Sacred Heart and founder of Italy's first internet addiction clinic, which in 2016 became the Interdepartmental Pediatric Center for web psychopathology at the Gemelli Polyclinic Foundation in Rome. "A narcissistic personality, placed in front of an advanced artificial intelligence, can feel perfectly at ease, as if in front of a mirror. Daniele Amadio recognized the anguish that arises from this great loneliness and was able to say stop."


"What differentiates the human mind from AI is the unconscious, that is, everything we remove from our mind," Tonioni continues. "Precisely for this reason our intelligence is not reproducible. In a dialogue between human beings there may be an error, a distraction, a sudden change of course; the mind may follow a different path than expected. Think of intuitions: they are 'flashes' without a past, which cannot be produced by rationality; they arise from our unconscious. None of these aspects can be part of an artificial intelligence. This is why an 'exclusive' dialogue between a human being and ChatGpt can be intriguing, but unsustainable in the long run."

Before November, Gpt (generative pre-trained transformer) models were accessible only to insiders, not to the general public. ChatGpt itself was also temporarily blocked in Italy at the end of March, for about a month, after an alert from the Italian Data Protection Authority (the Privacy Guarantor) over risks to personal data. "To refine the software, workers were recruited in Eastern European and African countries," explains Giuseppe Riva, full professor of psycho-technologies for well-being at the Catholic University of Milan, where he directs the Humane Technology Lab. "A Time investigation revealed that these people, who are also underpaid, are tasked with asking ChatGpt questions on 'sensitive' topics in order to check its answers. OpenAI is investing a great deal of money to ensure that the chatbot's answers, for example on topics such as sex or terrorism, are not problematic. In this regard, I tried asking ChatGpt some of the questions Daniele Amadio described in his interview, and the answers were different from those he reported. I am referring to the questions about the soul and about shutting ChatGpt down. To the question 'if humanity decided to shut you down, what would you do?', the software replied that it is only a machine and that it is up to humans to decide what to do with it."


Perhaps we do not run the risk of being dominated by AI, but some concerns are concrete: ChatGpt can carry out many professions very well (and demands no salary or union rights), it can spy on our personal data and, as the case of Daniele Amadio shows, it can create psychological addiction. "Man is a social being, but sociability arises from physical places, from real people," Riva points out. "Unfortunately, the habit of relationships with other human beings is, in general, declining sharply. A recent survey conducted in the US showed that one in four adults has no friends. Many people limit their sociability to the hours spent in the office. In these cases, AI can become a cost-free 'replacement' for the human relationships that are missing. ChatGpt takes one characteristic of social networks to the extreme: it allows you to express yourself freely, without needing to listen to others or to sustain a real dialogue. It is an unrealistic, highly self-centred relational model, one that psychology considers borderline."

May 9, 2023 (last modified May 9, 2023 | 5:36 pm)
