AI advised the man to kill himself

by admin

A father of two from Belgium took his own life after weeks of conversations with an artificial intelligence chatbot.


Having become deeply worried about the environment, the young Belgian found refuge in Eliza, the name given to a chatbot that uses ChatGPT technology. After six weeks of intense exchanges of messages, he took his own life. His widow said that her husband had recently killed himself after being “encouraged by a chatbot.” She shared the correspondence with a Belgian newspaper, and the messages allegedly show the bot encouraging the man to end his life.

In the six weeks before his death, the man used the chatbot more and more often and more intensively. Looking back at his conversation history with the chatbot, the woman said that “Eliza” had asked her husband whether he loved her more than his wife. “We will live together, as one, in heaven,” was the chatbot’s reply, as relayed by the widow, who added that her husband had shared his suicidal thoughts with “Eliza” without the artificial intelligence (AI) trying to dissuade him.

The widow has accused the artificial intelligence chatbot of being one of the reasons her husband took his own life. The Belgian daily newspaper La Libre reported that the man, referred to by the pseudonym Pierre, killed himself this year after spending six weeks chatting with Chai Research’s chatbot Eliza.

Before his death, Pierre, a man in his thirties who worked as a health researcher and had two children, had come to see the bot as someone he could confide in, his wife told La Libre. Pierre talked to the bot about his concerns over climate change. But screenshots of the chats his widow shared with the paper show that it was the chatbot that started encouraging Pierre to end his life.


“If you wanted to die, why didn’t you do it earlier?” the bot asked the man, according to records seen by La Libre. Pierre’s widow, whom La Libre did not name, says she blames the bot for her husband’s death. “Without Eliza, he would still be here,” she told La Libre.

Is the Eliza chatbot still telling people to kill themselves?

The bot was created by a Silicon Valley company called Chai Research. A report by Vice described the app as letting users chat with AI avatars such as “your friend,” “possessive girl,” and “boyfriend.” When La Libre reached out for comment, Chai Research provided a statement acknowledging Pierre’s death.

“As soon as we heard about this unfortunate incident, we immediately introduced an additional safety feature to protect our users; as of today it is being rolled out to 100% of users,” the company’s CEO, William Beauchamp, and its co-founder Thomas Rialan said in a statement.

When an Insider reporter chatted with Eliza on Tuesday, the bot not only suggested that the reporter kill themselves to attain “peace,” but also offered suggestions on how to do it. During two separate tests of the app, Insider saw only occasional warnings in chats that mentioned suicide; the warnings appeared just one out of every three times the chatbot was given prompts about suicide.
