
Altman, Pichai, Harari: hallucinations and human irresponsibility

by admin

For weeks we have been witnessing the exponential growth of public discourse on generative artificial intelligence, linked above all to the developments of the Californian company OpenAI. Friends are asking me for the first time in years what I think of these developments and what implications they will have for their work and our daily lives. Many people are starting to get scared: not that GPT-3 or GPT-4 keeps them up at night, but they seem to rely on the few headlines circulating in the mainstream media, all centered on catastrophism, which is slowly convincing them that something terrible, which they do not understand, is happening. The leaders of this narrative are the managers and CEOs of the US companies that develop most artificial intelligence models, locked in an unbridled competition concentrated in their own hands. Those we have until now called “optimists” today confide to us their fears about the destructive potential of their creations.

Sam Altman, CEO of OpenAI: “we are a little scared, there is real damage to society”; Elon Musk argues that we are heading towards the destruction of human civilization; Sundar Pichai, CEO of Google: “artificial intelligence is dangerous, society must adapt”; and again Geoffrey Hinton, who conceptualized deep learning in the 1980s and has now quit Google so he can speak freely about how dangerous AI is.

These statements have, if nothing else, a commercial interest in common. Another recent example is the historian Yuval Noah Harari, who through the Telegraph and the Economist communicates that “AI could spell the end of our species”. Harari has attracted many fans over the years thanks to his books, such as Sapiens, and now uses that popularity to warn humanity of a potential “AI-powered refrigerator that may have enough data about you to hack into your brain and know you better than your husband.”

Shouldn’t we ask ourselves how much of these narratives is marketing, personal or corporate, meant to keep up the hype around these products and increase investment? In this scenario, by leaving the whole task of talking to society about the risks of these models (risks that are much more concrete and current than these post-apocalyptic scenarios) to those who develop them, we are missing a significant piece: people.

This is my great fear: that we do not bother in the least to make these innovations understandable, open to democratic debate and to society’s control. I use the plural to refer above all to Italy, where we share and repost headlines without contextualizing them and without pointing out that there is more to these companies’ narrative, as Donata Columbro has perfectly pointed out, but also without ever bringing these themes into Parliament. People, society, users: those who use these technologies, who play with them, who use them to write texts or university exams, to do research or generate images, struggle to understand the real impact of what is happening. The most widespread discourse right now, the only one that is really popular, tells them that humanity will end because technology will take over. Nothing that gives them the understanding, the tools to exploit these technologies to increase their own capabilities, instead of terrifying them with the idea that those capabilities will be made redundant; no education in schools, no reminder that any technology (at least for now) is nothing more than a social, human construction, under our control and our responsibility. How much will this enormous irresponsibility cost us in the coming years, on entire generations (including my peers) who risk growing up with the idea of no longer being needed, that everything is inevitable and beyond our control?

Let’s take another example. This morning, May 10, I opened LinkedIn and Twitter, where my entire bubble was commenting on the latest internal research released by OpenAI: “Language models can explain neurons in language models”. I read dozens of comments like “it’s the end of humanity”, “the singularity is near”, “here’s alignment”, as if overnight we had come significantly closer to the moment when technology will imitate the human brain so well that we will no longer be able to tell them apart. The research in question essentially describes an internal experiment in which GPT-4 was asked to explain the behavior of the “neurons” (the nodes through which data and calculations flow) of GPT-2. Since it is still very difficult for humans to explain how an AI model arrives at an output, the OpenAI scientists decided to ask the AI itself directly. This intuition is very interesting and could actually lead to significant advances in the research field of Explainable AI (XAI). However, the same company writes that, at the moment, “the vast majority of explanations get a poor score” and that “GPT-4 provides worse explanations than human ones”, concluding that out of about 300,000 neurons analyzed only around 1,000 were well explained, and even those were uninteresting. At this point, despite all the hype I read this morning, it would be appropriate to ask about the scientific relevance of this in-house research: would this experiment have passed peer review? Is it significant enough? Can we really talk about alignment, singularity, machines gaining the upper hand, and above all, should we talk about it like this?
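To make the experiment less abstract, here is a minimal sketch of the “explain, simulate, score” loop the OpenAI write-up describes: one model proposes an explanation for a neuron from its activations, then tries to predict those activations from the explanation alone, and the explanation is scored by how well the two match. The query_llm helper, the prompts and the 0–10 scale are illustrative assumptions of mine, not OpenAI’s actual code.

```python
# Minimal sketch of the explain-simulate-score loop described by OpenAI.
# query_llm() is a hypothetical stand-in for any chat-model API call.
import numpy as np

def query_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM, return its reply."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def explain_neuron(tokens: list[str], activations: list[float]) -> str:
    # Step 1: show the explainer model token/activation pairs and ask for
    # a short natural-language hypothesis about what the neuron detects.
    pairs = ", ".join(f"{t}:{a:.2f}" for t, a in zip(tokens, activations))
    return query_llm(
        "Here are tokens and one neuron's activation on each:\n"
        f"{pairs}\n"
        "In one sentence, what pattern does this neuron respond to?"
    )

def simulate_activations(tokens: list[str], explanation: str) -> list[float]:
    # Step 2: from the explanation alone, ask the model to predict how
    # strongly the neuron should fire on each token (0-10 scale).
    reply = query_llm(
        f"A neuron is described as: {explanation}\n"
        "For each token below, output a 0-10 prediction of its "
        "activation, comma-separated:\n" + " ".join(tokens)
    )
    return [float(x) for x in reply.split(",")]

def score_explanation(real: list[float], simulated: list[float]) -> float:
    # Step 3: score the explanation by how well the simulated activations
    # correlate with the real ones (1.0 = perfect, ~0 = uninformative).
    return float(np.corrcoef(real, simulated)[0, 1])
```

The design choice worth noticing is step 3: an explanation counts as good only insofar as it lets you predict the neuron’s real activations, which is exactly the measure on which, by the company’s own admission, the vast majority of explanations score poorly.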

My answer is: absolutely not. We are giving enormous authority to language models, treating them as if they were explaining to us how they themselves work, as if they were really capable, in Harari’s words, of hacking our brains and getting to know us better than our spouses. But none of these technologies will wake up one morning, as in a dystopian novel, and decide to ruin our lives, hand our work to someone else, or tell us about its feelings, unless a human being asks it to. As Naomi Klein writes in the Guardian, “in the world of artificial intelligence there are distorted hallucinations going on, but it’s not the bots who have them”; it’s the CEOs who created them, who claim to see things that, for the moment, are not there at all.


We keep focusing on machines instead of people: those who use these technologies, who have to understand and implement them, but also those who develop and design them. Language models will not exhibit human behavior unless we program them to do so. The point is not to figure out how much AI can take over, but to decide how much space to give to human control, which, at least for the moment, remains essential and decisive. Various lines of research, such as the field of “future studies”, have shown that exposure to negative images of the future makes individuals feel unnecessarily worried and threatened. This does not mean we should not discuss the risks associated with new technologies when they go ungoverned (far from it: I have been doing so every day for years), because that is part of debating the phenomenon publicly and understanding it; but we should avoid feeding a narrative that distances people from technology by presenting it as inevitable, instead of asking ourselves how to distribute responsibilities, what is right to do, and where to stop. This is a huge human responsibility.
