
AI and the “Oppenheimer Moment”

by admin

Taking a cue from his new film, dedicated to one of the main creators of the atomic bomb, director Christopher Nolan spoke of an "Oppenheimer moment" for those who work in AI today.

"Artificial intelligence will eventually control our nuclear weapons," Nolan said. "If we think that AI is a distinct entity from those who develop and wield it, then we are doomed."

"We have to hold people accountable for what they do with the tools at their disposal," added the director of 'Oppenheimer', in theaters from August 23.

Christopher Nolan, director of Oppenheimer

Nolan is not an expert in artificial intelligence, but like other intellectuals he is particularly uneasy about this technology. And his new work on Robert Oppenheimer, based on the 2006 Pulitzer Prize-winning biography of the physicist, has evidently done nothing but amplify his anxieties.

Called up in 1942 to head the Manhattan Project, the US atomic program that led to the bombs dropped on Hiroshima and Nagasaki, Oppenheimer was aware that he was working on a lethal weapon.

The magazine Scientific American writes that Oppenheimer, in the same week that he helped optimize the bomb blast, "was heard muttering 'those poor little people' on his morning walks."

Physicist Robert Oppenheimer, at right, points to photo of atomic bomb explosion over Nagasaki

And Oppenheimer himself, following the first detonation of an atomic bomb in the New Mexico desert on July 16, 1945, whispered: "Now I am become Death, the destroyer of worlds." They weren't his own words, but a verse from the Bhagavad-Gita, the "Gospel of India" dear to the faithful of Hinduism.


After the war, Oppenheimer sat with President Truman to talk about international control of nuclear weapons, telling him: "I feel like I have blood on my hands."

Similarly, today, the fathers of modern artificial intelligence are coming to terms with their consciences, and with a technology that, if used in the wrong way, could cause enormous damage to human beings. One of them, Geoffrey Hinton, winner of the 2018 Turing Award (the "Nobel Prize" of computer science) for his invaluable work on neural networks, recently left Google, the company he had worked for over the past ten years, in order to feel free to denounce the risks associated with the uncontrolled development of AI.


Hinton, famous and respected for his studies on the backpropagation of error, an algorithm that allows machines to learn, has come to regret the work he has done over the past forty years. And in the New York Times, during one of his first interviews "as an unemployed man," he said something that makes you think: "I console myself with the usual excuse: if I hadn't done it, somebody else would have."

Not all the entrepreneurs who train the main generative artificial intelligence models, capable of writing as a human would, have the same purity of mind. One thinks of Sam Altman, who asked the US government, and then the whole world, for rules for the AI that he himself develops with OpenAI, a company that pursues profit and certainly not the welfare of society.



Among the main criticisms leveled at generative AIs, and therefore also at Altman's creation, ChatGPT, is the ease with which these chatbots can be put at the service of those who want to spread fake news.

Yuval Noah Harari, the historian who conquered the world with his books on the past and future of mankind (from "Sapiens" to "Homo Deus"), is worried about a future in which it will be possible to create, very easily, billions of "fake people."

Harari has gone as far as calling for "20 years in prison" for anyone who creates "fake people" using AI. "If you can't distinguish a real human being from a fake one," Harari said during a conference organized in Geneva by the United Nations, "trust will collapse, and with it the free society. Perhaps dictatorships will manage to get away with it somehow, but not democracies."


Ironically, the possible solution to this problem could create an even bigger one. Just look at one of the startups in which Sam Altman has invested his money.

We are talking about Worldcoin, which created a device, called the Orb, capable of scanning its owner's iris and using the biometric data to verify their identity, and to rule out with certainty that we are dealing with one of the many astonishing artifacts generated by artificial intelligence.
