“AI has already overtaken us”: the experts’ new document contains the proposals to stem it

by admin
“Managing AI Risks in an Era of Rapid Progress.” This is the title of the scientific paper signed by two “fathers” of modern artificial intelligence, Geoffrey Hinton and Yoshua Bengio, and twenty-two other academics, including the writer Yuval Noah Harari and the Nobel laureate in Economics Daniel Kahneman, author of the bestselling book “Thinking, Fast and Slow”.

Compared to the apocalyptic alarms of recent months, which focused on the risks of an uncontrolled development of artificial intelligence, the scholars have gone a step further, proposing concrete solutions to prevent AI from turning into a fearsome threat.

The most important proposals concern the companies that develop artificial intelligence and the governments that will have to oversee existing and future AI systems.

The paper calls on companies such as OpenAI, creator of ChatGPT and DALL-E 3, to “devote at least a third of their AI research and development budget to ensuring safety and ethical use.” Governments, meanwhile, are asked among other things to “urgently require the registration of [AI] models, the protection of whistleblowers, the reporting of incidents, and oversight of model development and supercomputer use”.

Particular attention is paid to the mechanisms that regulate AI. As a recurring joke among researchers goes, a company like OpenAI is “open” in name only. ChatGPT, in the jargon, is a “black box”: no one can look inside it, and therefore no one knows how its “gears” work. This is unlike “open source” AI, whose source code is public and accessible for modification and further development.

The paper signed by Hinton and Bengio, winners of the prestigious Turing Award in 2018, calls on governments to legislate so that they can “have access to advanced AI systems before their deployment to assess potentially dangerous capabilities, such as the ability to autonomously replicate, penetrate computer systems, or spread pandemic pathogens”.

The paper also states that “governments should hold the developers and owners of ‘frontier AI’ – the term used for the most advanced AI – legally accountable for harm caused by their models that could have been reasonably foreseen and prevented.” This proposal, in particular, echoes a view recently expressed by Harari.

Yuval Noah Harari, the historian who conquered the world with his books on the past and future of humanity (from “Sapiens” to “Homo Deus”), is worried about a future in which it will be possible to create, with extreme ease, billions of “fake people”.

Harari went so far as to call for “20 years in prison” for anyone who creates “fake people” using AI.

“If you cannot distinguish a real human being from a fake one,” Harari said during a conference organized in Geneva by the United Nations, “trust will collapse. And with it, the free society. Maybe dictatorships will manage to get by somehow, but not democracies.”

The paper “Managing AI Risks in an Era of Rapid Progress” also calls on governments to “be prepared to license the development of certain artificial intelligence and to suspend development in response to concerning capabilities of AI systems.”

The document often highlights the need to take “urgent” measures. This is because, as its title states, artificial intelligence grows and improves extremely quickly.

“In 2019, GPT-2 could not reliably count to ten,” the signatories of the paper write. “Just four years later, deep learning systems can write software, generate photorealistic images on demand, give advice on intellectual topics, and combine language and image processing to guide robots. As AI developers scale these systems, unexpected abilities and behaviors emerge that are not explicitly programmed. The progress in AI has been rapid and, for many, astonishing.”

“There is no reason to think that AI progress should slow down or stop at the human level. In fact, AI has already surpassed human capabilities in specific areas such as protein folding or strategy games,” the document continues. “Compared to humans, AI systems can act faster, absorb a greater amount of knowledge, and communicate at a much higher bandwidth. Furthermore, they can be scaled to use immense computational resources and can be replicated in millions of copies.”

While Hinton and Bengio continue to raise awareness among the public and governments about AI that could harm people or society, calling for guidelines and laws to prevent catastrophic scenarios, their colleague Yann LeCun – also awarded the Turing Award in 2018 for his fundamental contributions to the development of machine learning – maintains that “artificial intelligence will never be a threat to human beings”.

LeCun, head of AI at Meta (the company that controls Facebook, Instagram, and WhatsApp), said this in an interview with the Financial Times published on October 19, a few days before the release of the paper on managing AI risks signed by Hinton and Bengio.
