It is just one sentence, but hundreds of the world's leading artificial intelligence experts stand behind it: the threat posed by AI should be taken as seriously as that of pandemics or nuclear war.
Fake photos and films have existed almost as long as the media themselves. Until now, however, producing them required considerable expertise. Programs such as Midjourney, tools from OpenAI, and well-known software such as Photoshop, now equipped with artificial intelligence (AI), enable anyone with minimal knowledge to produce fakes quickly and cheaply.
Experts are concerned not only about the great danger of public manipulation – for example in the upcoming US election campaign. The whole world is at stake. Hundreds of experts came to this conclusion, expressing their concerns in just a few lines in an open letter on Tuesday. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the group of leading AI developers, including Sam Altman, head of OpenAI, and Demis Hassabis, head of Google DeepMind. Among the signatories are Geoffrey Hinton and Yoshua Bengio – two of the three so-called “godfathers of AI”, who received the Turing Award in 2018 for their work in the field of deep learning – as well as professors from Harvard to China’s Tsinghua University.
The Center for AI Safety (CAIS) published the statement, which the New York Times was the first to report. It was deliberately kept short so that as many experts as possible could unite behind it.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Center for AI Safety
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” read the introductory lines. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. “The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
CAIS directed criticism at the Facebook parent company Meta, where the third godfather of AI, Yann LeCun, works, because no Meta representatives are among the signatories of the warning letter. “We asked a lot of Meta employees to sign,” said CAIS Director Dan Hendrycks. Meta did not respond to inquiries from the Reuters news agency.
The name Elon Musk is also missing from the long list of signatories. But that may yet change. “We have issued an invitation (to Musk), which we hope he will sign later this week,” Hendrycks said.
A group of AI experts and industry leaders first pointed out potential risks to society in April. Recent developments in artificial intelligence have produced tools that proponents say can be used for applications ranging from medical diagnostics to drafting legal briefs.
Not the first warning of this kind
The new warning comes two months after the nonprofit Future of Life Institute (FLI) published a similar open letter, signed by Musk and hundreds of others, calling for an urgent pause in advanced AI research until the risks to humanity could be addressed.
“Our letter mainstreamed pausing, this letter mainstreams extinction,” said FLI President Max Tegmark, who also signed the more recent letter. “Now a constructive, open discussion can finally begin.”
AI pioneer Hinton had previously said that AI could pose a “more urgent” threat to humanity than climate change.
Controversy over regulation by the EU
Last week, however, OpenAI boss Sam Altman described the EU’s planned AI Act – the first major attempt to regulate AI – as over-regulation and threatened to pull out of Europe. After criticism from politicians, he backtracked within a few days.
Altman has become something of the face of AI since his company’s chatbot ChatGPT took the world by storm. European Commission President Ursula von der Leyen is set to meet Altman on Thursday, and EU industry chief Thierry Breton will meet him in San Francisco next month.
>> To the website of the Center for AI Safety