ChatGPT developers warn of human extinction caused by AI
As of: 05:51 | Reading time: 2 minutes
Warns of the dangers of artificial intelligence: OpenAI co-founder Ilya Sutskever (r.)
Source: REUTERS
Artificial intelligence is developing at breakneck speed. The leaders of ChatGPT provider OpenAI warn: The enormous power of “superintelligence” could lead to human extinction. A research team is to identify control options.
ChatGPT provider OpenAI plans to assemble a new research team and invest significant resources to ensure the safety of artificial intelligence (AI) for humankind. “The tremendous power of ‘superintelligence’ could lead to human disempowerment or even human extinction,” wrote OpenAI co-founder Ilya Sutskever and Jan Leike, head of OpenAI’s alignment team, in a blog post. “Currently, we have no way of directing or controlling a potentially super-intelligent AI and preventing it from going its own way.”
The blog post’s authors predicted that super-intelligent AI systems, smarter than humans, could become a reality within this decade. Humans, they argued, will need better techniques than those currently available to control such AI.
OpenAI will therefore dedicate 20 percent of its computing power over the next four years to working out a solution to this problem. The company is also in the process of putting together a new research group, called the “Superalignment Team,” for this purpose.
The potential dangers of artificial intelligence are currently being intensively discussed by researchers and the general public. The EU has already agreed on a draft law for regulation. In the USA, a classification of AI applications into risk classes, based on the EU model, is being discussed, on which the requirements for companies depend.
In March, a group of AI industry leaders and experts signed an open letter citing potential risks of AI to society and calling for a six-month pause in developing systems more powerful than OpenAI’s GPT-4.