piqd | AI Doomerism in a Reality Check

by admin

Blake Richards (neuroscientist), Blaise Agüera y Arcas (Google Research), Guillaume Lajoie (mathematician), and Dhanya Sridhar (computer scientist) subject the heavily over-publicized debate about an alleged annihilation of humanity by "rogue AI" to a reality check. They conclude that the discussion largely ignores the facts and obscures the view of actual, more subtle dangers that are already at work today.

The scholars emphasize that the basic assumptions and terminology of the debate themselves prevent an informed discussion. The term "superintelligence," introduced by the philosopher Nick Bostrom in his book of the same name, is misleading for the simple reason that, while we have plenty of theories about what intelligence is and how it works, we have no real consensus on it, quite apart from the fact that intelligence has always been difficult to quantify and that it comes in different forms.

Even the oft-heard argument that a far more intelligent "species" such as AI would automatically displace and ultimately annihilate mankind does not stand up to closer scrutiny. The researchers find examples of species being wiped out by humans, but none of a species being wiped out by another, more intelligent species. There are even cases where the emergence of a less intelligent species led to the extinction of more cognitively capable ones, for instance when the progressive evolution of flora drove animal species extinct. Moreover, the evolutionary modus operandi governing the coexistence of the vast majority of species is not a relationship of dominance and submission but interdependence, i.e. mutual dependence.

A complete annihilation of humanity would require the full automation of hundreds of processes in the economic cycle: from the extraction of metals in mines, through their trade, treatment, and processing, to the construction of reactors, chip factories, and data centers, and the laying of undersea cables for digital communication. It is extremely unlikely that we would fully automate these processes without building in human decision points. Even the digital-twin factories that already exist today, i.e. factories that are mirrored and controlled in digital simulations, allow human decision points to be integrated. A rogue AI could simply be stopped in hundreds of places. Any future AI regulation will mandate such safety measures, just as fire codes exist today.

The article is a welcome cold shower for the overheated AI doomers, above all Eliezer Yudkowsky, who in my opinion have not put forward a single truly convincing argument for their position that an alleged superintelligence would wipe out humanity "in an instant."
