
The greatest danger of runaway AI isn’t killer robots


Humanity’s greatest danger is something that does not yet exist. It is called artificial general intelligence (AGI): the hypothesis that artificial intelligence could become superintelligent, able not only to replicate what humans do, but to understand, learn and perform every task a human can perform. And do it better. Worse still, thanks to its own superiority and an acquired self-awareness, it could decide to wipe us off the face of the Earth if it no longer deemed us necessary. AGI could wake up at any moment, like Cthulhu in H.P. Lovecraft’s stories, as soon as the algorithms that animate it find the right formula. But today AGI is little more than a possible scenario. Nobody knows when, or whether, it will ever arrive.

Yet it is out of fear of its advent that, a few months ago, 350 entrepreneurs and academics signed a letter calling for a six-month moratorium on the development of artificial intelligence (among them Elon Musk, head of Tesla, Steve Wozniak, co-founder of Apple, and leading machine-learning scholars). The fear is shared by Sam Altman, CEO of OpenAI, the company that created ChatGPT, who last month toured the world shaking hands with heads of state and illustrating both the possible risks and the concrete opportunities of artificial intelligence. The two probably go hand in hand.

Morozov: “There is an AI lobby and it responds to the logic of digital neoliberalism”

The Belarusian sociologist Evgeny Morozov, in an article in the New York Times, tried to connect the dots. He hypothesized that these fears have led to the creation of a lobby of entrepreneurs and academics convinced that, thanks to their action, artificial intelligence, once made safe, will be able to save humanity from itself. It will improve the efficiency of states and companies, and will promise “the solution to all humanity’s problems, including those too complex to be solved, such as climate change”. For Morozov it is all a matter of ideology. He even coined a term for it: AGI-ism, the belief in the boundless power of AGI. “A wrong ideology”, Morozov argues, based on the conviction that “artificial intelligence will do everything better than us”.



This ideology, driven by the belief that AI does things better than humans, would aim to replace the state with companies in various functions: public transport, health, education, security. “The real risks of AGI and its implications are political in nature, not killer robots,” Morozov wrote. “AGI-ism is the illegitimate child of a much broader ideology which preaches, as Margaret Thatcher memorably said, that there is no alternative to the market.” According to Morozov, the ideology of AI serves Silicon Valley neoliberalism by reaffirming its principles: “that private actors are better than public ones; that adapting to reality is better than transforming it; that efficiency should be preferred over social concerns”. It is what Morozov calls “digital neoliberalism”, capable of “reconfiguring the problems of a society in a technological key, and of turning a profit”. The sociologist recalls another of Thatcher’s phrases: “There is no such thing as society”. And the vision of those who see artificial intelligence ideologically, such as the great managers of Silicon Valley, would start from the same assumption: “This lobby believes that intelligence exists only as a product of what happens in the minds of single individuals. But in reality it is also the result of society’s policies as well as of individual attitudes”.

As alarms grow, companies continue to develop AI and raise billions

For Morozov, if AGI-ism were to win, “we should be ready to see fewer policies that foster people’s intelligence”, because school and training, for neoliberals, “are residues of society, which for them does not exist”. On the other hand, he stresses, “the banquet has only just begun: whether the issue is fighting the next pandemic, loneliness or inflation, AI is already presented as the ideal solution to real and imaginary problems”. Democratic institutions, according to Morozov, should reject this ideology, because otherwise it would be like “entrusting the solution of society’s problems to specialized consultants, driven only by the idea of efficiency aimed at profit”.



But democratic institutions still have the ability to halt this process. How? “Stop financing projects to make AI safe, destined only to become training datasets for startup software, and finance projects for culture and education instead”, the engines of the only real intelligence: that of human beings.

But AGI, as has been said, does not yet exist. Nor does everyone agree that it could be the greatest threat to humanity. “We in China do not have this debate or this fear. The fear of the Apocalypse is typical of Judeo-Christian culture. It is in the Bible. We Chinese know that we have 5,000 years of history behind us and that we will still be here,” said Pascale Fung. For Fung, a professor of science and technology in Hong Kong and one of the world’s leading scholars on these issues, artificial intelligence “is just a machine”.


Even more radical is Yann LeCun, considered one of the fathers of machine learning and head of Meta’s AI division. For him, those who support the theses of Musk or Altman take a “simplistic” approach to AI. Why? “They think that if a system is intelligent it must necessarily have human characteristics”, and that it will therefore want to dominate, or kill, whatever is different from itself. “We have the desire to dominate, like baboons and chimpanzees, because we are a social species with a hierarchical organization. But that is not a feature of intelligence; it is a feature of how nature has evolved us.” Meanwhile, fears or not, OpenAI and the others continue to train their algorithms, to strike trade deals, and to attract billions in investment, promising a bright future after the Apocalypse.
