
The CEO of OpenAI also affirms it

by admin

Artificial intelligence (AI) has rapidly permeated society, and industry experts are sounding the alarm: the uncontrolled, irresponsible use of AI could, they warn, lead to the extinction of humanity. Prominent figures such as OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei have joined a call to confront this danger and mitigate the risks associated with AI. Here is what their concerns are and why they argue AI safety should become a global priority.

The risks of Artificial Intelligence

Among the experts raising the alarm is Sam Altman, CEO of OpenAI. According to him and other experts, the danger lies not so much in a superintelligence dominating humanity as in the consequences of the irresponsible use of algorithms in work and in daily life. One of the main risks concerns interference with the spread of news and the manipulation of public opinion: AI can harm humanity by powering channels of disinformation.

The signatories of the appeal underline the importance of preparing for these emerging risks. The widespread adoption of AI is driving a revolution across many sectors, but it also poses serious problems: AI now permeates social, economic, financial, political, educational and ethical life. Experts agree that these situations must be managed and that steps must be taken to address the challenges AI presents, from the production of fake news to the control of autonomous cars.

The call for global priority

Altman, Hassabis and Amodei recently met with US President Joe Biden and Vice President Kamala Harris to discuss artificial intelligence. After the meeting, Altman testified before the Senate, warning that the risks associated with advanced AI are serious enough to warrant government intervention, and arguing that they require precise regulation to prevent harm. The experts have not only warned of the technology’s dangers, however, but have also proposed concrete measures for the responsible management of advanced AI systems.


AI experts warn that the risk of human extinction should be treated as a global priority. Because AI has the potential to significantly affect the fate of humanity, they argue, these risks must be addressed urgently. A short letter published by the Center for AI Safety (CAIS) reads:

“Mitigating the extinction risk posed by artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”

Among the signatories of the letter are Sam Altman, CEO of OpenAI, the company behind ChatGPT, along with Geoffrey Hinton and Yoshua Bengio, both considered pioneers of the field. Hinton, commonly regarded as the “godfather of AI”, now describes the technology as “scary”. Bengio, a professor of computer science at the University of Montreal and one of the leading experts in the sector, has likewise voiced concerns about the risks associated with this technology.


The point of view of Sam Altman, CEO of OpenAI

Sam Altman is the CEO of OpenAI, the company responsible for creating ChatGPT, the famous chatbot that catalyzed public interest in artificial intelligence. Along with Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic, Altman has warned of humanity’s risk of extinction. But what does this statement mean? The European Parliament’s vote on the AI Act, the world’s first regulation of artificial intelligence, will take place from 12 to 15 June. It is striking that just as a major institution prepares to limit the freedom of action and the economic development tied to artificial intelligence, Altman speaks out against his own technology. At first glance, this position seems a paradox. Why this unexpected statement?


Altman’s motivations

There are several possible theories for understanding Altman’s position. One is that calling AI all-powerful is excellent advertising for the industry as a whole: the AI sector is booming, and limiting it seems a difficult, if not impossible, challenge. Another explanation could be economic. The data show that the race for artificial intelligence mainly involves two nations, China and the United States, with private investments of $13.4 billion and $47 billion respectively. Altman’s move could be aimed at curbing dangerous competition and limiting the reach of AI, at least in Europe and the United States. In short, complex power games lie behind such a statement. An estimated $800 billion is expected to be invested in AI in the coming years, generating an estimated value of around $6 trillion.

The responsible management of Artificial Intelligence

Experts propose several strategies for managing AI responsibly. They stress the need for cooperation between players in the industry and for more research into language models. They also suggest creating an international AI safety organization similar to the International Atomic Energy Agency (IAEA). Some further argue for laws requiring the creators of advanced AI models to register and obtain a government-regulated license.

The rapid spread of generative AI, driven by the proliferation of chatbots such as ChatGPT, has prompted many calls to assess the implications of developing such tools. Among these, an open letter signed last March by Elon Musk, among others, called for a six-month pause in the development of models more powerful than OpenAI’s GPT-4, so that the time could be spent developing shared security protocols for advanced artificial intelligence. As the letter states,

“Powerful AI systems should only be developed when there is confidence that their effects will be positive and their risks manageable.”
