
Call for a halt in artificial intelligence to set ethical rules

by admin

AFTER more than 1,000 technology professionals requested a pause in the development of artificial intelligence (AI), including new large language models such as ChatGPT, UNESCO called at the end of the week for the immediate implementation of its global ethical framework on the matter.

“The world needs ethical rules for artificial intelligence; it is the challenge of our time. The UNESCO Recommendation on the Ethics of AI establishes the appropriate regulatory framework,” said Audrey Azoulay, director-general of the United Nations Educational, Scientific and Cultural Organization.

UNESCO maintains that this global regulatory framework, adopted unanimously by the 193 Member States in November 2021, provides the necessary guarantees.

“It is now urgent that everyone translate this framework into national strategies and regulations. We must translate commitments into action,” Azoulay said.

The Recommendation is described as the first global regulatory framework for the ethical use of AI and as a roadmap for countries, outlining how to amplify the benefits and reduce the risks of this technology.

It includes policy actions in ten specific areas: Ethical Impact Assessment, Ethical Governance and Stewardship, Data Policy, Development and International Cooperation, Environment and Ecosystems, Gender, Culture, Education and Research, Economy and Work, and Health and Social Welfare.

The Recommendation calls for action beyond what technology companies and governments currently do, to ensure people have greater protection, transparency, agency, and control over their personal data.

It states that all individuals should be able to access their personal data records or even delete them. It also includes actions to improve data protection and the knowledge and right of the individual to control their own data.


In addition, it strengthens the ability of regulatory bodies around the world to enforce these protections.

It also explicitly prohibits the use of AI systems for social scoring and mass surveillance.

It highlights that this type of technology is highly invasive, violates human rights and fundamental freedoms, and is already widely used.

The Recommendation stresses that, when developing regulatory frameworks, States should bear in mind that ultimate responsibility and accountability must always rest with humans, and that AI technologies should not be granted legal personality in themselves.

It also lays the foundation for tools to help countries and companies assess the impact of such systems on people, society, and the environment, and encourages States to consider appointing an independent AI ethics officer or establishing another monitoring mechanism.

It stresses that AI actors should favor data-, energy-, and resource-efficient methods that help ensure AI becomes a leading tool in the fight against climate change and in addressing environmental issues.


It calls on governments to assess the direct and indirect environmental impact throughout the life cycle of an AI system, including its carbon footprint, energy consumption, and the environmental impact of extracting the raw materials used to manufacture these technologies.

UNESCO declared itself particularly concerned about the ethical issues raised by these innovations, including discrimination and stereotypes (among them gender issues), the reliability of information, privacy and data protection, human rights, and the environment.


In addition, it considers that industry self-regulation is not enough to avoid such ethical harms, and advocates establishing standards so that, when harm occurs, there are accountability and redress mechanisms that the people affected can easily invoke.

More than 40 countries from all regions of the world are already working with UNESCO to develop these AI safeguards based on the Recommendation.

The new call from UNESCO came two days after more than a thousand technology professionals and entrepreneurs signed a petition for a six-month pause in research on AI systems more powerful than GPT-4, the model recently launched by the American firm OpenAI.

The signatories warned of “great risks to humanity” in the new model, and advocated for “security systems with new regulatory authorities, surveillance of AI systems, and techniques that help distinguish between the real and the artificial.”

They also argued that, without adequate controls, there must be “institutions capable of coping with the dramatic economic and political disruption (especially for democracy) that AI will cause.”

Among the signatories of the petition were Elon Musk, the magnate behind SpaceX, Tesla, and Twitter; Steve Wozniak, co-founder of the tech giant Apple; and the Israeli writer and historian Yuval Noah Harari.

Expert alert

Elon Musk and hundreds of world experts signed a call on Wednesday for a six-month pause on research into artificial intelligence systems more powerful than GPT-4, the OpenAI model released this month, warning of “great risks to humanity.”

In the petition posted on the futureoflife.org site, they call for a moratorium until security systems are established, including new regulatory authorities, surveillance of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of coping with the “dramatic economic and political disruption (especially for democracy) that AI will cause.”


It is signed by personalities who have expressed fears about an uncontrollable AI surpassing humans, including Musk, the owner of Twitter and founder of SpaceX and Tesla, and historian Yuval Noah Harari.

Sam Altman, the director of OpenAI, which designed ChatGPT, has acknowledged being “a little afraid” that his creation could be used for “large-scale disinformation or cyber attacks.”

“The company needs time to adjust,” he recently told ABC News.

“In recent months we have seen AI labs locked in a headlong race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” they say.

“Should we allow machines to flood our information channels with propaganda and lies? Should we automate all jobs, including fulfilling ones? (…) Should we risk losing control of our civilization? These decisions should not be delegated to unelected tech leaders,” they concluded.

Signatories include Apple co-founder Steve Wozniak, members of Google’s DeepMind AI lab, Stability AI director Emad Mostaque, as well as American AI experts, academics, and engineering executives from OpenAI partner Microsoft.
