
European Parliament gives green light to the AI Act

by admin

With 523 votes in favor, 46 against and 49 abstentions, today the European Parliament gave the green light to the Artificial Intelligence Act, the new European regulation on AI. It is a historic vote: the 27 countries of the European Union will be the first in the world to have a general law on artificial intelligence, beating the other major global powers to the punch.

Three years of work

It all started in April 2021 with the Commission's proposal, followed by the positions of the Council (December 2022) and Parliament (June 2023). Political agreement on the text was reached on December 9th, and fine-tuning of the text began immediately afterwards ahead of the final votes. Parliament's vote arrived today, although formal adoption will take place in the April plenary session. The vote was brought forward by a month under an accelerated procedure, given the imminent European elections. The objective was nonetheless achieved quickly, with a text that now runs to 113 articles and 12 annexes.


To what and to whom does the new law apply?

The AI Act regulates the development, provision and use of AI systems across Europe. The definition of “AI system” follows the one proposed by the OECD: simpler traditional software and programming approaches are therefore excluded. The Commission will also issue guidelines on this point.

The new rules affect all companies and public bodies that provide or use AI systems in Europe. They also apply to organizations not based in a European country, provided that the system's output is used in the EU. The law also imposes obligations on other actors, such as importers and distributors.


However, the regulation does not apply to AI systems used for military, defense or national security purposes, to those used for scientific research, or to those released under free and open source licenses (except where they pose a risk covered by the regulation). Also excluded are AI research, testing and development activities, as well as non-professional personal use by individuals.

The risk-based approach and prohibited systems

The law classifies AI systems based on the risk that could arise from their use, grading requirements and obligations accordingly. In other words, the greater the risk, the stronger the protective measures imposed by the AI Act. Predictive policing, social scoring, emotion recognition in schools and workplaces, and the scraping of facial images from the internet to build databases are uses that carry unacceptable risk and are therefore prohibited. The use of real-time biometric identification systems in publicly accessible spaces is also prohibited, with some exceptions in predetermined cases and subject to authorization.

High risk systems

Many mandatory rules and procedures apply to AI that can negatively affect health, safety or fundamental rights. These are the high-risk systems: for example, AI used to manage road traffic, to assess students in exams, or to analyze CVs and evaluate job candidates.

Before being placed on the market, these systems must undergo a conformity assessment to demonstrate compliance with the requirements of the law, such as risk management, data quality, technical documentation and record-keeping (logging). There are also transparency, cybersecurity and human oversight requirements. In some cases, a fundamental rights impact assessment must also be carried out. These systems must also bear the CE marking and be registered in a European database.



Transparency obligations and general purpose AI

The AI Act introduces a series of measures to promote the knowledge and transparency of algorithms. In the case of chatbots and other systems that interact with people, users must know that they are dealing with a machine. Images, text and other output from generative AI must be marked in a machine-readable format and detectable as artificial, and deepfakes must likewise be labeled as AI-generated.

There are also specific requirements for general-purpose AI models, i.e. algorithms trained on large amounts of data and capable of carrying out a wide range of tasks. These include drawing up technical documentation, implementing policies to respect copyright law, and publishing reports on the content used to train the model. Additional requirements apply to models posing systemic risk, to ensure constant oversight.

Measures for innovation, governance and sanctions

The AI Act also makes room for innovation, with a series of rules that facilitate experimentation and adaptation. These include regulatory sandboxes, real-world testing and codes of conduct, as well as a range of benefits for SMEs and startups. On the governance side, each country will have a national supervisory authority, to which citizens and businesses can turn. At EU level there will be several bodies involved, including the Commission, the European Artificial Intelligence Board and the AI Office (established at the end of January). There will also be a consultative forum and a group of independent scientific experts. Heavy fines await those who violate the regulation: up to 35 million euros or, for companies, up to 7% of total annual worldwide turnover for the previous financial year, whichever is higher.


The next steps

After formal approval by Parliament and the Council and publication in the Official Journal, the AI Act will officially enter into force (expected at the end of May), but its application will be gradual. Some rules will apply after 6 months (prohibited AI practices), others after one year (general-purpose AI) or 36 months (certain high-risk AI systems); most of the law will become applicable after 24 months.

The reactions

With clear rules on the transparency of the sources used to train algorithms and the obligation to keep records accessible to rights holders, the European AI regulation confirms itself as a model for copyright protection, while ensuring that the music industry and artists can harness this technological innovation for new creative challenges. Enzo Mazza, CEO of FIMI, present today in Strasbourg for the vote, commented: “A historic step that shows once again, after the Copyright Directive, how Europe is at the forefront in regulating innovation, avoiding a Wild West”.
