EU squeeze on artificial intelligence, towards a ban on facial recognition

by admin

The Commission, the executive arm of the EU, unveiled on April 21 a proposal for a regulation to renew and harmonize European rules on artificial intelligence. The objective, a priority on the agenda of the von der Leyen Commission, is to combat uses of the technology that may harm the “fundamental rights and security” of EU citizens. Among the applications that would be banned are those capable of “manipulating people through subliminal techniques beyond their awareness” or that exploit the vulnerabilities of particularly fragile groups, such as children or people with disabilities.

But the squeeze will also cover social scoring systems, the “scores” assigned by governments such as China’s to assess the reliability of citizens, as well as facial recognition: biometric recognition technologies would be prohibited, with the sole exception of emergencies such as the search for kidnapping victims, the fight against terrorist activity, or the investigation of criminals. Offenders, according to the Commission’s text, could incur administrative fines of up to 30 million euros or, in the case of companies, up to 6% of their total turnover.

The levels of risk envisioned by the Commission

In its proposal, the Commission identifies different risk levels for artificial intelligence technologies. The “unacceptable” level covers those that represent a clear threat to people’s safety or rights, as in the cases already mentioned of manipulation tools (for example, toys with voice assistants that incite, or could incite, dangerous behavior in minors) or citizen “scoring” (AI-based systems that allow governments to identify and classify citizens according to certain characteristics). The “high-risk” technologies range from software for recruiting workers to credit evaluation systems, and more generally include systems that can harm an individual’s rights. Their use is possible, not prohibited outright as at the “unacceptable” level, provided they undergo a very rigorous assessment and guarantee a series of protections for citizens. This is the case for facial recognition systems, which are prohibited “in principle” but can be activated in a narrow set of emergencies and only with the green light of a judicial body. Among the technologies considered low-risk, common chatbots stand out, along with the voice assistants used above all in customer care services, while the minimum-risk category includes anti-spam filters and video games developed with AI systems.
