From manipulation to control: artificial intelligence in search of rules

by admin

The monitoring of emotions by AI systems, however, can enable the behavioral manipulation of children, the elderly, and other consumers, exploiting their cognitive vulnerabilities and steering them toward unwanted commercial choices. At the same time, these automated systems are often opaque and therefore difficult to contest.

It is clear that this is a global challenge, and it would make little sense to confront it within narrow national borders. This is why both the European Union and the United States are moving, in parallel but with very different approaches, toward regulating “high-risk” AI applications.

Last April the European Commission published a new proposal for a regulation on artificial intelligence, which will now have to be discussed and approved by the Council and the European Parliament. The proposal introduces rules for AI-based products and services that are proportional to the level of risk. While it does not create any new individual rights for consumers and citizens, the regulation sets ambitious objectives of fairness, safety, and transparency for AI applications.

The framework proposed by the Commission is based on different levels of risk. Some AI systems are considered to pose an unacceptable risk and are therefore prohibited: these include cognitive manipulation practices that cause physical or psychological harm or that exploit vulnerabilities due to age or disability. Also prohibited are social scoring systems for citizens that can produce disproportionate or out-of-context harmful effects, as well as facial identification systems used indiscriminately by law enforcement in publicly accessible spaces.

Other AI systems are considered high-risk, including facial recognition and AI used in critical infrastructure, education, worker assessment, emergency services, social care, and credit assessment, as well as AI used by law enforcement, border police, and the courts. The providers and users of these AI systems will have to carry out a risk assessment and ensure human oversight during the development of the algorithm, so that it remains modifiable and comprehensible. They will also have to prepare an adequate data management plan, filling any gaps, preventing any bias intrinsic to the algorithm, and situating the AI system in the specific socio-geographical context in which it is intended to be used.
