
Cures and artificial intelligence: privacy and the risk of the algorithm that discriminates


Artificial intelligence is an instrument increasingly used in healthcare: from health technologies (think of CT scans and MRIs) to predictive medicine and the health policy choices that hospitals and governments make on the basis of data and algorithms. This is a road from which there is no turning back, which is why the Privacy Guarantor has just issued a set of rules for the creation of health services at the national level through artificial intelligence systems. Among the Guarantor's requests are transparency, human supervision of decisions and the need to avoid discrimination in treatment caused by algorithms.

Maximum transparency: the citizen has the right to know

As mentioned, the Privacy Guarantor has drawn up a decalogue that ranges from the legal bases of processing to the roles involved, from the data protection impact assessment to the accuracy, integrity and confidentiality of the data themselves. But there are essentially three key principles identified by the Authority on the basis of the privacy regulation and in light of the case law of the Council of State: transparency of decision-making processes, automated decisions supervised by humans, and algorithmic non-discrimination. According to the Guarantor's indications, the patient must have the right to know, including through communication campaigns, whether decision-making processes exist (for example, in the clinical or health policy field) that are based on automated processing carried out through AI tools, and to receive clear information on the logic used to arrive at those decisions. The decision-making process must include human supervision that allows healthcare personnel to check, validate or overturn the processing carried out by the artificial intelligence tools, as in the sketch below.
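To make the "human in the loop" requirement concrete, here is a minimal sketch in Python (with hypothetical names and fields, not an API prescribed by the Guarantor): an AI-generated recommendation remains a pending proposal until a clinician explicitly validates or overturns it, and the outcome is logged so the decision can later be explained to the patient.

```python
# Minimal sketch (hypothetical names): an automated recommendation is only a
# proposal until a clinician reviews it, and every review is logged for
# transparency toward the patient.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRecommendation:
    patient_id: str
    proposal: str            # e.g. a triage level suggested by the model
    model_logic: str         # plain-language summary of the logic used
    status: str = "pending"  # pending -> validated | overturned
    audit_log: list = field(default_factory=list)

    def review(self, clinician: str, approve: bool, reason: str) -> None:
        """A clinician must confirm or reject every automated decision."""
        self.status = "validated" if approve else "overturned"
        self.audit_log.append({
            "clinician": clinician,
            "decision": self.status,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

rec = AIRecommendation(
    patient_id="P-001",
    proposal="high-priority follow-up",
    model_logic="score driven by recent lab values and admission history",
)
rec.review(clinician="dr.rossi", approve=True, reason="consistent with chart")
print(rec.status, rec.audit_log[-1]["decision"])
```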


The risk of potential discriminatory effects of algorithms

It is appropriate, the Guarantor warns, that the data controller use reliable AI systems that reduce errors due to technological or human causes, and that it periodically verify their effectiveness, implementing adequate technical and organizational measures. These precautions are also necessary to mitigate the potential discriminatory effects that the processing of inaccurate or incomplete data could have on a person's health. An example is the American case, cited by the Guarantor in the decalogue, of an artificial intelligence system used to estimate the health risk of over 200 million Americans. The algorithms tended to assign a lower level of risk to African American patients with the same health conditions, because the metric used was based on average individual healthcare expenditure, which was lower for the African American population; the consequence was to deny the latter access to adequate care. Not only that: outdated or inaccurate data, the Authority underlines, could also compromise the effectiveness and correctness of the services that the AI systems are intended to provide.
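The mechanism behind that case can be illustrated with a minimal Python sketch (hypothetical data and function names, not the actual system cited by the Guarantor): when the risk score is driven by a spending proxy rather than by health need, a group whose average expenditure is lower is systematically under-ranked even when its actual health need is identical.

```python
# Toy illustration of proxy-metric bias: ranking patients by predicted
# *spending* penalizes a group whose average expenditure is lower, even
# when its measured health need is the same.

from dataclasses import dataclass

@dataclass
class Patient:
    group: str               # demographic group (illustrative only)
    chronic_conditions: int  # a direct measure of health need
    annual_spend: float      # historical healthcare expenditure

def risk_by_spend(p: Patient) -> float:
    # Biased proxy: score driven by past expenditure, not health need.
    return p.annual_spend / 10_000

def risk_by_need(p: Patient) -> float:
    # Fairer target: score driven by documented conditions.
    return p.chronic_conditions / 10

# Two patients with identical health need but different average spending,
# mirroring the pattern reported in the US case cited by the Guarantor.
a = Patient(group="A", chronic_conditions=4, annual_spend=8_000)
b = Patient(group="B", chronic_conditions=4, annual_spend=5_000)

for p in (a, b):
    print(p.group, round(risk_by_spend(p), 2), round(risk_by_need(p), 2))
# Group B gets a lower spend-based score (0.5 vs 0.8) despite equal need
# (0.4 for both), so any cutoff applied to the spend-based score would
# exclude group B from care programs despite identical need.
```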

The legal bases for using AI

The Guarantor paid particular attention to the suitability of the legal basis for the use of artificial intelligence. The processing of health data through AI techniques, carried out for reasons of public interest in the healthcare sector, must be provided for by a specific regulatory framework that identifies adequate measures to protect the rights, freedoms and legitimate interests of the data subjects. In compliance with the sector's regulatory framework, the Guarantor has also underlined the need for an impact assessment to be carried out before health data are processed with national AI systems, in order to identify suitable measures to protect patients' rights and freedoms and to guarantee compliance with the principles of the EU privacy Regulation. A centralized national system that uses AI in fact entails large-scale systematic processing of health data, which qualifies as "high-risk" processing: the impact assessment is therefore mandatory and must be carried out at the central level to allow an overall examination of the adequacy and homogeneity of the measures adopted.

