
Big Data, medicine and privacy in the new context of technological development 4.0

by admin

Big data has become the watchword of medical innovation. The rapid development of machine learning and artificial intelligence techniques, in particular, promises to revolutionize medical practice, from resource allocation to the diagnosis of complex diseases. But big data brings great risks and challenges, first among them the legal and ethical issues it raises for patient privacy.

(In the photo: Amedeo Leone, Federprivacy Delegate in the province of Biella)

Big data has arrived in medicine, and its supporters promise greater accountability, quality, efficiency and innovation. More recently, the rapid development of machine learning and artificial intelligence techniques has promised to derive even more useful applications from big data, from resource allocation to the diagnosis of complex diseases.

But big data comes with great risks and challenges, including important patient privacy issues. However, attempts to reduce privacy risks also come with costs that need to be considered, both for current patients and for the system as a whole.

Let’s focus on one important concern (though not the only one) that the use of big data in medicine brings with it: the violation of privacy. We present a basic theory of healthcare privacy and examine how privacy issues arise in two phases of the big data life cycle in healthcare: the collection of data and its subsequent use. While too little privacy protection raises real concerns, excessive privacy protection in this area can create problems of its own.

The concept of privacy is notoriously difficult to define. A currently prevailing view ties privacy to its context of use: there are contextual rules on the flow of information that depend on the actors involved, the process by which information is accessed, the frequency of access and the purpose of that access. When these contextual rules are broken, we say there has been an invasion of privacy. Such breaches can occur because the wrong actor gains access to the information, because the proper process for accessing it is bypassed, because the purpose of access is inappropriate, and so on. When we ask why such violations are normatively problematic, we can divide the reasons (with some simplification) into two categories: consequentialist concerns and deontological concerns. Two caveats are in order. First, some privacy violations raise issues in both categories. Second, some of the concerns we discuss also arise with the collection of “small data”; big data, however, tends to increase the number of people affected, the severity of the effects, and the difficulty for injured individuals to take preventive or self-help measures.
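To make the idea of contextual rules concrete, here is a minimal, purely illustrative sketch in Python. The data structure, the `violates` check and the example norm are hypothetical and do not come from the article; they simply encode the dimensions listed above (actor, purpose, frequency of access).

```python
from dataclasses import dataclass

# Illustrative only: a toy encoding of "contextual rules" on information flow,
# not a real policy engine. Field names and the example norm are hypothetical.

@dataclass
class ContextualNorm:
    allowed_actors: set[str]     # who may receive the information
    allowed_purposes: set[str]   # why it may flow
    max_accesses: int            # how often it may be accessed

@dataclass
class AccessEvent:
    actor: str
    purpose: str
    prior_accesses: int

def violates(norm: ContextualNorm, event: AccessEvent) -> bool:
    """An access that breaks any contextual rule counts as an invasion of privacy."""
    return (
        event.actor not in norm.allowed_actors
        or event.purpose not in norm.allowed_purposes
        or event.prior_accesses >= norm.max_accesses
    )

norm = ContextualNorm({"treating physician"}, {"treatment"}, max_accesses=10)
# Wrong actor and wrong purpose, so this access breaks the contextual rules.
print(violates(norm, AccessEvent("insurer", "premium pricing", 0)))  # True
```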


Consequentialist concerns – Consequentialist concerns arise from the negative consequences suffered by the person whose privacy has been violated. These consequences can be tangible: for example, your long-term care insurance premium rises because of the additional information now available as a result of the breach, you suffer discrimination at work, your HIV status becomes known to people in your social circle, and so on.

Deontological concerns – Deontological (ethical) concerns do not depend on suffering negative consequences. In this category, an invasion of privacy is worrying even if no one uses a person’s information against them and even if the person never learns of the breach: you can be wronged by an invasion of privacy without being harmed by it. For example, suppose an organization unscrupulously or inadvertently accesses data stored on your smartphone as part of a larger data-collection effort. After reviewing the data, including photos of an embarrassing personal condition, the organization concludes that it is worthless and destroys it. You will never find out what happened; whoever examined your data lives abroad and will never meet you or anyone who knows you. It is hard to say you have been harmed in a consequentialist sense, yet many think the loss of control over your data, the invasion itself, is inherently ethically problematic even without harm. This is a deontological concern.

One reaction to the health-privacy violations just described, whether deontological or consequentialist, is to drastically limit access to patient data. Especially where these concerns are difficult to address ex post, reducing access to data ex ante seems an attractive solution. On this approach, perhaps data sharing should be limited to the minimum needed in each context, data should be kept only for a limited period, or data should be deliberately obfuscated where the resulting harms are otherwise hard to contain. We argue, however, that limits on data access carry harms of their own.


The legal and ethical challenges that big data poses to patient privacy are great

Privacy protections limit both the aggregation of data – the creation of longitudinal records, or the combination of data from different sources at the same time – and the innovative use of data. To give an immediate example, de-identifying data is a common safeguard, but de-identified data is much more difficult to link together when a patient visits different providers, obtains insurance through different payers over time, or moves from one state to another. The resulting fragmentation of health data makes data-driven innovation difficult, imposing both technological and economic barriers.
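A minimal sketch of the linkage problem, assuming two hypothetical hospital datasets (the record layout and values are invented for illustration): once direct identifiers are stripped, no shared key remains on which the two fragments of a patient’s history can be joined.

```python
# Illustrative only: why de-identified records are hard to link across providers.
hospital_a = [
    {"patient_id": "P-1001", "dob": "1957-03-02", "diagnosis": "type 2 diabetes"},
]
hospital_b = [
    {"patient_id": "H-88321", "dob": "1957-03-02", "lab": "HbA1c 8.1%"},
]

def de_identify(record):
    """Drop direct identifiers, as a simple de-identification step might."""
    return {k: v for k, v in record.items() if k not in {"patient_id", "dob"}}

a_deid = [de_identify(r) for r in hospital_a]
b_deid = [de_identify(r) for r in hospital_b]

# With the identifiers removed there is no common field left to join on,
# so this patient's longitudinal record cannot be reassembled.
print(a_deid)  # [{'diagnosis': 'type 2 diabetes'}]
print(b_deid)  # [{'lab': 'HbA1c 8.1%'}]
```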

Some approaches can protect privacy while minimizing the cost to innovation, and they should be pursued. In some contexts, researchers can use techniques such as data pseudonymisation or differential privacy rather than working with fully identified data. Privacy controls can ensure proper use, and security standards should protect against unauthorized access. Data holders should act as data stewards, not as privacy-agnostic intermediaries. But in many contexts there will still be a trade-off between privacy and innovation.
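As a rough illustration of the two techniques named above, here is a minimal sketch, not drawn from the article: the secret key, record identifiers and parameter values are hypothetical. Pseudonymisation replaces the identifier with a keyed hash so records can still be linked without exposing who the patient is; differential privacy releases an aggregate with calibrated noise so no single patient’s presence is revealed.

```python
import hashlib
import hmac
import random

# Hypothetical key, held only by the data steward.
SECRET_KEY = b"held-by-the-data-steward-only"

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (pseudonym)."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise (the classic differential-privacy mechanism)."""
    sensitivity = 1.0                # one patient changes a count by at most 1
    scale = sensitivity / epsilon
    # Laplace(0, scale) as the difference of two exponentials with rate 1/scale
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

print(pseudonymise("P-1001"))         # same pseudonym every time, linkable across datasets
print(noisy_count(128, epsilon=0.5))  # roughly 128, perturbed by calibrated noise
```

A smaller epsilon in this sketch means more noise and stronger privacy, which is exactly the privacy-versus-utility trade-off the paragraph describes.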

Privacy also interacts problematically with secrecy. As described above, many potential innovations can arise from the data, and some can be very profitable, such as an algorithm that accurately selects cancer drugs. Innovators therefore have an incentive to keep data secret, to maintain a competitive advantage in developing and disseminating such valuable innovations. As a society, however, we may prefer that the data underlying these innovations remain accessible: others could use it to build better predictors, aggregate it to find more subtle patterns, or validate and verify the accuracy of the original innovator’s results.


Secrecy justified in the name of privacy can erode trust in the already opaque innovations of big data. When big data produces striking insights into how to deliver care, providers and patients need to trust the results in order to act on them. This is already a challenge when insights come from explicit big data analytics; when machine learning and opaque algorithms are involved, trust can be even harder to establish. To the extent that data and algorithms are kept secret under a potentially deceptive veil of privacy protection, providers and patients will have even less reason to trust the results. To be sure, many medical processes have inner workings shrouded in trade secrecy and quite opaque to patients, but media attention and the novelty of big data and artificial intelligence can make patients particularly nervous about their integration into care.

On the other hand, if privacy-conscious patients refuse to participate in a data-driven system, these algorithms may never be developed at all. Finding the right balance – protecting privacy so that patients are comfortable providing their data, but not allowing privacy to shade into a secrecy that undermines validation and trust in the potential benefits of that data – will be a tough challenge for proponents of big data, machine learning and learning health systems. Nor will the answer be uniform: the future of big data privacy will be sensitive to the source of the data, its custodian and its type, as well as to the importance of triangulating data from multiple sources. But it is important not to assume that privacy maximalism is the way forward. Both underprotection and overprotection of privacy create recognizable harms for the patients of today and tomorrow.
