
AI in medicine collides with race bias

by admin

The topic of racial disparities in artificial intelligence is not new. AI is used in many healthcare settings, from analyzing medical images to assisting in surgical procedures. While AI can sometimes surpass skilled doctors, its capabilities are not always put in the service of parity. Last year, for example, a study published in The Lancet Digital Health reported that AI models could accurately predict an individual's race from several types of X-ray images, a task human experts cannot perform. It is a warning to those who, perhaps not entirely wrongly, imagine a future in which software extracts information about individuals beyond what it "sees", classifying them and building categorical profiles without their consent: a scenario, the one the Lancet authors fear, that could exacerbate racial disparities in medicine.

Examples of bias in machine learning are endless. MIT scientists, using both public and private data sets, confirmed the above. Working with imaging data from chest X-rays, extremity X-rays, chest CT scans and mammograms, the team trained a deep learning model to identify patients' race as White, Black or Asian, even though the images themselves contained no explicit racial information. As a recent Science article notes, AI can likewise predict a person's age and sex simply from the ultrasound signal recorded during a heart exam: cardiac patterns vary with age and sex, so building a fairly precise profile of this kind is almost trivial for software that processes millions of data points per minute (and that does not grasp the ethical implications of racial disparities at all). The question is not so much technical as ethical: will we one day consult a doctor made of bits rather than flesh and blood who tells us how to manage our health? Probably yes, but the critical issues, if we want to call them that, go even further.
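The mechanism behind findings like these can be illustrated with a toy sketch. The hidden attribute is never an input to the model; all it sees is one derived feature that is statistically correlated with it (the correlation here is synthetic, invented purely for illustration, and makes no claim about what real X-ray features encode). Even a trivial threshold classifier then recovers the attribute well above chance:

```python
import random
import statistics

random.seed(0)

def make_dataset(n):
    """Synthetic stand-in for an imaging dataset: each sample has a hidden
    binary attribute and one derived feature (think: a texture statistic)
    that is only *statistically* correlated with it."""
    data = []
    for _ in range(n):
        attr = random.randint(0, 1)
        feature = random.gauss(0.0 if attr == 0 else 1.0, 0.8)
        data.append((feature, attr))
    return data

train, test = make_dataset(2000), make_dataset(500)

# "Training": learn a decision threshold from the per-class feature means.
mean0 = statistics.mean(f for f, a in train if a == 0)
mean1 = statistics.mean(f for f, a in train if a == 1)
threshold = (mean0 + mean1) / 2

# Evaluation: the attribute was never an input, yet it is recoverable.
correct = sum((f > threshold) == (a == 1) for f, a in test)
accuracy = correct / len(test)
print(f"accuracy on hidden attribute: {accuracy:.2f}")  # well above the 0.50 chance level
```

A deep network does the same thing at scale: it does not need a race label in the image, only features that correlate with one.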


(Im)partial clusters accentuate racial disparities

Once an artificial intelligence platform has clustered patients by race, age group, gender and social background, what will prevent insurance companies from creating ad hoc offers, more or less favorable depending on the case? In short: how can we avoid racial and social disparities in access to healthcare, medical care and social security?

“The ability of AI to predict health variables for a race from medical images,” explain James Zou, Judy Wawira Gichoya, Daniel E. Ho and Ziad Obermeyer in Science, “can be used to create disparities in the health system.”

Biases, then, are just around the corner. Without looking too far back, consider what happened with Covid. Scans of patients hospitalized for bilateral pneumonia showed the clear signs of the disease. If one day all these findings were fed to an AI, it could create categories of subjects more exposed than others to the risk of Covid pneumonia. But taking which other variables into account? If seemingly secondary factors such as smoking, prior conditions and family history were left out, there would be no "intelligence" at all, only a neatly cataloged pile of information. Predicting something on the basis of race is not using data holistically, with one eye on the detail and one on the context; it is a mere game of systemic aggregation, with little value. In the United States, where the application of technology in medicine moves much faster than in Italy, something is shifting, albeit slowly, toward better regulation of AI and the containment of racial disparities and other prejudices and inequities.
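The point about omitted variables can be made concrete with a classic confounding example (all counts below are invented for illustration): one patient group can look far riskier overall even though, within each smoking stratum, its pneumonia rate is identical to the other group's. A model that never sees the smoking variable would attribute the difference to group membership.

```python
# Invented counts (illustrative only): (pneumonia cases, patients),
# split by a confounder (smoking) for two patient groups.
data = {
    "group_A": {"smoker": (30, 100), "non_smoker": (1, 10)},
    "group_B": {"smoker": (3, 10),   "non_smoker": (10, 100)},
}

def rate(cases, total):
    return cases / total

for group, strata in data.items():
    cases = sum(c for c, _ in strata.values())
    total = sum(t for _, t in strata.values())
    print(group, "overall:", round(rate(cases, total), 3),
          "| per-stratum:", {s: rate(c, t) for s, (c, t) in strata.items()})

# Overall rates differ sharply (~0.28 vs ~0.12) even though the
# per-stratum rates match exactly: the "group effect" is pure confounding.
```

This is why a model built without the decisive covariates is, in the article's words, just information "cataloged in a good way" rather than intelligence.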

The US is on the move

Civil rights groups convinced the White House to update the race reporting standard, which dated back to 1997, so that data can be disaggregated by subgroup (e.g., Vietnamese Americans rather than simply Asian Americans). It could take years for that change to reach health data, and in the meantime AI-based imputations risk widening racial disparities among these more granular subgroups.


It must be said that, to date, racial variables are not a decisive element in medicine. But they could become one, especially as tools like generative AI reach the general public. Understanding which features AI mechanically uses to predict race will therefore be important for debiasing the data and the algorithms. Human commitment will also be needed: practitioners will have to reduce the prejudices in how they "use" the data in front of them in relation to the race of the patient they are treating, a task potentially more challenging than reducing bias in the algorithms themselves.

