
Artificial intelligence needs to be monitored in medical devices

by admin

A group of Stanford researchers has analyzed the evaluation procedures that artificial intelligence algorithms undergo in the medical field: in many cases they are not adequately tested against the dangers they could pose


After years of incubation during which it was hard to predict what role artificial intelligence might play in operating rooms and hospital wards, machine learning and neural networks have now found a multitude of outlets in practical applications and medical devices. With the help of AI, X-ray machines, ultrasound scanners, CT scanners, ECGs and several other instruments are gaining an edge in identifying clinically relevant conditions in patients that might otherwise escape the naked eye.

However, if on the one hand the trust placed in these technologies can only benefit the progress of medicine, on the other hand they also need to be regulated more stringently than they are now. The warning comes from a group of Stanford researchers who analyzed the evaluation procedures that artificial intelligence algorithms used in the medical field undergo, concluding that in many cases these systems are not adequately tested against the dangers their extensive and widespread use could entail.

The researchers' alarm

Before being used for diagnosis and treatment, devices that rely on artificial intelligence algorithms undergo months of checks aimed at verifying their effectiveness and, above all, the absence of potential adverse effects. According to the study published by the Stanford researchers, however, the procedures implemented by the US FDA in this regard have some flaws that should not be overlooked.


Many devices, for example, were tested using only historical patient data, while only a small fraction were evaluated alongside real doctors, supporting them in their decisions. In other cases, the evaluations took place at just one or two sites, imposing enormous limits on the sample of patients the devices had to face to prove their worth.

Partial evaluations

AI algorithms never work independently, but always under the supervision of a specialist; evaluating them on historical data, however, compares them against already definitive diagnoses, and prevents us from understanding how they would actually function within a doctor's workflow. Understanding whether a predictive algorithm can negatively influence a specialist's judgment, or whether it risks being used incorrectly on a regular basis, is a fundamental aspect of its safety — one that, according to the Stanford researchers, is not adequately accounted for in the evaluation studies.

The risk of bias

The other problem identified by the researchers stems from a risk common to all AI algorithms: that of silently perpetuating the biases already present in the data on which they were trained. Several face and voice recognition systems have proved less accurate for Black people and other underprivileged communities simply because the data used to shape their behavior represented these individuals disproportionately to the total; in the medical field the risk is the same, and the only way to prevent it is to stay alert to the issue. Yet when evaluations of these systems are conducted at just one or two hospitals, it becomes difficult to be certain they work adequately everywhere in the world.


Tighter controls now, before the problems come to a head

The full study was published in Nature and includes other observations that point to a single conclusion: AI holds enormous potential, but we must learn how to harness it. In other words, the criteria used to evaluate the products of this technological breakthrough need to be revised; otherwise, the risk is that the problems surface too late, discrediting an entire sector that has what it takes to change the history of medicine.

Above all, this needs to happen now, while the phenomenon is gaining speed: of the more than 130 AI-equipped devices approved by the FDA for medical use, about half received the green light in the last year alone. In short, the enthusiasm surrounding the entire sector is palpable, but it must be ridden without being overwhelmed by it.
