
ChatGPT “surpasses” doctors in providing empathetic (and high-quality) advice to patient questions

By Ruggiero Corcella

This is suggested by a new study published in JAMA Internal Medicine. Further research is needed to understand whether and how to integrate these tools into healthcare

ChatGPT is truly the star of the moment. Between sensationalism and strong fears, the best known of the chatbots – relational artificial intelligence software capable of simulating and processing human conversations – is trying to gain a prominent place in the medical field as well. In the United States, Microsoft (which has invested in OpenAI, the developer of ChatGPT) has just signed an agreement with Epic Systems, one of the largest healthcare software companies in America, to test the chatbot in several hospitals (UC San Diego Health, UW Health in Madison, Wisconsin, and Stanford Health Care). And now a new study published in JAMA Internal Medicine has found that ChatGPT even outperforms physicians in providing high-quality, empathetic advice in response to patient questions. The authors themselves, however, acknowledge a series of limitations in the research and call for the topic to be explored further through randomized clinical trials.

The comparison: doctors' answers versus ChatGPT's

The study, conducted by John W. Ayers of the Qualcomm Institute at the University of California San Diego, gives us a glimpse into the role artificial intelligence assistants could play in medicine. The research compared written responses from doctors and from ChatGPT to real-world health questions. A panel of healthcare professionals preferred ChatGPT's responses 79% of the time and rated them as higher in quality and more empathetic. “The opportunities for improving health care with AI are enormous,” said Ayers, who is also deputy chief of innovation in the Division of Infectious Diseases and Global Public Health at the UC San Diego School of Medicine. “Health care augmented by artificial intelligence is the future of medicine.”

Is ChatGPT Ready for Healthcare?

In the new study, the research team set out to answer the question: can ChatGPT accurately answer the questions patients send their doctors? If so, AI models could be integrated into healthcare systems to improve physician responses to patient-submitted questions and ease the ever-increasing burden on physicians. “ChatGPT may be able to pass a medical licensing exam,” says Davey Smith, study co-author, co-director of the UC San Diego Altman Clinical and Translational Research Institute and professor at the UC San Diego School of Medicine, “but answering patients' questions directly in an accurate and empathetic way is another thing altogether.”

Research design

To obtain a large and diverse sample of health questions and medical answers that did not contain personally identifiable information, the team turned to social media, where millions of patients publicly post medical questions that doctors answer: Reddit's AskDocs. r/AskDocs is a subreddit with approximately 452,000 members in which users post medical questions that are answered by qualified healthcare professionals. Although anyone can answer a question, moderators verify the credentials of the healthcare professionals, and the answers display the respondent's level of credentials.

The role of social media

The team randomly sampled 195 exchanges from AskDocs in which a qualified physician had answered a public question. The team then provided each original question to ChatGPT and asked it to generate an answer. A panel of three licensed healthcare professionals evaluated each question and the corresponding answers without knowing whether an answer came from a doctor or from ChatGPT.
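
As a rough sketch of that generation step (not the study's actual code: the sample questions, model name, and client setup below are illustrative assumptions), feeding the sampled questions to the model might look like this in Python:

```python
import random

from openai import OpenAI  # assumes the official openai Python package (v1+)

# Illustrative stand-ins for real r/AskDocs exchanges; the study sampled
# 195 real question-answer threads, which are not reproduced here.
exchanges = [
    {"question": "I twisted my ankle yesterday and it is still swollen. Should I see a doctor?"},
    {"question": "Is it safe to take ibuprofen and paracetamol together?"},
]

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
sample = random.sample(exchanges, k=min(195, len(exchanges)))

for ex in sample:
    # Pass the original patient question to the chatbot unmodified,
    # mirroring the study's setup of asking ChatGPT to draft an answer.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption; the study used ChatGPT
        messages=[{"role": "user", "content": ex["question"]}],
    )
    ex["chatgpt_answer"] = reply.choices[0].message.content
```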

The evaluators compared the responses on information quality and empathy, noting which they preferred. The panel of healthcare professionals preferred ChatGPT's responses to the physicians' responses 79% of the time. Furthermore, ChatGPT's responses were rated significantly higher in quality: the share of responses rated good or very good was 3.6 times higher for ChatGPT than for doctors (22.1% for doctors vs. 78.5% for ChatGPT). ChatGPT's answers were also more empathetic: the share rated empathetic or very empathetic was 9.8 times higher for ChatGPT than for doctors (4.6% for doctors vs. 45.1% for ChatGPT).
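
The headline multipliers follow directly from the percentages quoted above; here is a minimal check (the figures come from the study, the script is only illustrative):

```python
# Share of responses rated in the top two categories, as quoted above (in %).
quality = {"doctors": 22.1, "chatgpt": 78.5}  # "good" or "very good" quality
empathy = {"doctors": 4.6, "chatgpt": 45.1}   # "empathetic" or "very empathetic"

print(round(quality["chatgpt"] / quality["doctors"], 1))  # 3.6x higher for ChatGPT
print(round(empathy["chatgpt"] / empathy["doctors"], 1))  # 9.8x higher for ChatGPT
```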

The limitations of the study

“The first limitation is precisely this,” comments Professor Sergio Pillon, Vice-President of AiSDeT, the Italian Association of Digital Health and Telemedicine: “a chat tool driven by artificial intelligence was compared with chat messages written by doctors. It is as if a patient were talking to their doctor via WhatsApp: it is obvious that the doctor tends to give short answers. In chat, above all, the doctor tends to solve the problem and to be efficient, not to give the best possible answer.” The authors themselves emphasize that further research is needed before drawing definitive conclusions about the potential effect in the clinical setting.


Nor should we forget the ethical concerns that need to be addressed before implementing these technologies, including the need for human review of AI-generated content, both for accuracy and for potentially false or fabricated information. According to Professor Pillon, the optimal use of a chatbot like ChatGPT could be within a search engine.

Future scenarios

Despite the limitations of this research and the frequent sensationalism around new technologies, according to Elena Giovanna Bignami, full professor of Anesthesia and Intensive Care at the University of Parma and Artificial Intelligence expert of the Italian Society of Anesthesia, Analgesia, Resuscitation and Intensive Care (SIAARTI), the study is an interesting trailblazer, because it helps identify the variables to focus on in the future.

A fundamental point, if we really want to extend the doctor-patient relationship to “third parties” such as virtual assistants, will be to identify the right patients, i.e. those able to interact with these new technologies. In fact, patients and assisted persons will need to be trained to interact with the machine, a bit like when you go online and find everything and its opposite. Before interfacing with the platform, the patient must be instructed in its use so that misunderstandings do not arise, a point that also holds for flesh-and-blood assistants. It is often easy for a doctor or nurse to tell the patient what to do at home, what time to take a tablet or what position to sleep in, but it is not always so immediate that the patient (tired and emotionally involved) fully understands what to do and how.

Furthermore, to get a good answer you need to have asked a correct, simple question, conveyed in an understandable, correct and simple way, much like when you create a special (and protected) configuration of a mobile phone that will be used by children. This is obviously to avoid variability in interpretation between the digital assistant and the patient, which would create an enormous bias for the doctor, she adds.


ChatGPT could be used by the patient as a guide and support for understanding the diagnostic process and the therapeutic decisions made by the doctor, and thus improve adherence, for example, to the prescribed therapy. The clinician always has the last word on the decisions to be taken, such as the choice of therapy and how to administer it, as well as how to approach the diagnostic process. The doctor, therefore, should remain the final decision-maker, making use of an assistant, even a virtual one, that acts only as an assistant: very good, but nothing more. No employee would seriously try to take over from their boss, knowing they would run the great risk of being caught and fired. Virtual assistants (and others) have a fundamental task, even in medicine: to help, simplify and speed things up, not to try to become clinicians overnight.

Finally, the researchers raise another doubt: what “quality of response” means for patients. Its interpretation should not be left entirely to technological tools, because this exposes the data to readings that are not always uniform or close to reality once you go beyond numbers or dichotomous answers (yes/no). And ChatGPT is not always able to grasp nuances, or to ask follow-up questions to better understand the patient and place them in one category or another. That is the task of the human component, that is, of health workers. The merit of these clinicians is to have brought out a tool that can, paradoxically, strengthen the relationship between healthcare professionals and their patients, because it will allow the doctor to have more information, perhaps in real time, and more time to devote to the relationship with the person who needs it, something ChatGPT was not created for, she concludes.

April 28, 2023 (last updated April 28, 2023, 15:02)
