Concerns about the adoption of ChatGPT in the US health system

by admin
After attracting over 100 million users in just five months, the OpenAI chatbot will now also be integrated into US health care, thanks to an agreement between Microsoft and Epic Systems, one of America’s largest healthcare software companies. The partnership “is focused on providing a full range of integrated AI-based generative solutions to increase productivity and patient care”. This means that Epic will integrate GPT-4, the latest version of the OpenAI model, into its electronic health record software. But what tasks will actually be entrusted to the artificial intelligence?

First, GPT-4 will allow doctors and health professionals to draft responses to patient messages automatically, lightening their workload. “Integrating generative AI into some of our day-to-day workflows will increase productivity for many of our providers, allowing them to focus on the clinical tasks that truly need their attention,” said Chero Goswami, chief information officer of UW Health in Wisconsin. Then, as Epic itself reported, the integration of AI will make it “easier for healthcare organizations to identify operational improvements, such as cost reductions”, or support research activities by identifying specific trends in medical records. The goal of this integration, therefore, is nothing other than the optimization of workflows within the health sector, to the benefit of patients and doctors alike.

Clearly, the deal between Microsoft and Epic has raised quite a few concerns. During a keynote held at the HIMSS conference in Chicago, industry experts agreed that ChatGPT will need precise regulation when it is adopted in healthcare facilities. Peter Lee, Microsoft’s corporate vice president for research, urged industry leaders to familiarize themselves with OpenAI’s model in order to understand “whether this technology is appropriate for use and, if so, under what circumstances”. Reid Blackman, CEO of a company that provides consultancy services related to artificial intelligence, clearly pointed out that a huge risk is that people believe an AI model can explain the reasoning behind its statements. But that is not so: the chatbot is designed to be persuasive without having to give explanations. This is a considerable risk if the technology is applied to a sector as delicate as healthcare. And he is not the only one to say so.

Kay Firth-Butterfield, one of the experts who signed the famous letter calling for a slowdown in AI research, raised further questions: is the data on which the chatbot is trained really inclusive? Are the three billion people around the world without internet access not inevitably excluded from it? These are just some of the questions holding back the integration of AI in healthcare, in the United States and around the world. It is therefore clear that a workable solution still needs to be thought through, assuming everyone agrees to entrust the health of patients to a chatbot.
