
The WHO dictates the “rules” for ChatGPT, Bard, Bert (and the others)


by Ruggiero Corcella

World Health Organization publishes new Guidelines on the Ethics and Governance of AI-Based Large Multimodal Models (LMMs)

Not only the World Economic Forum: artificial intelligence and its applications in healthcare also set the agenda of the World Health Organization (WHO), which returns to the subject of large language models such as ChatGPT, Bard and Bert. It does so by publishing new guidelines on the ethics and governance of large multimodal models (LMMs), a rapidly growing type of generative artificial intelligence (AI) technology with applications across healthcare.

The guidance, which builds on the Guidelines on Artificial Intelligence in Healthcare developed in 2021, contains more than 40 recommendations for consideration by governments, technology companies and healthcare providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

In May 2023, the WHO had already urged caution in the use of AI-generated large language model tools, in order to protect and promote human well-being, safety and autonomy, and to preserve public health.

Transparent information and policies needed

LMMs can accept one or more types of data input, such as text, video and images, and generate outputs that are not limited to the type of data fed in. LMMs are unique in their imitation of human communication and their ability to perform tasks for which they were not explicitly programmed. They have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard and Bert – entering the public consciousness in 2023. «Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use these technologies identify and fully account for the associated risks», says Jeremy Farrar, WHO’s chief scientist. «We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and overcome persistent health inequities».

The 5 fields of application in healthcare


The new WHO guidelines outline five broad areas of application of LMMs for health:
1. Diagnosis and clinical care, such as responding to patients’ written questions;
2. Patient-guided use, for example to investigate symptoms and treatments;
3. Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
4. Medical and nursing education, including providing trainees with simulated patient encounters;
5. Scientific research and drug development, including the identification of new compounds.

Potential risks and benefits of LMMs

While LMMs are beginning to be used for specific health-related purposes, there are documented risks of their producing false, inaccurate, biased or incomplete statements, which could harm people who rely on such information in making health decisions. LMMs may also be trained on low-quality data, or on data biased by race, ethnicity, ancestry, sex, gender identity or age. The guidance also describes broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs.

LMMs can also encourage automation bias among healthcare providers and patients, whereby errors that would otherwise have been caught are overlooked, or difficult choices are inappropriately delegated to an LMM. Like other forms of AI, LMMs are also vulnerable to cybersecurity risks that could endanger patient information, the reliability of the algorithms themselves and, more broadly, the delivery of healthcare.

Safety requires the involvement of all stakeholders

To create safe and effective LMMs, the WHO highlights the need to involve a range of stakeholders – governments, technology companies, healthcare workers, patients and civil society – in all phases of the development and deployment of such technologies, including their oversight and regulation. «Governments of all countries must cooperatively lead efforts to effectively regulate the development and use of artificial intelligence technologies such as LMMs», adds Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.


The key recommendations

The new WHO guidelines include recommendations for governments, which have primary responsibility for setting standards for the development and deployment of LMMs and for their integration and use in medicine and public health.

For example, governments should:
• Invest in or provide public or non-profit infrastructure, including computing power and public datasets, accessible to developers in the public, private and non-profit sectors, requiring users to adhere to ethical principles and values in return for access.
• Use laws, policies and regulations to ensure that LMMs and applications used in healthcare and medicine, whatever the risk or benefit associated with the AI technology, meet ethical obligations and human-rights standards affecting, for example, a person’s dignity, autonomy or privacy.
• Assign an existing or new regulatory agency to evaluate and approve LMMs and applications intended for use in healthcare or medicine, as resources permit.
• Introduce mandatory post-release audits and impact assessments, including for data protection and human rights, by independent third parties when an LMM is deployed at scale. The audits and impact assessments should be published, with findings and impacts disaggregated by user type, for example by age, race or disability.

The guidelines also include key recommendations for LMM developers, who should ensure that:
• LMMs are designed not only by scientists and engineers: potential users and all direct and indirect stakeholders, including medical service providers, scientific researchers, healthcare professionals and patients, should be involved from the early stages of AI development in a structured, inclusive and transparent design process, and should have the opportunity to raise ethical questions, voice concerns and provide input on the AI application under consideration.
• LMMs are designed to perform well-defined tasks with the accuracy and reliability needed to improve the capacity of health systems and advance patients’ interests. Developers should also be able to predict and understand potential secondary outcomes.


Corriere della Sera is also on WhatsApp.

January 18, 2024 (modified January 18, 2024 | 3:54 pm)
