WHO Releases Guidelines for Ethical Use of Generative AI in Healthcare
In a move toward ethical governance of rapidly advancing generative artificial intelligence (AI) in healthcare, the World Health Organization (WHO) has issued comprehensive guidance on large multi-modal models (LMMs). These models, capable of accepting diverse data inputs such as text, video, and images, have seen unprecedented adoption, with platforms like ChatGPT, Bard, and Bert entering the public consciousness in 2023.
The WHO’s guidance, comprising over 40 recommendations, targets governments, technology companies, and healthcare providers, aiming to ensure responsible use of LMMs for the promotion and protection of population health. Dr. Jeremy Farrar, WHO Chief Scientist, stressed the potential benefits of generative AI technologies in healthcare but underscored the need for transparent information and policies to manage the associated risks.
LMMs, known for their mimicry of human communication and ability to perform tasks not explicitly programmed, exhibit five broad applications in healthcare, as outlined by the WHO. These include diagnosis and clinical care, patient-guided use for investigating symptoms and treatment, clerical and administrative tasks within electronic health records, medical and nursing education through simulated patient encounters, and scientific research and drug development to identify new compounds.
However, the guidance highlights documented risks associated with LMMs, including the production of false, inaccurate, or biased information, which could harm people who rely on it to make critical health decisions. Training data that is of poor quality or biased with respect to race, ethnicity, ancestry, sex, gender identity, or age could further compromise the integrity of LMM outputs.
Beyond individual risks, the WHO acknowledges broader challenges to health systems stemming from LMMs. These include concerns about the accessibility and affordability of the most advanced LMMs, potential ‘automation bias’ by healthcare professionals and patients, and cybersecurity vulnerabilities jeopardizing patient information and the trustworthiness of AI algorithms in healthcare provision.
Stakeholder Engagement Needed for LMM Deployment
To address these challenges, the WHO emphasizes the need for engagement from various stakeholders throughout the development and deployment of LMMs. Governments, technology companies, healthcare providers, patients, and civil society are called upon to participate actively in ensuring the responsible use of AI technologies.
The guidance provides specific recommendations for governments, placing the primary responsibility on them to set standards for the development, deployment, and integration of LMMs into public health and medical practices.
Governments are urged to invest in or provide not-for-profit or public infrastructure, including computing power and public data sets, accessible to developers in various sectors. These resources would be contingent on users adhering to ethical principles and values. Laws, policies, and regulations are to be employed to ensure that LMMs in healthcare meet ethical obligations and human rights standards, safeguarding aspects like dignity, autonomy, and privacy.
The guidance also suggests the assignment of regulatory agencies, existing or new, to assess and approve LMMs and applications intended for healthcare use, within the constraints of available resources. Furthermore, mandatory post-release auditing and impact assessments by independent third parties are recommended for large-scale LMM deployments. These assessments should include considerations for data protection and human rights, with outcomes and impacts disaggregated by user characteristics, such as age, race, or disability.
Developers of LMMs are also entrusted with key responsibilities. They are urged to ensure that potential users and all direct and indirect stakeholders, including medical providers, scientific researchers, healthcare professionals, and patients, are engaged from the early stages of AI development. Transparent, inclusive, and structured design processes should allow stakeholders to raise ethical issues, voice concerns, and provide input.
Additionally, LMMs should be designed to perform well-defined tasks with necessary accuracy and reliability to enhance health systems and advance patient interests. Developers must also possess the capability to predict and understand potential secondary outcomes of their AI applications.