January 19, 2024

WHO Releases Guidelines for Ethical Use of Generative AI in Healthcare

In a move toward the ethical governance of rapidly advancing generative artificial intelligence (AI) in healthcare, the World Health Organization (WHO) has issued comprehensive guidance on large multi-modal models (LMMs). These models, which can accept diverse data inputs such as text, video, and images, saw unprecedented adoption in 2023, when platforms like ChatGPT, Bard, and Bert entered the public consciousness.

The WHO’s guidance, comprising over 40 recommendations, targets governments, technology companies, and healthcare providers, aiming to ensure responsible use of LMMs for the promotion and protection of population health. Dr. Jeremy Farrar, WHO Chief Scientist, stressed the potential benefits of generative AI technologies in healthcare but underscored the need for transparent information and policies to manage the associated risks.

LMMs, known for mimicking human communication and performing tasks they were not explicitly programmed to do, have five broad applications in healthcare, as outlined by the WHO: diagnosis and clinical care; patient-guided use for investigating symptoms and treatment; clerical and administrative tasks within electronic health records; medical and nursing education through simulated patient encounters; and scientific research and drug development, including the identification of new compounds.

However, the guidance highlights documented risks associated with LMMs, including the production of false, inaccurate, or biased information, which could harm people who rely on it to make critical health decisions. Training data of poor quality or biased with respect to race, ethnicity, ancestry, sex, gender identity, or age could likewise compromise the integrity of LMM outputs.

Beyond individual risks, the WHO acknowledges broader challenges to health systems stemming from LMMs. These include concerns about the accessibility and affordability of the most advanced LMMs, potential ‘automation bias’ by healthcare professionals and patients, and cybersecurity vulnerabilities jeopardizing patient information and the trustworthiness of AI algorithms in healthcare provision.

Stakeholder Engagement Needed for LMM Deployment

To address these challenges, the WHO emphasizes the need for engagement from various stakeholders throughout the development and deployment of LMMs. Governments, technology companies, healthcare providers, patients, and civil society are called upon to participate actively in ensuring the responsible use of AI technologies.

The guidance provides specific recommendations for governments, placing the primary responsibility on them to set standards for the development, deployment, and integration of LMMs into public health and medical practices.

Governments are urged to invest in or provide not-for-profit or public infrastructure, including computing power and public data sets, accessible to developers across sectors, with access contingent on users adhering to ethical principles and values. Laws, policies, and regulations should be used to ensure that LMMs deployed in healthcare meet ethical obligations and human rights standards, safeguarding dignity, autonomy, and privacy.

The guidance also suggests the assignment of regulatory agencies, existing or new, to assess and approve LMMs and applications intended for healthcare use, within the constraints of available resources. Furthermore, mandatory post-release auditing and impact assessments by independent third parties are recommended for large-scale LMM deployments. These assessments should include considerations for data protection and human rights, with outcomes and impacts disaggregated by user characteristics, such as age, race, or disability.

Developers of LMMs are also entrusted with key responsibilities. They are urged to ensure that potential users and all direct and indirect stakeholders, including medical providers, scientific researchers, healthcare professionals, and patients, are engaged from the early stages of AI development. Transparent, inclusive, and structured design processes should allow stakeholders to raise ethical issues, voice concerns, and provide input.

Additionally, LMMs should be designed to perform well-defined tasks with the accuracy and reliability needed to improve health systems and advance patient interests. Developers must also be able to predict and understand the potential secondary outcomes of their AI applications.

About The Author

Kumar Gandharv is an experienced Tech Journalist specializing in the dynamic intersections of AI/ML, marketing technology, and emerging fields such as crypto, blockchain, and NFTs. With over three years in the industry, he has a proven track record of crafting compelling narratives, conducting insightful interviews, and delivering comprehensive insights. His expertise lies in producing high-impact content, including articles, reports, and research publications for prominent industry platforms, and his blend of technical knowledge and storytelling allows him to communicate complex technological concepts to diverse audiences clearly and engagingly.

