World Health Organization (WHO) Releases Guidelines for Regulating AI in Healthcare
The WHO has issued guidelines for the responsible regulation of AI in healthcare, prioritizing safety and ethical deployment.
The organization’s framework outlines six critical areas for regulating AI in healthcare, emphasizing transparency, risk management, and collaboration among stakeholders.
The World Health Organization (WHO) announced guidelines for regulating AI in the healthcare sector, highlighting the critical need to ensure the safety and effectiveness of AI systems. The publication underscores the potential of AI to revolutionize healthcare, emphasizing the importance of transparent, ethical, and secure AI deployment. It calls for open dialogue among stakeholders, including developers, regulators, manufacturers, healthcare professionals, and patients.
The growing availability of healthcare data and the rapid advancements in AI technologies offer opportunities to transform the healthcare sector. The organization acknowledges the technology’s potential to enhance health outcomes by bolstering clinical trials, improving medical diagnostics and treatment, empowering self-care, and providing person-centered healthcare.
Artificial intelligence also holds promise in regions with limited access to medical specialists, for tasks such as interpreting retinal scans and radiology images.
Nonetheless, the rapid deployment of AI technologies, including large language models, without a comprehensive understanding of their potential implications, presents challenges. AI systems, especially when utilizing healthcare data, may access sensitive personal information, necessitating robust legal and regulatory frameworks for protecting privacy, security and integrity.
“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”
WHO’s Blueprint for Responsible AI Integration in Healthcare
To meet the urgent demand for responsible oversight of the rapid expansion of AI health technologies, the WHO provides a thorough framework covering six key areas for regulating artificial intelligence in healthcare.
- Transparency and Documentation: Stressing the importance of transparency and comprehensive documentation throughout the product lifecycle and development processes.
- Risk Management: Addressing issues like ‘intended use,’ ‘continuous learning,’ human interventions, training models and cybersecurity threats.
- External Validation: Emphasizing the importance of external validation of data and clarity in the intended use of AI to ensure safety and facilitate regulation.
- Data Quality: Committing to rigorous pre-release evaluations to prevent AI systems from amplifying biases and errors.
- Regulatory Challenges: Addressing complex regulations, such as GDPR in Europe and HIPAA in the United States, focusing on jurisdiction and consent requirements to ensure privacy and data protection.
- Collaboration: Promoting collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners to maintain compliance throughout the lifecycles of AI products and services.
AI systems are complex, depending not only on their code but also on the data they are trained on, which often comes from clinical settings and user interactions. To mitigate the risk of amplifying biases, regulations can require that training data reflect diverse attributes such as gender, race, and ethnicity.
The publication aims to provide governments and regulatory authorities with key principles for developing new guidance, or adapting existing regulations, on AI at the national or regional level, ensuring a responsible and ethical integration of the technology into healthcare practices.
The health organization previously voiced concerns about the responsible and ethical use of artificial intelligence, specifically large language models (LLMs), emphasizing the importance of safeguarding human well-being, safety, and autonomy, along with preserving public health.