WHO Stresses the Importance of Safe and Ethical AI in Promoting Health and Well-Being
In Brief
- The World Health Organization (WHO) has released a statement urging the safe and ethical use of AI and LLMs in the healthcare sector.
- WHO said that the hasty adoption of untested systems could lead to errors by healthcare workers and harm patients.
- WHO recommends a thorough assessment of the tangible advantages of AI in healthcare before its broad implementation.
The World Health Organization (WHO) has joined the conversation surrounding artificial intelligence (AI) and large language models (LLMs), calling for their safe and ethical use to protect and promote human well-being, safety, and autonomy, and to preserve public health.
With the rapid development of generative AI platforms such as OpenAI's ChatGPT, Google's Bard, and others, artificial intelligence has the potential to transform the healthcare industry by analyzing large amounts of patient and clinical data, supporting the development of new drugs and therapies, and providing insights that help doctors create personalized treatment plans for patients.
AI can also help identify potential diseases and recommend preventative measures before symptoms appear. Recently, researchers developed an AI model that can accurately predict a person's risk of developing pancreatic cancer.
Amid the ongoing discussions surrounding the regulation of AI, the WHO has raised concerns about the technology being used for harmful purposes and is calling for the development of safeguards to mitigate risks of harm to patients and the healthcare industry.
The organization states that it is crucial that the risks be carefully examined when using LLMs to improve access to health data, as a decision support tool, or to boost diagnostic capacity in under-resourced settings. While the WHO supports the appropriate use of new technologies, it is concerned that caution is not consistently exercised with LLMs.
According to the WHO, the concerns that call for the rigorous oversight needed for AI and LLMs to be used in safe, effective, and ethical ways include:
- Biased data may be used to train AI, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user but may be completely incorrect or contain serious errors;
- LLMs may be trained on data obtained without permission, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response;
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that the public cannot discern from reliable health content;
While it supports leveraging new technologies, including AI and digital health, to improve human health, WHO encourages policy-makers to prioritize patient safety and protection while technology firms work to commercialize LLMs.
WHO proposes that these concerns be addressed and recommends a thorough assessment of the tangible advantages of AI in healthcare before its broad implementation. In 2021, the organization published its guidance on the Ethics & Governance of Artificial Intelligence for Health, which states that the development of AI technologies must put ethics and human rights at the heart of their design, deployment, and use.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Cindy is a journalist at Metaverse Post, covering topics related to web3, NFT, metaverse and AI, with a focus on interviews with Web3 industry players. She has spoken to over 30 C-level execs and counting, bringing their valuable insights to readers. Originally from Singapore, Cindy is now based in Tbilisi, Georgia. She holds a Bachelor's degree in Communications & Media Studies from the University of South Australia and has a decade of experience in journalism and writing. Get in touch with her via [email protected] with press pitches, announcements and interview opportunities.