The World Health Organization (WHO) has released a statement urging the safe and ethical use of AI and LLMs in the healthcare sector.
WHO said that the hasty adoption of untested systems could lead to errors by healthcare workers and harm patients.
WHO recommends a thorough assessment of the tangible advantages of AI in the healthcare industry before its broad implementation.
The World Health Organization (WHO) has joined the conversation surrounding artificial intelligence (AI) and large language models (LLMs), calling for their safe and ethical use to protect and promote human well-being, safety, and autonomy, and to preserve public health.
With the rapid development of generative AI platforms such as OpenAI’s ChatGPT, Google’s Bard, and others, artificial intelligence has the potential to transform the healthcare industry. By analyzing large amounts of patient and clinical data, AI can help develop potential new drugs and therapies and provide insights that help doctors create personalized treatment plans for patients.
AI can also help identify potential diseases and recommend preventative measures before symptoms appear. Recently, researchers have discovered a breakthrough AI model that can accurately predict people’s risk of developing pancreatic cancer.
Amid the ongoing discussions surrounding the regulation of AI, the WHO raised concerns about the technology being used for harmful purposes and is calling for the development of safeguards to mitigate risks that can cause harm to patients and the healthcare industry.
The organization states that it is crucial that the risks be carefully examined when using LLMs to improve access to health data, as a decision support tool, or to boost diagnostic capacity in under-resourced settings. While the WHO supports the appropriate use of new technologies, it is concerned that caution is not consistently exercised with LLMs.
According to the WHO, the concerns that call for rigorous oversight, so that AI and LLMs can be used in safe, effective, and ethical ways, include:
- Biased data may be used to train AI, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user but may be completely incorrect or contain serious errors;
- LLMs may be trained on data obtained without permission, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response;
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that the public cannot discern from reliable health content.

While it supports leveraging new technologies, including AI and digital health, to improve human health, WHO encourages policy-makers to prioritize patient safety and protection while technology firms work to commercialize LLMs.
WHO proposes that these concerns be addressed and recommends a thorough assessment of the tangible advantages of AI in the healthcare industry before its broad implementation. In 2021, the organization published guidance on the Ethics & Governance of Artificial Intelligence for Health. The report states that the development of AI technologies must put ethics and human rights at the heart of its design, deployment, and use.