WHO Issues Guidance on Healthcare AI Ethics

Recent News
- WHO released guidance on large multi-modal models (LMMs) on 18 January 2024
- Aim: to manage risks associated with AI in healthcare
- Over 40 recommendations for governments, technology companies, and healthcare providers
Applications of LMMs
- Diagnosis and clinical care
- Patient-guided use
- Clerical and administrative tasks
- Medical and nursing education
- Scientific research and drug development
Risks of LMMs
- False, inaccurate, biased, or incomplete statements
- Poor-quality or biased data
- Accessibility and affordability concerns
- Automation bias
- Cybersecurity risks
Scope
- Global application in the healthcare sector
Stakeholders
- World Health Organization (WHO)
- Dr. Jeremy Farrar, WHO Chief Scientist
- Technology companies
- Healthcare providers
- Civil society
Key Recommendations
- Involvement of stakeholders at all stages of model development
- Governments to invest in non-profit structures and in upholding ethical obligations and human rights standards
- Regulatory bodies to establish and introduce mandatory post-publication review and impact assessment
Benefits
- Improving healthcare
- Enhancing clinical trials
- Improving diagnosis, treatment, and self-care
- Supplementing professional knowledge
- Overcoming health inequities

Potential for Harm
- Inaccurate information
- Equity and accessibility issues
- Over-reliance on automation
Way Forward
- Ethical use of AI by adhering to WHO guidelines
- Continuous monitoring and improvement

The World Health Organization (WHO) released guidance on the ethics and governance of large multi-modal models (LMMs), a rapidly growing type of generative AI with applications in healthcare. Issued on 18 January 2024, the guidance aims to manage the risks associated with AI in healthcare and includes over 40 recommendations for governments, technology companies, and healthcare providers. It acknowledges AI's potential to improve healthcare while also highlighting risks such as false or biased outputs, cybersecurity threats, and accessibility issues. WHO emphasizes the need to involve all stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, at every stage of AI model development and implementation. The organization underscores the role of these guidelines in improving healthcare and overcoming health inequities, while acknowledging the potential for harm from misuse or misinterpretation of AI in healthcare. The way forward lies in adhering to these guidelines and continuously monitoring and improving the ethical use of AI in healthcare.
