WHO Issues Guidance on Healthcare AI Ethics

Mind map: WHO Issues Guidance on Healthcare AI Ethics

Recent News
- WHO releases guidance for large multi-modal models (LMMs)
- Date: 18 January 2024

When
- Released on 18 January 2024

Why
- To manage risks associated with AI in healthcare

What
- Over 40 recommendations for governments, technology companies, and healthcare providers
- Applications of LMMs
  - Diagnosis and clinical care
  - Patient-guided use
  - Clerical and administrative tasks
  - Medical and nursing education
  - Scientific research and drug development
- Risks
  - False, inaccurate, biased, or incomplete statements
  - Poor quality or biased data
  - Accessibility and affordability
  - Automation bias
  - Cybersecurity risks

Where
- Global application in the healthcare sector

Who
- World Health Organization (WHO)
- Dr. Jeremy Farrar, WHO Chief Scientist
- Stakeholders: governments, technology companies, healthcare providers, patients, and civil society

How
- Involvement of stakeholders at all stages of model development and implementation
- Governments to invest in non-profit structures and in upholding ethical obligations and human rights standards
- Regulatory bodies to establish and introduce mandatory post-publication review and impact assessment

Significance
- Improving healthcare
- Enhancing clinical trials
- Improving diagnosis, treatment, and self-care
- Supplementing professional knowledge
- Overcoming health inequities

Challenges
- Potential for harm due to inaccurate information
- Equity and accessibility issues
- Over-reliance on automation

Way Forward
- Ethical use of AI, adhering to WHO guidelines
- Continuous monitoring and improvement

On 18 January 2024, the World Health Organization (WHO) released guidance on the ethics and governance of large multi-modal models (LMMs), a rapidly growing class of generative AI with applications in healthcare. The guidance aims to manage the risks of AI in healthcare and includes over 40 recommendations for governments, technology companies, and healthcare providers. It recognizes the potential of LMMs to improve diagnosis, clinical care, education, and research, but also highlights risks such as false, inaccurate, or biased outputs, cybersecurity threats, and problems of accessibility and affordability.

WHO calls for involving stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, at all stages of model development and implementation. It underscores the role of the guidance in improving healthcare and overcoming health inequities, while acknowledging the potential for harm from inaccurate information, equity and access gaps, and over-reliance on automation. The way forward lies in adhering to these guidelines and continuously monitoring and improving the ethical use of AI in healthcare.
