This week’s Domain Knowledge focuses on the ethical use of artificial intelligence (AI) in healthcare. AI has the potential to augment many areas of patient care, from improving clinical diagnosis and enhancing peer learning to limiting bias in treatment planning. To be effective, these systems must put ethics and human rights at the core of their design, deployment, and use.
The World Health Organization (WHO) recently issued guidance on the use of AI in healthcare. Its report, Ethics and Governance of Artificial Intelligence for Health, addresses the challenges and risks inherent in applying AI in clinical settings and outlines six principles to ensure AI’s benefits are realized globally. The report is free to download.
To learn more about the real-world application of the six principles outlined by the WHO, look no further than the article entitled AI ethical principles already in place at Sanford Health.
The Harvard School of Public Health looks at how Algorithmic bias in health care exacerbates social inequities and ways to prevent it. The piece highlights two approaches to solving the problem: calibrating incentives and formal legislation.
In the piece Responsible AI: leveraging data and technology to counteract bias, STAT makes the case for responsible AI. The article walks the reader through how to determine whether a model is biased, why education matters, and how to benchmark and track progress.
That’s a wrap for this week’s review of news and happenings in the healthcare AI space. In closing, I’ll leave you with an invitation to learn more about how an end-to-end AI platform can manage multiple applications across your clinical needs, and to request a personal demo of Ferrum’s platform and growing AI catalog.
If you have AI tips, suggestions, or resources you’d like to share, leave us a note below. And please feel free to suggest topics you’d like to see covered in future posts.