Keys to Building Trust in Healthcare AI
AI will play a growing role in advancing the healthcare industry, with a potential impact extending far beyond what we have seen so far. Many people still have reservations about adopting AI in healthcare because the technology is complex and how it works can be opaque. Acceptance and trust will come as people gain an understanding of how the technology is developed and what successful integration looks like. This week’s Domain Knowledge highlights uses of AI that create value and build trust.
The American Medical Association outlines the indicators of trustworthy algorithms for healthcare deployment. AI models should be considered trustworthy if they are validated on populations reflective of the medical practice where they will be used, continuously monitored so that changes in performance are identified and communicated, and able to integrate smoothly into existing workflows. Understanding these criteria will help the industry deploy AI with confidence.
Confidence that AI will produce consistent, quality results drives trust in the technology. Thoroughly testing AI algorithms is the best way to identify and overcome biases that could skew results. The NHS AI Lab is creating a blueprint for such testing by assessing COVID-19 chest imaging algorithms for potential bias, and James Zou of Stanford University is also working on a method for testing AI based on real-world data.
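To make the idea of bias testing concrete, here is a minimal sketch of a subgroup audit: compare a model's accuracy across patient subgroups and flag it when the gap is too large. The records, group names, and 10-point threshold are illustrative assumptions, not details from the NHS AI Lab or Stanford work.

```python
# Minimal subgroup bias audit: measure per-group accuracy and flag
# the model if accuracy differs across groups by more than a set gap.
# Data, group labels, and threshold below are illustrative only.

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (prediction, label, group) records."""
    totals, correct = {}, {}
    for pred, label, group in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def flag_bias(accuracies, max_gap=0.10):
    """Return True when the best- and worst-served groups differ by more than max_gap."""
    gap = max(accuracies.values()) - min(accuracies.values())
    return gap > max_gap

# Toy evaluation set: (model prediction, ground truth, patient subgroup)
records = [
    (1, 1, "site_A"), (0, 0, "site_A"), (1, 1, "site_A"), (1, 0, "site_A"),
    (1, 1, "site_B"), (0, 1, "site_B"), (0, 1, "site_B"), (0, 0, "site_B"),
]

acc = subgroup_accuracy(records)
print(acc)             # {'site_A': 0.75, 'site_B': 0.5}
print(flag_bias(acc))  # True: a 25-point gap exceeds the 10-point threshold
```

Running the same audit on populations reflective of the deploying practice, not just the training data, is what turns a one-off validation into an ongoing trust signal.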
Approaching AI as a ‘teammate’, rather than a replacement, will help healthcare workers accept it. A new teaching technique for radiologists captures this idea: both the radiologist and the AI model start by answering a question, and the radiologist is then asked to evaluate the AI’s response against the true answer. This helps the person in training understand how the model behaves and how to interact with it in real-life practice.
Clinician interaction is an important factor in building trust in an AI algorithm. Algorithms that allow interaction beyond simply accepting or rejecting results are far more helpful to the clinicians who use them. The ability to review the underlying data and edit results improves the quality of the model and keeps control of important healthcare decisions in the hands of healthcare professionals.
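The review-and-edit loop described above can be sketched as a small workflow: the clinician's wording becomes the final record, and each correction is logged as a pair the model team can learn from. This is a hypothetical illustration, with made-up class and field names, not any vendor's actual API.

```python
# A hypothetical clinician review workflow: accept an AI result as-is,
# or edit it so the clinician's version becomes the record while the
# (ai_result, corrected) pair is logged for model improvement.

from dataclasses import dataclass, field

@dataclass
class Review:
    ai_result: str
    action: str = "pending"            # "accept" or "edit"
    final_result: str = ""
    feedback_log: list = field(default_factory=list)

    def accept(self):
        """Take the AI result as the final record unchanged."""
        self.action, self.final_result = "accept", self.ai_result

    def edit(self, corrected):
        """Clinician stays in control: edited text wins, correction is logged."""
        self.action, self.final_result = "edit", corrected
        self.feedback_log.append((self.ai_result, corrected))

review = Review(ai_result="No acute findings.")
review.edit("Small left pleural effusion.")
print(review.final_result)   # the clinician's wording is the record
print(review.feedback_log)   # correction pairs available for retraining review
```

The design point is that the feedback log is a byproduct of the clinician's normal work, so improving the model never requires taking the decision out of their hands.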
That’s a wrap for this week’s review of news and happenings in the healthcare AI space. In closing, I’ll leave you with an invitation to learn more about the benefits of using an AI Hub to manage multiple applications across your clinical needs and offer you a personal demo of Ferrum’s platform and growing AI catalog.
If you have AI tips, suggestions, or resources you’d like to share, leave us a note below, and please feel free to suggest topics you would like to see covered in future posts.