Ensuring Validation in AI Radiology Algorithms to Meet Growing Regulations

Effective deployment of AI technologies relies not only on the algorithm itself but also on robust validation. A staggering 94%* of FDA-cleared algorithms exhibit performance variability when applied to real-world patient populations. This underscores the need for continuous validation and monitoring, both to ensure that AI initiatives meet clinicians' highest standards and to satisfy growing regulatory requirements. Ferrum Health is at the forefront of validating AI radiology algorithms across large health systems; internal studies have shown that Ferrum Health helped Sutter Health address both validation and algorithmic-discrimination concerns.

When evaluating AI radiology vendors, hospitals often struggle with data bias: algorithms trained on homogeneous datasets falter when faced with diverse patient populations. Simply reviewing the published literature is insufficient, and manual validation methods are resource-intensive. Ferrum Health differentiates itself by automating the validation process, using advanced language models to augment the radiology service. This automation reduces the manual workload on clinicians and improves the reliability of AI tools across diverse demographics, helping ensure optimal performance and validity.

By retrospectively running algorithms on the client’s own patient data, Ferrum quantifies both the clinical and business impact of each tool. This process highlights areas where an algorithm may under- or over-call findings, providing critical insights for improvement. Our monitoring capabilities ensure that as practice patterns, patient populations, and the technology stack evolve, the performance of the algorithm library continues to improve. Ongoing re-validation also allows us to track algorithm improvements over time, identify new blind spots, and address potential failure points before they can compromise patient care.
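To make the idea concrete, here is a minimal sketch (not Ferrum's actual pipeline; all names and data are hypothetical) of how under-call and over-call rates might be computed from a retrospective run, stratified by demographic group to surface potential bias:

```python
# Hypothetical sketch of a retrospective under-/over-call analysis.
# "Under-call" = the AI missed a finding the radiologist reported (false negative);
# "over-call"  = the AI flagged a finding the radiologist did not (false positive).
from collections import defaultdict

def call_rates(cases):
    """cases: list of dicts with keys 'ai_positive', 'truth_positive', 'group'.
    Returns per-group under-call and over-call rates."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for c in cases:
        g = counts[c["group"]]
        if c["truth_positive"]:
            g["pos"] += 1
            if not c["ai_positive"]:
                g["fn"] += 1  # under-call: missed true finding
        else:
            g["neg"] += 1
            if c["ai_positive"]:
                g["fp"] += 1  # over-call: spurious finding
    return {
        grp: {
            "under_call_rate": g["fn"] / g["pos"] if g["pos"] else 0.0,
            "over_call_rate": g["fp"] / g["neg"] if g["neg"] else 0.0,
        }
        for grp, g in counts.items()
    }

# Toy example: two demographic groups with different error profiles.
cases = [
    {"ai_positive": True,  "truth_positive": True,  "group": "A"},
    {"ai_positive": False, "truth_positive": True,  "group": "A"},
    {"ai_positive": False, "truth_positive": False, "group": "A"},
    {"ai_positive": True,  "truth_positive": True,  "group": "B"},
    {"ai_positive": True,  "truth_positive": False, "group": "B"},
]
rates = call_rates(cases)
# Group A under-calls half of its true findings; group B over-calls its negatives.
```

Comparing these rates across groups, and tracking them as practice patterns shift, is one simple way a monitoring program can flag drift or disparate performance before it affects care.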

On May 6, 2024, the Office for Civil Rights at the U.S. Department of Health and Human Services (HHS) published a final rule applying Section 1557 of the Affordable Care Act to healthcare algorithms. The rule mandates compliance programs to prevent algorithmic discrimination, including ongoing monitoring and risk management to ensure fairness.

An upcoming webinar will cover these regulations and their impact on healthcare AI. Join here: “New Final Regulation Prohibiting Algorithmic Discrimination by Health Care Providers and Payers.”



Brendan Ryu

Brendan is a fourth-year medical student applying to radiology residency. He aspires to accelerate MedTech innovation and to build a career integrating innovation into clinical practice.


CASE STUDY

ARA Health Specialists

Download the free case study to learn how our approach to validation has increased the number of clinically significant findings surfaced by AI software.

CASE STUDY

Sutter Health

Download the free case study to learn how our approach to validation has increased the number of clinically significant findings surfaced by AI software.