Everybody Gets Sued: Stakeholders and Liability Mitigation Strategies in Healthcare AI
The integration of artificial intelligence (AI) into medical practices, particularly in fields like radiology, holds immense promise for enhancing diagnostic accuracy and patient outcomes. However, as with any technological advancement, concerns regarding liability arise when AI systems make errors. Within the realm of medical AI, liability is a complex issue involving multiple stakeholders, including physicians, AI vendors, and healthcare enterprises. This article delves into the intricate landscape of medical AI liability and proposes strategies to foster responsible AI adoption.
When errors with AI tools inevitably occur, determining where responsibility lies becomes pivotal. Radiologists are the sixth most likely to be sued among all medical specialties. The legal landscape, which currently lacks comprehensive laws specifically addressing medical AI liability, treats AI as a device. Consequently, primary responsibility often rests with the human user, such as the radiologist. The legal approach draws on tort law principles: when an error involves an AI tool, physician liability is determined on a case-by-case basis and is often influenced by the unpredictability of jury decisions.
Physician Liability
Physicians play a central role in the responsible integration of AI into medical practices. To effectively manage liability while promoting the adoption of AI, physicians can consider implementing the following strategies:
- Active Patient Consent: Physicians should engage patients in the decision-making process by transparently discussing when AI assistance is employed in their diagnoses or treatment plans. In radiology, this could mean a precise consent form prior to imaging that describes the use cases of AI tools at that practice.
- Always follow the standard of care: When utilizing AI clinical decision support, maintain alignment with established standards of care; this practice alone can substantially mitigate liability in cases of misdiagnosis or error. As AI tools become enmeshed in the healthcare system, the meaning of “standard of care” may be redefined and may come to include the use of AI tools as standard practice.
- Self-driven education:
  - First, physicians must understand the appropriate use of AI tools, interpret their recommendations accurately, and consistently recognize the outcomes of their use, whether positive (correct diagnoses) or negative (errors). One way to achieve this is certification in imaging informatics (CIIP); imaging informatics has also become part of fellowships offered by several institutions.
  - Trust between the physician and the AI tool is essential to the successful uptake of AI in medicine. Physicians should develop proficiency in assessing the confidence scores of AI recommendations and making contextual decisions based on that assessment (a minimal sketch of one such routing rule follows this list). There is also a need to acknowledge the biases introduced by the use of AI, ranging from implicit bias to confirmation bias.
- Use-Case Consensus: There needs to be consensus on the specific use-case scenarios in which AI is implemented. Rigorously assess the appropriateness of AI deployment in different clinical scenarios, ensuring that tools are used only in alignment with genuine clinical needs.
- Malpractice Insurance Coverage: Physicians should confirm that their malpractice insurance covers potential scenarios involving the use of AI and understand how the policy delineates their liability.
- Collaboration: Radiologists who utilize these AI tools can collaborate with professional organizations and regulatory bodies to provide feedback and help develop safeguards that enhance the responsible use of AI in medical settings.
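To make the confidence-score point concrete, here is a minimal sketch, in Python, of how a practice might codify routing decisions based on a vendor-reported confidence score. The thresholds, field names, and routing labels are illustrative assumptions, not clinical guidance or any vendor's actual interface.

```python
# Hypothetical sketch: routing an AI finding by its vendor-reported
# confidence score. Thresholds and labels are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AIFinding:
    study_id: str
    label: str          # e.g., "intracranial hemorrhage"
    confidence: float   # vendor-reported score in [0.0, 1.0]


def route_finding(finding: AIFinding,
                  triage_threshold: float = 0.90,
                  review_threshold: float = 0.60) -> str:
    """Decide how much weight and urgency an AI finding receives."""
    if finding.confidence >= triage_threshold:
        # High confidence: move the study up the worklist for a prompt read.
        return "prioritize: flag study for expedited radiologist read"
    if finding.confidence >= review_threshold:
        # Mid confidence: present the finding during routine review.
        return "routine: show AI finding alongside the standard read"
    # Low confidence: surface with an explicit caution so the radiologist
    # weighs it against the full clinical context.
    return "caution: low-confidence output; clinical judgment governs"


print(route_finding(AIFinding("CT-1042", "intracranial hemorrhage", 0.93)))
```

However a practice tunes such thresholds, the point stands: the radiologist, not the score, remains the final arbiter.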
Vendor Liability
- Autonomous models: Vendors offering autonomous AI models should assume full liability for the accuracy and reliability of their systems. This approach incentivizes vendors to prioritize the quality and clinical applicability of their autonomous AI solutions prior to deployment.
- Best practices: Vendors should regularly maintain and enhance AI systems through patches and updates that improve performance and reduce the occurrence of errors.
- Tie-breaker: Vendors can develop adjudicator algorithms that act as tie-breakers when AI recommendations conflict with physician recommendations (a rough sketch follows this list). These algorithms can be developed in collaboration with physicians to ensure that the standard of care is upheld.
- Data privacy: With patients brought into the decision-making process, AI developers should strengthen their system security measures to safeguard against privacy and data breaches that could compromise public confidence in AI-generated insights.
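As a rough illustration of the tie-breaker idea above, the sketch below assumes a binary finding and a vendor-reported confidence score; the escalation threshold and the conservative defaults are assumptions, chosen so that disagreement never silently overrides the physician.

```python
# Hypothetical adjudication sketch: resolve AI-vs-physician disagreement
# conservatively. The threshold and return strings are assumptions.

def adjudicate(ai_positive: bool,
               physician_positive: bool,
               ai_confidence: float,
               escalation_threshold: float = 0.85) -> str:
    if ai_positive == physician_positive:
        # Concordant reads need no tie-breaking.
        return "concordant: finalize report"
    if ai_confidence >= escalation_threshold:
        # Discordant and the AI is highly confident: bring in a second
        # radiologist rather than letting either party win by default.
        return "discordant: request second radiologist read (tie-breaker)"
    # Otherwise the physician's read stands, consistent with the standard
    # of care resting on human judgment; log the case for vendor feedback.
    return "discordant: physician read stands; log case for vendor review"
```

Keeping the physician's read authoritative by default, and escalating rather than auto-overriding, mirrors the shared-accountability theme of this article.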
Undoubtedly, the party that stands to gain the most financially from the success of an AI tool must also bear a commensurate share of the liability.
Enterprise, Platform, Regulatory Body Responsibility
Healthcare enterprises play a crucial role in managing medical liability.
- Enterprise administrators can collaborate closely with AI vendors to establish clear and comprehensive contractual terms, including agreements on the sharing or assignment of liability, with the expectation that these tools be thoroughly evaluated for clinical suitability and accuracy in that particular practice.
- Imaging practices must establish two-way communication channels with their radiologists to keep them informed of the AI tools that are to be deployed as well as provide context for their use.
Enterprise responsibility is closely tied with vendor and regulatory body responsibilities.
- Ongoing Improvement: Vendors should be motivated, and able, to provide updates that improve their algorithms. The FDA currently approves adaptive AI tools only as “locked” versions; active clinician (end-user) feedback can be incorporated into improvements if regulatory bodies incentivize vendors to do so.
- Performance Data: Physicians will feel confident adopting AI tools only if they do not end up shouldering a disproportionate share of the risk. As more physicians adopt AI tools in clinical scenarios, more data on the tools’ real-world performance can be generated. These data are relevant to all involved parties and can be used to improve the tools’ performance (a simple monitoring sketch follows this list).
- Off-label use vs. commercialization of algorithms: Regulatory bodies like the FDA can be responsible for ensuring that these tools are safe to use and for providing guidelines on the specific tasks the tools can be expected to perform confidently, e.g., triage or clinical decision support. End users (radiologists) should be free to implement them for “off-label” use by deciding where the tools are most efficacious and reliable. In this way, AI tools can deliver the greatest benefit to the community.
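As a sketch of the performance-data point above: assuming a practice logs each AI call next to the radiologist-confirmed result (the field names here are hypothetical), even a simple running summary gives every stakeholder evidence of real-world performance.

```python
# Hypothetical post-deployment monitoring sketch. Each logged case pairs
# the AI's call ("ai_positive") with the confirmed outcome
# ("truth_positive"); both field names are assumptions.

def summarize_performance(cases: list[dict]) -> dict:
    tp = sum(c["ai_positive"] and c["truth_positive"] for c in cases)
    fn = sum(not c["ai_positive"] and c["truth_positive"] for c in cases)
    tn = sum(not c["ai_positive"] and not c["truth_positive"] for c in cases)
    fp = sum(c["ai_positive"] and not c["truth_positive"] for c in cases)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "n_cases": len(cases),
    }


log = [
    {"ai_positive": True, "truth_positive": True},
    {"ai_positive": True, "truth_positive": False},
    {"ai_positive": False, "truth_positive": False},
]
print(summarize_performance(log))  # sensitivity 1.0, specificity 0.5
```

Shared with the vendor and with regulators, even metrics this simple give all parties a common basis for improving the tools.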
The physician may be the first point of accountability, but the buck does not stop with them. A shared accountability model benefits all stakeholders, with the end goal of improving access to healthcare and patient outcomes.
As AI becomes increasingly prevalent in medical practices, addressing liability concerns becomes imperative. Responsible AI adoption necessitates collaboration. We can achieve this by embracing strategies that put education, transparency, accountability, and ongoing improvement at the forefront. By doing so, we empower the medical community to unlock the full potential of AI while ensuring the safety and well-being of our patients, all the while maintaining the trust that people have in our healthcare services.

Siddhi Hegde
Research Fellow and aspiring radiologist exploring new technologies in patient care.