Healthcare AI Security Series
In this 2-part series, I sit down with Ken Ko, the Chief Technology Officer at Ferrum Health, to discuss security considerations that must be addressed when implementing healthcare AI.
Together we explore the security issues and challenges health systems face when adopting AI, discuss potential solutions, and compare third-party vendor clouds with private, cloud-hosted environments, focusing on what keeps patient data secure.
Part 1: Security Considerations When Implementing Healthcare AI
Kathleen: Ken, please explain why security is an important consideration that needs to be addressed when implementing and using artificial intelligence (AI) in the healthcare space.
Ken Ko: When we use AI and ML algorithms, whether in a health system or on your phone, there is an underlying requirement for data and for operating on that data. In our day-to-day encounters with AI, we make an implicit tradeoff: we give away our behavior, in the form of websites viewed and buttons clicked, in exchange for better-targeted ads or improved spell check and word prediction in our email compose windows.
Healthcare and protected health information (PHI) is where people start to revisit that calculus. When PHI is more valuable on the black market than PII and credit cards, we have to ask ourselves why. Unlike a credit card or SSN, PHI isn’t something you can replace by simply requesting a new number, any more than you can request a new biometric. Your medical record is just as big a part of your identity as your fingerprint is. Protecting PHI and our medical records is one of the key pillars for any health system, right behind providing good patient care.
Related Reading: What is protected health information (PHI)?

[Image from Compliancy Group]
Look at the landscape of AI in just about any industry and you’ll see algorithms tasked with hoovering up your data in order to improve themselves and ultimately increase their own commercial value. What’s missing in this equation is any consideration for the user: whether the user wants to opt in to that data collection, and whether the user has the choice of a software solution that intentionally avoids that parasitic relationship with its users. And so far, we’ve been assuming only good actors. Given the importance and black-market value of this data, we also have to consider the people we need to prevent from reading or obtaining someone’s PHI.
What this calls for is an environment that secures the data from third-party malicious actors as well as from the algorithms themselves, an environment where a person can own their data without volunteering it (by default, I might add) to third-party entities looking to improve their commercial algorithms.
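The interview doesn’t prescribe an implementation, but one way to picture “securing the data from the algorithms themselves” is to run each vendor’s algorithm in an isolated sandbox with no outbound network, so it can compute on PHI without being able to send it anywhere. Below is a minimal, hypothetical sketch using Docker’s Python SDK; the image name and host paths are illustrative placeholders, not real Ferrum Health components.

```python
# Hypothetical sketch: run a third-party AI algorithm with no network access,
# so it can read PHI and write results but never transmit data externally.
# The image name and host paths are illustrative placeholders.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="vendor/chest-ct-model:1.0",  # placeholder third-party algorithm image
    network_mode="none",                # no network: the algorithm can't phone home
    volumes={
        "/data/studies/incoming": {"bind": "/input", "mode": "ro"},  # PHI, read-only
        "/data/results": {"bind": "/output", "mode": "rw"},          # findings come back out
    },
    remove=True,                        # discard the container afterwards
)
print(logs.decode())
```

The design choice the sketch illustrates is simple: the data never leaves the environment the health system controls, and the algorithm never gains a channel to take it anywhere else.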
A healthcare system is complex, and we don’t want to be bleeding patient data.
Ken Ko, CTO, Ferrum Health
Kathleen: Please explain the security challenges health systems face when adopting and using AI.
Ken Ko: Health systems have been hyper-focused on risk mitigation, with real-world practice modeled after the mantra “if it ain’t broke, don’t fix it” when it comes to their IT practices. And it’s no surprise why: there is a deluge of inconsistent systems, manual change controls scheduled throughout the year, and a multitude of manual processes built around the ceremonies of compliance and risk management. Without proper investment from the health system, the IT organization is forced to play a constant game of catch-up and firefighting.
Now throw in the fact that these AI algorithms need specialized compute in the form of GPUs, that they’re data hungry, and that their compute resources can be tied up for minutes at a time on a single job, and you have a recipe for trouble. Consider that you would need to send data to a unique AI algorithm for each diagnosis, and it’s obvious this approach just doesn’t scale. Back when we were designing the infrastructure inside hospital data centers, the notion of sending tens upon tens of duplicate copies of patient data wasn’t even on our radar, let alone preparing to send terabytes of patient data externally to each vendor’s cloud for processing when you don’t have spare GPU and CPU capacity on-prem. And with every copy of data you send out of your data center, you effectively lose control of how that data will be used now and in the future, whether it will be leaked, and whether it will eventually be used for commercial gain.
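To make that scaling argument concrete, here is a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption (study volume, study size, vendor count), not a figure from the interview or from Ferrum Health; it simply shows how per-vendor duplication multiplies the PHI a hospital has to move and then no longer controls.

```python
# Back-of-envelope estimate of how per-vendor data duplication scales.
# Every number below is an illustrative assumption, not a real figure.

studies_per_day = 1_000      # assumed imaging studies produced daily
avg_study_size_gb = 0.25     # assumed average study size (e.g., a CT or MR exam)
num_ai_vendors = 20          # assumed number of single-purpose AI algorithms

# If each diagnostic algorithm lives in its own vendor cloud, every relevant
# study must be copied out of the hospital data center once per vendor.
daily_egress_gb = studies_per_day * avg_study_size_gb * num_ai_vendors
yearly_egress_tb = daily_egress_gb * 365 / 1_000

print(f"Daily egress:  {daily_egress_gb:,.0f} GB")   # -> 5,000 GB under these assumptions
print(f"Yearly egress: {yearly_egress_tb:,.0f} TB")  # -> 1,825 TB under these assumptions
# Each of those copies is PHI the hospital no longer directly controls.
```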
What are your thoughts on healthcare AI and its security considerations? Is your facility planning to implement healthcare AI? Have you already deployed it? Share your ideas and questions in the comments below.
Join us next week for Part 2 of the security series as we explore solutions to security issues in the healthcare AI space.

Kathleen Poulos
Kathleen is a registered nurse with a digital marketing background, a love for using technology to solve healthcare challenges, and a passion for improving patient care.