Healthcare AI Security Series
In this 2-part series, I sit down with Ken Ko, the Chief Technology Officer at Ferrum Health, to discuss security considerations that need to be addressed when implementing healthcare AI.
Together we explore security issues and challenges faced by health systems when implementing healthcare AI. We discuss potential solutions, looking at the use of third-party vendor clouds and private, cloud-hosted environments, focusing on what keeps patient data secure.
Part 2: Solutions for Addressing Security Concerns When Implementing Healthcare AI
In Part 1 of the series, we explored concerns and challenges faced by health systems when implementing artificial intelligence (AI). In this second part, we look at potential solutions that keep patient data secure and pave the way for the use of AI in the healthcare setting.
Kathleen: What are potential solutions for dealing with these challenges?
Ken Ko: One of the first major problems anyone on the IT team will see is that they're being asked to send duplicate copies of high-resolution imaging data to these third-party vendor clouds. As a technologist, I find that incredibly wasteful. As a patient, I find it incredibly worrisome. We simply do not know what's going on behind closed doors once a company obtains our patient information, whether it's lab results, imaging, or reports.
Recall that "the cloud" is simply someone else's data center. We need to ask ourselves what's preventing us from hosting that same application in our own data center, where health systems can confidently attest to where their patients' data resides and that it remains confidential. The logical next step, then, is to deploy these AI algorithms and applications not in a company's cloud environment, but to self-host them in a private environment managed by the health system. Bringing the hosting and running in-house keeps our patient data out of third-party hands while still making it available to the AI algorithms for analysis.
Once we've prevented patient data from leaking outside our network, we still need to address the problem of duplication. There's no reason to send the same patient data to each individual application when it can instead be intelligently routed to those applications from a single funnel.

So it doesn't stop at simply self-hosting these applications in a private environment; we need to be smart about how we route that data, as well.
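As a rough illustration of that single-funnel idea, here is a minimal sketch. All names here are hypothetical for the example, not an actual product implementation: one copy of a study enters the router, which fans it out only to the AI applications subscribed to that study's modality, instead of sending a duplicate copy to every vendor.

```python
# Hypothetical sketch of a single-funnel router: one inbound copy of a
# study is fanned out by reference to subscribed AI apps, rather than
# duplicating and re-sending the data to each application separately.

from collections import defaultdict


class StudyRouter:
    """Routes each imaging study once, to only the interested AI apps."""

    def __init__(self):
        # modality (e.g. "CT", "CR") -> list of registered app callbacks
        self._subscribers = defaultdict(list)

    def register(self, modality, app_callback):
        """Subscribe an AI application to studies of a given modality."""
        self._subscribers[modality].append(app_callback)

    def route(self, study):
        """Fan one study out to its subscribers; return their results."""
        return [app(study) for app in self._subscribers[study["modality"]]]


# Usage: two chest-CT algorithms analyze the same single copy of a study,
# while the chest-X-ray (CR) algorithm never receives it at all.
router = StudyRouter()
router.register("CT", lambda s: f"nodule-detect:{s['id']}")
router.register("CT", lambda s: f"embolism-detect:{s['id']}")
router.register("CR", lambda s: f"fracture-detect:{s['id']}")

print(router.route({"id": "study-001", "modality": "CT"}))
```

The point of the sketch is the shape of the solution: duplication is eliminated at the funnel, so each application sees only the data it is registered for, inside the health system's own network.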
Kathleen: You mentioned cloud deployment as a potential solution. Can you share more about what that is and how it’s used in AI?
Ken Ko: The major recent advancements in AI algorithms have come from a combination of collecting massive data sets and applying similarly massive amounts of GPU compute. As a result, the vast majority of AI development happens in the cloud, and the subsequent deployment, or productization, of AI tends to happen in the cloud as well, out of a need for simplicity. AI development is a wildly different skill set from managing infrastructure, so from the point of view of an AI development company, the cloud offers a "deploy once" model that minimizes the overhead of managing that infrastructure.
The resulting paradigm is one with a tacit expectation that data will be pushed to the AI models. You see this in client applications that need to phone home, or be internet-connected, so the heavy lifting and processing can be done remotely. By the same logic, it can be appealing for a health system to offload those compute requirements and management burdens away from its own data center and its own people.
Kathleen: Is cloud deployment the answer to the security issues of healthcare AI?
Ken Ko: It's good for us to step back and see the cloud for what it is: someone else's data center, whether Google's, Amazon's, or Microsoft's. They all manage the infrastructure, the physical security, and the cooling of servers. When we think about deploying AI algorithms into "the cloud," the common understanding is that we're schlepping patient data from the hospital's servers, across the internet, to another data center and to vendor applications accessed by unknown people, and we cross our fingers that nothing goes awry along the way. That isn't the picturesque panacea the industry is hoping for or needs.
If we go back and think about the cloud as just another data center, we can reframe the question and ask ourselves: can I deploy a logical extension of my environment into that physical data center? The answer is yes. Creating these private environments, or private deployments, locked down with industry-standard security practices, sets the stage for us to have our cake and eat it, too.
Related Reading: Guidance on HIPAA and Cloud Computing
Kathleen: Can you share more detail on private deployment and the benefits it provides to health systems?
Ken Ko: Over the past two decades, health systems have built the teams and the muscle to manage everything from network operations centers to physical servers to virtualized environments running mission-critical applications like EHRs. The thing is, that's a large breadth of scope for one team to cover. Given everything involved in managing that stack, extending functionality into another managed data center (the cloud, in this case) can give the IT team the breathing room it needs to keep up with the growing pace of software. Allowing a team to focus is key to growing its velocity, and the organization's. We've seen this time and time again in software development, and it's worth bringing into the health IT space.
Kathleen: In closing, what would you suggest to a health system that has security concerns but would like to use AI to improve patient care and drive health equity?
Ken Ko: The key here is to think of the cloud as someone else's data center. With that framing, "adopting the cloud" becomes a problem our industry already knows very well, and there are strict guidelines and procedures around it, from locked cabinets and encryption at rest to audit logs and network traffic monitoring. What's key is that once we have this logical extension of our network in the cloud, we can allocate hardware resources on demand, instantly and elastically, without needing to ship, wait for, unbox, rack, and connect servers.
Once we have that as our foundation, the next risk to mitigate is the notion of shipping patient data to unknown parties. And for that, the only logical answer is also the simplest: bring the AI to the data. Deploy in your private, cloud-hosted environment.
That’s a wrap on our healthcare AI security discussion. We’d love to hear your thoughts and learn from your experience; please drop us a note in the comment section below.
Interested in learning more about private deployment and how this solution can help your health system securely use AI in patient care? Contact us.
Related Reading: Cybersecurity in the World of Healthcare AI