Cybersecurity in the world of healthcare AI

July 2021 Investor Newsletter

Digital Health Friends –

An epidemic of cyberattacks emerged this past year as hospitals were grappling with the COVID-19 pandemic. 2020 broke records for both the number of cyberattacks that occurred and the volume of data lost due to security breaches.

Here’s one metric that floored me:

It took health systems an average of 280 days to identify and contain a data breach. That’s just 85 days shy of a year.

In 2020, ransomware alone resulted in the exposure and theft of protected health information (PHI) for at least 18,069,012 patients. That’s more people than the combined populations of Alaska, DC, Delaware, Hawaii, Idaho, Maine, Montana, Nebraska, New Hampshire, North Dakota, Rhode Island, South Dakota, Vermont, West Virginia, and Wyoming.

With the growth of artificial intelligence and the need for data to fuel machine learning, cybersecurity is one of the most significant challenges faced by the healthcare industry in adopting AI. Here are a few thoughts on this growing security dilemma.

Key takeaways:

  • How health systems are responding to cyber threats and why it’s not working.

  • Why cloud computing makes things worse.

  • How health systems already know how to solve this problem (but just haven’t realized it yet).

Yikes... 'network outage...'

The cyberattack that led to a network outage at Scripps Health in San Diego, California was only the latest of many examples of healthcare’s immense cybersecurity vulnerabilities. In early May of 2021, cybercriminals stole data on nearly 150,000 patients, ranging from names and addresses to clinical information, treatments, and patient account numbers. To prevent further breaches, Scripps suspended access to its IT applications and endured a 3-week outage. This forced clinicians to go old school and operate with paper records, significantly impacting the quality of patient care. A similar cyberattack on Universal Health Services resulted in a $67M loss, not to mention the HIPAA fines yet to come.

This week we learned that Scripps is now being sued by its patients too.

Cyberattackers are ruthless; despite — or rather because of — the pandemic, hackers quickly recognized that health systems are sitting ducks. They are high-revenue, high-impact organizations with archaic security practices that lag 20 years behind the modern technology used by criminals.

Many health systems have responded by building walls and moats around their current technology to deal with this cyber threat. What they should be doing is modernizing and updating their tech stack to build tools based on present-day technology languages and platforms.

One example of health systems clinging to the past is the use of MUMPS, a programming language developed nearly 60 years ago and still used today in electronic health records. Mitigating the short-term risk at the expense of long-term viability is short-sighted at best. Building a wall and moat isn’t very effective when your enemies have drones, helicopters, and fighter jets.

75% of healthcare insiders are concerned that AI could threaten the security and privacy of patient data.

Essentially, our healthcare is based on a fragile network of brittle, poorly supported systems. Of particular concern are AI applications, where health systems are being asked for access to population-scale patient data by a growing number of AI companies. A recent survey from KPMG highlights both sides of the situation, with 91% of healthcare insiders believing AI is increasing access to care, but at the same time, 75% fear AI could threaten the security and privacy of patient data. 

The AWS boogeyman

When addressing AI security, our developer partners ask us: can’t we just do everything in the cloud? The short answer is no, and here’s why cloud-only is neither the best nor the most secure approach.

Integrating with AI vendors in the cloud means health systems lose visibility and control of patient data once it leaves their data center. This creates problems with auditing and monitoring while increasing the chances of a HIPAA breach. AI applications and vendors are, by definition, point-solution providers. These solutions are incredibly powerful and increasingly easy to build, but they lack the standalone security infrastructure to scale as the AI needs of a health system grow.

To a health system, each AI vendor represents:

  • PHI risk and loss of control, as hospital data is housed and managed outside of their secure environment
  • An opening in the health system’s firewall that provides data access to an outside vendor with unknown security practices
  • An additional group of external individuals with access to the hospital’s patient data and network, exposing an opportunity for hackers

Unfortunately, investment in security by the average AI solution provider is very low. We’ve found that the overwhelming majority of the 1,200-plus AI solution providers are startups that lack both the financial resources and the technical skillset to appropriately address these concerns. They tend to invest their capital in data scientists and new product launches, not in a dedicated engineering team focused on maintaining security and patching vulnerabilities.

What about de-identifying data sent to the cloud? Simply put, it doesn’t work well. Even the best efforts at de-identifying patient data have proven insufficient once the data leaves the health system. De-identification also introduces an additional layer of complexity that the health system IT team is then responsible for managing. For the data to be usable, they would need to de-identify the patient data, send it to the vendor for processing, re-identify the returned results, and place them into the appropriate workflow and interface. This process is ripe for both mistakes and cyberattacks.
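To make the fragility of that round trip concrete, here is a minimal sketch of the de-identify / re-identify loop described above. All names are illustrative assumptions, and the toy token mapping does not satisfy HIPAA Safe Harbor or Expert Determination; the point is simply that the health system must now operate and secure this extra machinery itself.

```python
import secrets

class DeidGateway:
    """Hypothetical on-premises gateway that strips and restores identifiers."""

    def __init__(self):
        self._token_to_mrn = {}  # re-identification map; must never leave the data center

    def deidentify(self, record):
        # Replace the medical record number with a random token before
        # the record leaves the health system.
        token = secrets.token_hex(8)
        self._token_to_mrn[token] = record["mrn"]
        return {"token": token, "image_ref": record["image_ref"]}

    def reidentify(self, result):
        # Restore the identity when the vendor's result comes back.
        mrn = self._token_to_mrn.pop(result["token"])
        return {"mrn": mrn, "finding": result["finding"]}

gateway = DeidGateway()
outbound = gateway.deidentify({"mrn": "MRN-001", "image_ref": "study-42"})
assert "mrn" not in outbound  # no direct identifier leaves the data center

# The vendor returns an AI finding keyed by the same token...
inbound = {"token": outbound["token"], "finding": "nodule detected"}
restored = gateway.reidentify(inbound)
```

Every arrow in that loop (the mapping table, the token handoff, the re-identification step) is a new surface the IT team must audit, which is exactly where mistakes and attacks creep in.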

SPOGs: almost as cool as SPACs

The best way to understand what might work for AI security is to look at the tools industry IT teams already use to handle management, configuration, and event monitoring for their enterprise devices and operating systems. Their standard solution has been single pane of glass (SPOG) management software, like Microsoft SCCM.

These tools create a dashboard that combines information from various devices — mobile devices, computers, laptops — into a unified display that can be used to quickly change accessibility and security settings. SPOG provides IT teams with a centralized portal that ensures role-based access and that everyone is working from the most up-to-date information.

Health systems can leverage the lessons learned from deploying SPOG tools to manage the increasing complexity of their AI initiatives. Going forward, they will need to support an assortment of AI vendors accessing and processing various pieces of patient data, with the output utilized by a variety of internal and external stakeholders in different workflows. The SPOG approach could manage this complexity, but it has yet to be implemented for AI applications, and it sadly won’t become a reality if health IT blindly trusts the myriad cloud-first AI vendors.
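What would a SPOG-style control plane for AI vendors look like? Here is a deliberately simplified sketch: one registry that records which data each vendor may touch, logs every access decision, and can revoke a vendor with a single switch. The class and field names are assumptions for illustration, not a description of any shipping product.

```python
from dataclasses import dataclass, field

@dataclass
class AIVendor:
    """Illustrative record of one AI vendor's data entitlements."""
    name: str
    allowed_data: set = field(default_factory=set)
    enabled: bool = True

class AIControlPlane:
    """Hypothetical single pane of glass for a health system's AI vendors."""

    def __init__(self):
        self._vendors = {}
        self.audit_log = []  # every access decision is recorded centrally

    def register(self, vendor):
        self._vendors[vendor.name] = vendor

    def authorize(self, vendor_name, data_type):
        vendor = self._vendors.get(vendor_name)
        ok = bool(vendor and vendor.enabled and data_type in vendor.allowed_data)
        self.audit_log.append((vendor_name, data_type, ok))
        return ok

    def revoke(self, vendor_name):
        # One switch cuts a vendor off everywhere, instead of hunting
        # through per-interface firewall rules.
        self._vendors[vendor_name].enabled = False

plane = AIControlPlane()
plane.register(AIVendor("nodule-ai", {"chest_ct"}))
assert plane.authorize("nodule-ai", "chest_ct")
assert not plane.authorize("nodule-ai", "mammogram")
plane.revoke("nodule-ai")
assert not plane.authorize("nodule-ai", "chest_ct")
```

The design choice mirrors what SPOG tools do for devices: centralize the policy and the audit trail so that adding a tenth AI vendor is a registry entry, not a tenth hole in the firewall.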

Cybersecurity is one of the hidden forces delaying AI adoption, and healthcare IT teams find themselves in a difficult position. They must juggle ever-growing cybersecurity risks, the diverse AI needs of their clinical and business stakeholders, and the low security standards and incompatibility of the typical AI vendor’s cloud-based service model. For health systems to truly harness the power of AI, these conflicting needs will have to be resolved.

Drop me a note if you’d like to discuss AI security challenges and the opportunities being created as the industry evolves.

Until next time,

Pelu Tran
CEO, Ferrum Health

Contact Us

CASE STUDY

ARA Health Specialists

Use the button below to download your free case study and learn how our approach to validation has improved the number of clinically significant findings in AI software.

CASE STUDY

Sutter Health

Use the button below to download your free case study and learn how our approach to validation has improved the number of clinically significant findings in AI software.