Jul 3, 2025
Can AI Coexist With HIPAA? How Collaboration Can Solve the Tech-Compliance Conundrum

By John Murray, senior director, SAP.
From the dawn of the internet to the advent of electronic health records, the healthcare industry has historically been slow to embrace new technologies and the improvements they can bring. One reason is the perceived risk associated with these technologies; another is the perceived cost of implementing them.
The rise of cloud computing and artificial intelligence presents healthcare providers — traditional ones like hospitals and health systems, along with medical device makers and other entities that meet the “provider” definition — with a similar tech conundrum. As new players join more conventional providers in reshaping the patient care ecosystem, opportunities abound to leverage the cloud, AI and other tools to reinvent healthcare business processes, services and the patient experience.
But with those upside opportunities come potential new risks and costs, including compliance challenges with HIPAA. The law doesn’t readily reconcile with technologies like AI or cloud computing, which weren’t around when it was enacted, nor with the growing diversity of entities now defined as patient care providers.
For this growing class of providers, the applications for AI and other intelligent technologies are promising indeed: predicting elevated health risks for patients, diagnosing conditions and recommending treatments. Generative AI (genAI) copilots driven by large language models could support decision-making about diagnoses and treatments, and genAI also shows great promise for improving the productivity of clinicians and clinical operations. As versatile as it is, AI can also help companies manage their compliance responsibilities — and the data required to meet them — across multiple jurisdictions.
What’s more, AI shows potential for connecting patient health with marketing, where, for example, based on an analysis of patient data, AI-powered capabilities serve shopping list recommendations to patients for vitamins, supplements, over-the-counter medications, etc., when they’re in-store or shopping online. This intelligent health-based marketing looks like a highly promising frontier for companies that can get it right.
Risk and reward
AI’s huge potential clearly isn’t lost on healthcare companies. In a 2024 McKinsey survey of 100 upper-level U.S. healthcare executives, 72% of respondents said their organizations are either already using genAI tools or testing them. Another 17% said they were planning to pursue genAI proofs of concept. And those AI investments have begun to pay off: about 60% of those who have implemented genAI solutions are either already seeing a positive ROI or expect to.
This growing embrace of AI and cloud computing introduces a whole new set of issues, risks and responsibilities that healthcare providers — and their regulators — must contemplate. Ensuring patient privacy and data security in compliance with HIPAA is perhaps the most pressing. HIPAA became law in 1996, well before Amazon, Google, the cloud and AI entered the tech mainstream, and well before medical device companies, insurers and the Walmarts of the world were providing some form of care directly to patients. As a result, its provisions aren’t equipped to discern how compliance responsibilities and liability should be shared among the various parties that now touch patient data, including covered entities and their business associates. And as the definition of “provider” expands, companies in many more industries may touch patient data in some way.
The increasing use of AI by patient care providers brings new categories of associated entities into the compliance mix: the hyperscalers that host the cloud-based AI capabilities and large language models providers are using, the software and technology companies that build and sell these systems, and the system integrators helping providers implement them. Who’s liable for a data breach? Who owns the risk of protecting patient information in this broader care ecosystem? It’s a true legal quagmire with few clear answers.
The perception of AI as an untested technology (at least in a healthcare context) is also part of the risk equation: how, for example, should potential bias and hallucination risk in large language models be addressed? The cost of implementing cloud-based AI and other tech infrastructure, along with internal resistance to embracing these new technologies, also factors into that equation.
Maximizing tech’s potential
A 2023 article in the Harvard Business Review contends that implementing cloud-based AI capabilities in a way that’s compliant will require extensive cooperation among stakeholders across the healthcare landscape. “Payers, health systems, and providers need to come to a common understanding about when it is appropriate to use an AI application, how it should be used, and how potential side effects will be identified and mitigated.”
That’s a necessary and worthwhile undertaking, the article’s author concludes. “It would be sadly ironic if the U.S. health sector lagged in reaping the benefits of this transformative new technology.”
The challenge here is a huge one: establishing widely accepted practices, standards and guardrails around cloud computing and AI so regulation can catch up to and keep pace with technology and the ethical and security issues it raises, as well as with the shifting patient care ecosystem.
The most viable vehicle for doing so, at least here in the U.S., could be to establish some kind of broad stakeholder consortium, perhaps led by the U.S. government (the FDA and/or HHS, for example), and including medical colleges/boards, along with covered entities and their business associates under HIPAA. The goal: develop consensus about how the responsibilities and liabilities associated with HIPAA will be divided and executed in the AI era.
A broader embrace of the cloud and AI within the patient care ecosystem enlarges the universe of covered entities and business associates that will likely touch patient data, or at least play some role, direct or indirect, in handling it. That in turn necessitates the formation of business networks within which data can flow unimpeded, transparently and securely among the relevant entities in the patient care ecosystem.
So, for instance, in the case of cell and gene therapies, a business network would let the various stakeholders handling a patient’s treatment, from drawing a blood sample to producing, delivering and administering the actual therapy, securely connect to share and analyze information in a timely, compliant way, yielding the best possible patient outcome. Each member of the value chain must therefore have the security and data-management capabilities in place to viably participate in such a network. The same concept would also apply to clinical networks.
As daunting as some of this may sound, technology like AI will not stand still. So neither should members of the patient care value chain in laying the necessary groundwork — standards, networks, etc. — to take full advantage of intelligent technologies in a way that’s compliant, profitable and most importantly, beneficial for patients.