August 19, 2025

AI in Healthcare: Insights on Staying Ahead Without Risking Everything

The Shadow AI Problem

Picture this scenario: A chief nursing officer attends a conference and discovers an AI-powered workforce management system that promises to solve staffing challenges. Excited by the potential, they purchase and deploy the solution without involving IT security. Meanwhile, the hospital's 15- to 20-year-old software systems are quietly receiving AI updates from vendors, embedding machine learning capabilities that no one in leadership knows about.

This is the current state of AI adoption in healthcare: a perfect storm of well-intentioned innovation and inadequate governance. Department heads across hospitals are making technology decisions in isolation, each trying to solve their specific pain points while inadvertently creating organization-wide security risks.

The Chief Information Security Officer (CISO) often learns about these AI deployments only after they're operational, facing the uncomfortable reality that patient data may already be flowing to third-party AI vendors without proper oversight or consent mechanisms.

The Data Synthesis Dilemma

The fundamental challenge with healthcare AI lies in how these systems operate. Unlike traditional software that processes data and returns results, AI systems often synthesize patient information with external datasets to improve their models. This creates several unprecedented risks:

Data Ownership Confusion: When patient data gets synthesized with third-party AI training data, determining ownership becomes nearly impossible. Traditional cloud contracts include provisions for data return and destruction upon contract termination, but AI systems can't simply "return" synthesized data.

HIPAA Compliance Gaps: Healthcare AI vendors may be accessing Electronic Medical Records and patient data without proper consent mechanisms. The current regulatory framework, while comprehensive, contains gray areas regarding AI usage that leave organizations vulnerable to compliance violations.

Re-identification Risks: Perhaps most concerning, AI systems can potentially re-identify anonymized patient data by correlating it with large external datasets. This capability turns supposedly anonymous information back into identifiable patient records, creating new categories of privacy violations.
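To make this concrete, the sketch below shows the classic linkage-attack pattern on toy data: joining a "de-identified" clinical extract to an external public dataset on shared quasi-identifiers. All records, column names, and values here are hypothetical; AI systems simply perform this kind of correlation at far greater scale and subtlety.

```python
# A minimal linkage-attack sketch on hypothetical data: re-attaching
# identities to "anonymized" clinical records via quasi-identifiers.
import pandas as pd

# De-identified clinical data: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
clinical = pd.DataFrame({
    "zip": ["02138", "02139"],
    "birth_date": ["1954-07-31", "1961-02-14"],
    "sex": ["F", "M"],
    "diagnosis": ["hypertension", "diabetes"],
})

# External public dataset containing names plus the same quasi-identifiers.
external = pd.DataFrame({
    "name": ["J. Doe", "R. Roe"],
    "zip": ["02138", "02139"],
    "birth_date": ["1954-07-31", "1961-02-14"],
    "sex": ["F", "M"],
})

# Joining on the quasi-identifiers re-identifies the clinical records.
reidentified = clinical.merge(external, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```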

The Scale and Complexity Challenge

The scope of this problem becomes clear when examining the technology landscape of modern healthcare organizations. A typical 1,000-bed hospital operates between 200 and 400 clinical and healthcare applications. Large health systems with 35,000 to 45,000 employees often run over 600 clinical applications simultaneously.

Each of these applications represents a potential AI integration point. Vendors are embedding AI capabilities into existing solutions, upgrading legacy systems with machine learning features, and introducing entirely new AI-native platforms. The sheer volume makes comprehensive oversight nearly impossible with traditional governance approaches.

Adding to this complexity, in most organizations over 90% of AI exposure lies outside their direct control. Healthcare systems consume AI services from dozens of vendors, each with its own data handling practices, security protocols, and risk profiles.

Agentic AI: Automation at Warp Speed

The emergence of agentic AI (systems that can autonomously perform tasks and make decisions) introduces another layer of risk. These systems excel at integrating disparate data sources and delivering information at unprecedented speeds. In healthcare environments with hundreds of applications, agentic AI can quickly access and synthesize information from multiple systems to answer queries or automate workflows.

While this capability offers tremendous operational benefits, it also amplifies the potential for damage when things go wrong. A recent breach affecting 450,000 patient records occurred when an AI-powered workflow automation system had misconfigured integrations. The speed and automation that make agentic AI valuable also allowed the breach to touch far more records than traditional manual processes would have. AI can deliver bad consequences far faster than traditional integrations, which might take months to implement and secure properly.
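One mitigation worth considering is capping the blast radius of any single automated action. The sketch below is a minimal, hypothetical guardrail; the fetch_records integration and the threshold are illustrative assumptions, not features of any particular product.

```python
# A minimal blast-radius guardrail for an agentic workflow: cap how many
# records any single automated action can touch, so a misconfigured
# integration fails loudly instead of sweeping up an entire table.

MAX_RECORDS_PER_ACTION = 100  # illustrative threshold; tune per workflow

class BlastRadiusExceeded(Exception):
    """Raised when an automated action returns more records than allowed."""

def guarded_fetch(query: str, fetch_records) -> list:
    """Run an integration call (fetch_records is a hypothetical callable),
    refusing to hand results to the agent if they exceed the cap."""
    records = fetch_records(query)
    if len(records) > MAX_RECORDS_PER_ACTION:
        # Halt and alert rather than letting the agent proceed at speed.
        raise BlastRadiusExceeded(
            f"query returned {len(records)} records; cap is {MAX_RECORDS_PER_ACTION}"
        )
    return records
```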

The Governance Gap

Perhaps the most frustrating aspect of the current situation is the disconnect between policy and practice. Many healthcare organizations have developed comprehensive AI governance policies, complete with acceptable use guidelines, procurement requirements, and risk assessment frameworks. These policies often look impressive on paper but fail in implementation.

The reality is that healthcare organizations lack the tools and processes to monitor compliance with their AI policies. Employees routinely bypass sanctioned enterprise AI tools in favor of free consumer versions, creating uncontrolled data exposure while organizations waste money on unused enterprise licenses.

IT leaders report purchasing hundreds of enterprise AI licenses, only to discover that employees prefer free consumer AI services on their personal devices. This shadow IT pattern means patient data may be flowing to unsecured AI platforms while expensive, properly configured enterprise tools sit idle.

A Framework for the Future

Addressing healthcare AI security requires a multi-faceted approach that balances innovation with protection. Here’s a framework to guide your approach:

Start with a Basic Assumption: Security leaders should assume AI is already proliferating throughout their organizations. Rather than waiting to detect every instance of AI usage before acting, organizations should implement governance frameworks that accommodate the reality of widespread adoption.

Implement Comprehensive Risk Assessment: The National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework that provides guidance for evaluating AI systems across multiple risk dimensions: cybersecurity, privacy, data security, and compliance.
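As a starting point, an AI inventory can score each system along those four dimensions. The sketch below is a deliberately simplified risk-register entry loosely inspired by that idea; it is not an implementation of the NIST framework, and the system name, vendor, and scoring scale are hypothetical.

```python
# A simplified AI risk-register entry scoring the four dimensions named
# above. Illustrative only; a real assessment is far richer than this.
from dataclasses import dataclass, field

DIMENSIONS = ("cybersecurity", "privacy", "data_security", "compliance")

@dataclass
class AIRiskEntry:
    system_name: str
    vendor: str
    handles_phi: bool
    # 1 (low) to 5 (high) per dimension; the scale is illustrative.
    scores: dict = field(default_factory=dict)

    def overall(self) -> int:
        """Worst-dimension scoring: a system is as risky as its weakest area."""
        return max(self.scores.get(d, 0) for d in DIMENSIONS)

entry = AIRiskEntry(
    system_name="workforce-scheduling-ai",  # hypothetical system
    vendor="ExampleVendor",                 # hypothetical vendor
    handles_phi=True,
    scores={"cybersecurity": 3, "privacy": 4, "data_security": 4, "compliance": 2},
)
print(entry.system_name, "overall risk:", entry.overall())
```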

Establish Visibility and Monitoring: Organizations need tools and processes to discover and catalog AI usage across their environments. This includes both sanctioned AI deployments and shadow AI implementations that may be operating without oversight.
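One pragmatic discovery technique is mining existing web proxy or DNS logs for traffic to known AI services. The sketch below assumes a hypothetical log format and domain watchlist; in practice the list would come from a maintained feed rather than a hand-written set.

```python
# A minimal shadow-AI discovery sketch over web proxy logs, assuming a
# hypothetical "timestamp user destination_host" line format.

AI_SERVICE_DOMAINS = {
    "chat.example-ai.com",   # hypothetical consumer AI endpoints
    "api.example-llm.io",
}

def find_shadow_ai(log_lines):
    """Yield (user, host) pairs for traffic to known AI services."""
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, host = parts[1], parts[2]
        if host in AI_SERVICE_DOMAINS:
            yield user, host

sample = [
    "2025-08-19T10:02:11 rn_jones chat.example-ai.com",
    "2025-08-19T10:03:45 dr_smith intranet.hospital.local",
]
for user, host in find_shadow_ai(sample):
    print(f"unsanctioned AI use: {user} -> {host}")
```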

Focus on Data Classification and Control: Since AI systems are inherently data-hungry, healthcare organizations must implement robust data classification and access controls that can adapt to AI's dynamic data requirements.
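A simple illustration of this principle is a pre-egress check that scans outbound text for obvious identifier patterns before it reaches an external AI service. The regexes below are illustrative (a US-style SSN and a hypothetical MRN format); real PHI classification requires far more than pattern matching.

```python
# A minimal pre-egress PHI check: block prompts that contain obvious
# identifier patterns before they leave for an external AI service.
import re

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # hypothetical format
}

def phi_findings(text: str) -> list:
    """Return the names of PHI patterns detected in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

prompt = "Summarize the chart for MRN: 00482913, SSN 123-45-6789."
hits = phi_findings(prompt)
if hits:
    print("blocked: prompt contains", ", ".join(hits))
```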

Educate Clinical Staff: Non-technical clinicians need to understand the security implications of AI tools. This education should focus on concrete consequences rather than abstract risk concepts, helping healthcare workers understand how AI security failures could impact patient care and organizational stability. Use stories to highlight the real-world impact of inadequate security practices.

Conclusion

Healthcare organizations stand at a critical juncture. AI offers unprecedented opportunities to improve patient outcomes, streamline operations, and enhance clinical decision-making. However, the current pace of AI adoption is outstripping organizational capacity to implement proper security and governance controls.

The organizations that successfully navigate this challenge will be those that acknowledge the reality of widespread AI adoption and implement governance frameworks designed for this new landscape. Those that continue to approach AI security with traditional tools and methodologies risk exposing patient data, violating regulatory requirements, and ultimately compromising the trust that is fundamental to healthcare.

The question isn't whether AI will transform healthcare. It already has. The question is whether healthcare organizations will implement the security and governance frameworks needed to ensure this transformation protects rather than endangers the patients they serve. And with AI, the first step is knowing when it’s in your environment by getting visibility into its use, whether sanctioned or otherwise.

What are your numbers?

Get an assessment of AI interactions with trusted internal resources. Measure risk based on context, and identify controls to mitigate threats.

Request a Live Product Demo Now
