Why Your SOC 2 Won't Protect You From AI Risk

The dangerous assumption: "We have SOC 2 Type II and HITRUST certification. Our AI systems are compliant." I hear this constantly from healthcare vendors. It reflects a fundamental misunderstanding of what these frameworks actually cover—and what they don't.

What SOC 2 and HITRUST Actually Cover

SOC 2 and HITRUST are excellent frameworks for IT security. They address:

  • Access controls and identity management
  • Network security and encryption
  • Change management and deployment processes
  • Incident response and business continuity
  • Physical security and data center controls

These are critical. Every healthcare organization should require them from vendors. But they were designed for traditional IT systems—databases, web applications, infrastructure. They weren't designed for AI.

The AI-Specific Risks They Miss

AI systems introduce risks that traditional IT compliance frameworks weren't built to address:

Risk Category              SOC 2 / HITRUST     AI-Specific Need
Model Hallucinations       Not addressed       Guardrail execution evidence
Prompt Injection           Not addressed       Input validation attestation
Training Data Bias         Not addressed       Bias testing documentation
Model Drift                Not addressed       Performance monitoring
Decision Explainability    Not addressed       Inference-level logging
Data Encryption            Covered             Covered
Access Controls            Covered             Covered
Incident Response          Covered             AI-specific extension needed

The Specific Gaps

1. No Inference-Level Accountability

SOC 2 requires logging of system access and administrative actions. It doesn't require logging of individual AI inferences. When an AI makes a clinical decision, SOC 2 doesn't mandate that you capture what went in, what came out, and why.
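
To make that concrete, here's a minimal sketch of what an inference-level audit record might capture. The InferenceRecord shape and field names are illustrative, not from any standard; the point is that every inference gets its own durable record, with hashes standing in for raw content so PHI never lands in the audit log.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class InferenceRecord:
    """One durable audit record per AI inference: what went in, what came out, when."""
    model_id: str        # which model served the request
    model_version: str   # pinned version, not "latest"
    prompt_sha256: str   # hash of the input, so raw PHI never lands in the log
    output_sha256: str   # hash of the output
    timestamp: str       # UTC time of the inference

def log_inference(model_id: str, model_version: str, prompt: str, output: str) -> InferenceRecord:
    record = InferenceRecord(
        model_id=model_id,
        model_version=model_version,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In production this goes to append-only storage, not stdout.
    print(json.dumps(asdict(record)))
    return record
```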

2. No Guardrail Verification

You might have guardrails. SOC 2 might even audit that they exist. But it doesn't require evidence that they executed for specific inferences. "We have a content filter" is very different from "Here's proof the content filter ran for this request."
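
What does that proof look like? One minimal approach (the names and the guardrail itself are hypothetical) is to wrap every guardrail call so it emits a per-request record of what ran, which version, and what it decided:

```python
import hashlib
from datetime import datetime, timezone
from typing import Callable

def run_guardrail_with_evidence(request_id: str, name: str, version: str,
                                check: Callable[[str], bool], payload: str) -> dict:
    """Run a guardrail and return evidence that it executed for this request."""
    verdict = check(payload)  # True = passed, False = blocked
    return {
        "request_id": request_id,
        "guardrail": name,
        "guardrail_version": version,
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "verdict": "pass" if verdict else "block",
        "executed_at": datetime.now(timezone.utc).isoformat(),
    }

# A trivial filter standing in for a real guardrail.
evidence = run_guardrail_with_evidence(
    request_id="req-123",
    name="content_filter",
    version="2.4.0",
    check=lambda text: "ssn" not in text.lower(),
    payload="Summarize this discharge note for the care team.",
)
```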

3. No Model Behavior Documentation

SOC 2 audits your change management process. It doesn't audit whether your AI model behaves consistently, whether it's drifting over time, or whether you can reproduce its behavior for a specific input.
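
A toy illustration of why this matters: drift detection is a continuous control, not a point-in-time audit finding. This sketch uses a deliberately simple mean-shift check on confidence scores; real deployments would use richer tests (PSI, KS) across many signals.

```python
from statistics import mean

def drift_alert(baseline_scores: list[float], recent_scores: list[float],
                threshold: float = 0.1) -> bool:
    """Flag drift when mean model confidence shifts beyond a tolerance."""
    return abs(mean(recent_scores) - mean(baseline_scores)) > threshold

# Confidence scores sliding downward over time: a drifting model.
baseline = [0.91, 0.88, 0.93, 0.90]
recent = [0.74, 0.71, 0.78, 0.69]
print(drift_alert(baseline, recent))  # True -> investigate before care is affected
```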

4. No Third-Party Verifiability

SOC 2 produces an attestation that auditors verified your controls. But that attestation doesn't let a third party verify specific AI decisions. It's a statement about your processes, not evidence of specific executions.
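
One mechanism that closes this gap is a tamper-evident hash chain over the audit records. A production system would add digital signatures or external timestamping, but even this sketch shows the idea: a third party holding the records can recompute the chain themselves, and any edit or deletion breaks every later hash.

```python
import hashlib
import json

GENESIS = "0" * 64

def chain_records(records: list[dict]) -> list[dict]:
    """Link audit records so any edit or deletion breaks every later hash."""
    prev_hash, chained = GENESIS, []
    for record in records:
        body = json.dumps(record, sort_keys=True) + prev_hash
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        chained.append({**record, "prev_hash": prev_hash, "hash": entry_hash})
        prev_hash = entry_hash
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Anyone holding the records can recompute the chain without trusting us."""
    prev_hash = GENESIS
    for entry in chained:
        record = {k: v for k, v in entry.items() if k not in ("prev_hash", "hash")}
        body = json.dumps(record, sort_keys=True) + prev_hash
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```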

The bottom line: SOC 2 tells you the vendor has good IT hygiene. It doesn't tell you their AI won't hallucinate, leak data through prompts, or make inexplicable decisions that harm patients.

What Healthcare Organizations Actually Need

For AI specifically, healthcare organizations need evidence that addresses the unique risks of machine learning systems:

  • Guardrail execution traces — proof that safety controls ran for specific inferences
  • Model version attestation — cryptographic proof of which model version processed a request (see the sketch after this list)
  • Decision reconstruction capability — ability to recreate the context for any AI output
  • Bias and fairness documentation — evidence of testing across demographic groups
  • Third-party verifiable evidence — not just attestations, but proof that can be independently validated
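
As a sketch of model version attestation, assuming the model ships as a weights artifact on disk: hashing the artifact yields a version identifier that can't silently drift the way a human-assigned tag can.

```python
import hashlib
from pathlib import Path

def model_fingerprint(weights_path: str) -> str:
    """Hash the exact model artifact so every inference record can cite it."""
    digest = hashlib.sha256()
    with Path(weights_path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

# Computed once at model load, then stamped into every inference record:
# a version identifier that can't silently change the way a tag like "v2" can.
```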

The Framework Convergence

The good news: new frameworks are emerging to address AI-specific risks:

  • NIST AI RMF — the emerging standard for AI risk management
  • ISO 42001 — AI management system certification
  • EU AI Act — regulatory requirements for high-risk AI (including healthcare)

The challenge: these frameworks require evidence that most organizations can't currently produce. The infrastructure to capture, store, and verify AI behavior at the inference level doesn't exist in most deployments.

Beyond SOC 2

Our white paper "The Proof Gap in Healthcare AI" details exactly what AI-specific evidence looks like—and the four pillars every healthcare organization should demand.
What to Ask Your Vendors

When evaluating AI vendors, don't stop at "Are you SOC 2 certified?" Ask:

  • Can you show me which guardrails executed for a specific inference?
  • How do you prove model version for historical requests?
  • Can a third party verify your AI's behavior without trusting your internal logs?
  • What's your mapping to NIST AI RMF controls?
  • How are you preparing for EU AI Act Article 12 requirements?

The vendors who can answer these questions are building for the future. The vendors who point to their SOC 2 report are building for 2019.

The Complementary Approach

To be clear: SOC 2 and HITRUST remain essential. You should absolutely require them. They're table stakes for any vendor handling sensitive data.

But for AI systems, they're the beginning, not the end. Healthcare organizations need both traditional IT compliance AND AI-specific evidence. The vendors who understand this distinction are the ones worth talking to.

For the complete framework on what to demand from AI vendors, read our white paper.