The Problem
“We have 50+ AI vendors knocking on our door. Radiology wants one thing, pathology wants another, nursing wants three more. My team can barely keep up with regular security reviews, let alone AI-specific runtime evidence reviews.”
— Health System CISO
No Time
Security teams are stretched thin. Every new AI vendor means another 40-page questionnaire, with no time left to verify what the vendor's controls actually do at runtime.
No Playbook
Traditional security frameworks weren’t built for AI. SOC 2 doesn’t cover runtime guardrails, hallucination risk, or whether the vendor’s controls actually fired on your traffic.
Real Risk
AI incidents increasingly end in litigation. Recent cases show why signed evidence receipts for consent capture and runtime guardrails matter at the deposition table.
Beyond Compliance Theater
“Compliance” is too small a word for what you need. The runtime evidence question runs through the full AI vendor lifecycle: intake, governance, runtime monitoring, evidence-pack readiness, and action.
Intake
A runtime-evidence questionnaire for AI vendors. The questions to ask about runtime controls, signed receipts, and evidence packs before a clinical team commits to a pilot.
Governance
Clear policies for AI use. Which vendors are approved, for which use cases, with which data access — and which controls must run at runtime to keep that approval.
Runtime Monitoring
Ongoing visibility into vendor AI behavior through signed evidence receipts. Are the controls the vendor described actually firing on your traffic? Is the model behaving inside the agreed scope?
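The core check behind a signed evidence receipt can be sketched in a few lines. This is a minimal illustration, not a GLACIS specification: the receipt fields and the shared HMAC key are assumptions (a production scheme would more likely use an asymmetric key pair so the vendor cannot be impersonated).

```python
import hashlib
import hmac
import json

def sign_receipt(receipt: dict, key: bytes) -> str:
    """Sign the canonical JSON form of a receipt with HMAC-SHA256."""
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_receipt(receipt, key), signature)

# Hypothetical receipt: field names are illustrative only.
receipt = {
    "vendor": "example-ai-vendor",
    "control": "phi_redaction",
    "fired": True,
    "request_id": "req-1234",
    "timestamp": "2025-06-01T12:00:00Z",
}

key = b"shared-demo-key"  # demo only; real deployments would use signing keys
sig = sign_receipt(receipt, key)
print(verify_receipt(receipt, sig, key))                 # True: receipt intact
print(verify_receipt(dict(receipt, fired=False), sig, key))  # False: tampered
```

The point of the canonical-JSON step is that any change to the receipt, including flipping `fired` from `True` to `False`, invalidates the signature, which is what makes a receipt stronger than a vendor attestation.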
Evidence Packs
Evidence packs that map to HIPAA, state AI laws, and your AI committee charter — signed runtime receipts, not vendor attestations.
Action
When something goes wrong, you need a defensible record fast. Signed receipts, incident response, vendor remediation, evidence handoff to legal and risk.
How GLACIS Helps
Runtime-Evidence Vendor Questionnaire
A repeatable process for asking AI vendors the runtime questions a model card and SOC 2 letter don’t cover.
- Runtime-controls questionnaire for AI vendors
- Risk scoring methodology aligned to clinical use
- Red flags to watch for in vendor responses
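A risk-scoring methodology like the one above can be as simple as weighted questionnaire answers mapped to review tiers. The factors, weights, and cutoffs below are illustrative assumptions, not a published GLACIS rubric:

```python
# Hypothetical risk factors and weights; tune to your own clinical context.
WEIGHTS = {
    "phi_access": 3,           # vendor touches protected health information
    "clinical_decision": 3,    # output influences clinical decisions
    "no_signed_receipts": 2,   # vendor cannot produce runtime evidence receipts
    "model_updates_silent": 1, # vendor ships model updates without notice
}

def risk_score(answers: dict) -> int:
    """Sum the weights of every risk factor the vendor's answers trigger."""
    return sum(w for factor, w in WEIGHTS.items() if answers.get(factor))

def risk_tier(score: int) -> str:
    """Map a raw score to a review tier (illustrative cutoffs)."""
    if score >= 6:
        return "full committee review"
    if score >= 3:
        return "security review"
    return "standard intake"

answers = {"phi_access": True, "no_signed_receipts": True}
score = risk_score(answers)
print(score, risk_tier(score))  # 5 security review
```

The useful property is repeatability: two reviewers scoring the same vendor responses land in the same tier, which is what keeps intake from becoming ad hoc.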
Evidence-Pack Readiness
Know what evidence to demand. Policy docs and pen tests aren’t enough — the vendor needs to show signed runtime evidence receipts.
- Signed evidence receipts (proof controls fired, not just that they exist)
- Model performance and refusal-rate evidence
- Data-handling proofs and zero clinical payload egress
Governance Policy Templates
Policy frameworks tailored for health system AI governance — built around runtime evidence, not paper attestations.
- AI acceptable use policy
- Vendor approval workflow with runtime-evidence gates
- Incident response playbook with signed receipts
Ongoing Support
AI vendor governance isn’t a one-time project. Regulations change, vendors update their systems, and runtime behavior drifts. We help you stay current.
- Regulatory updates (Colorado, EU AI Act, state laws)
- Vendor re-assessment triggers
- Best practices from peer health systems
What’s Coming
State and federal regulators are moving fast. Health systems that deploy vendor AI without runtime evidence of governance are taking on significant liability.
| Regulation | Impact | Date |
|---|---|---|
| Colorado AI Act | Impact assessments for high-risk AI in healthcare | June 2026 |
| Texas HB 1709 | Written disclosure to patients when AI is used | Jan 2026 |
| EU AI Act | High-risk classification for most healthcare AI | Aug 2026 |
| HHS HIPAA Update | AI systems must be included in the security risk analysis | Proposed |
Agent Runtime Security & Evidence Sprint
Run an evidence-pack readiness sprint on a vendor’s AI offering.
Fixed scope. 10 business days. One vendor offering you’re actually about to sign — runtime controls scoped against your environment, and an evidence pack your AI committee, security team, and legal can review.
$48k fixed — $36k for the first three founder design partners (100% upfront, reference permission).