Regulated clinical AI

Evidence infrastructure for AI-enabled medical products.

Glacis helps clinical AI teams generate runtime evidence for PCCP-ready change records, post-market monitoring, drift review, and control-execution proof — without moving sensitive clinical data out of their infrastructure.

Why now

Regulated AI medical products need proof from real operation, not after-the-fact documentation.

AI medical products change, drift, touch clinical workflows, and generate outputs that reviewers and health-system buyers will question. Screenshots and retrospective logs are weak evidence when the important question is whether the right controls ran at the right time.

Glacis turns consequential runtime events into signed receipts, then assembles those receipts into evidence packs for regulatory review, PCCP updates, post-market monitoring, and internal quality review.

What gets instrumented

Runtime evidence for the AI lifecycle.

Model-change evidence

Version, policy, threshold, and deployment context tied to the behavior that changed.

Control execution

Which guardrail, review rule, redaction, escalation, or block executed at decision time.

Drift and near misses

Operational patterns that show where performance, population, or workflow behavior is moving.

Post-market proof

Receipts that support lifecycle management, health-system review, and audit readiness.

Runtime artifact

Receipts first. Evidence packs second.

Receipts are generated at runtime. Evidence packs are assembled from receipts.

That distinction keeps the evidence grounded in what the system actually did, not in a document created after the fact.
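
In code, that split looks roughly like this. A minimal Python sketch: make_receipt, assemble_pack, and the HMAC signing are illustrative stand-ins, not the Glacis API.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # stand-in for a real key held inside your environment

def make_receipt(event: dict) -> dict:
    # Runtime: one signed receipt per consequential event, minted as it happens.
    body = {"ts": time.time(), "event": event}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return body

def assemble_pack(receipts: list[dict], purpose: str) -> dict:
    # Review time: packs are assembled purely from receipts. No new evidence
    # is created here, which is what keeps the pack grounded in what ran.
    return {"purpose": purpose, "receipt_count": len(receipts), "receipts": receipts}

receipt = make_receipt({"control": "clinical_review_threshold", "decision": "escalate"})
pack = assemble_pack([receipt], purpose="PCCP change record")
```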

Workflow:      AI medical product model update
Control:       PCCP change rule and clinical review threshold
Decision:      Escalated for review
Receipt:       Signed timestamp, policy hash, model version
Evidence Pack: Regulatory review and lifecycle management artifact
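
A receipt for the row above might carry fields like these. The field names and truncated values are illustrative, not the Glacis schema:

```python
receipt = {
    "ts": "2025-06-12T14:03:22Z",          # signed timestamp
    "workflow": "model_update",
    "control": "pccp_change_rule",
    "decision": "escalated_for_review",
    "policy_hash": "sha256:9f2c...",       # hash of the PCCP rule that ran
    "model_version": "cad-model-2.4.1",
    "threshold": {"metric": "sensitivity_delta", "value": -0.013, "limit": -0.010},
    "sig": "ed25519:MEUCIQ...",            # signature over the fields above
}
```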

Sensitive environments

Built for PHI and proprietary clinical context.

Glacis runs inside your infrastructure. Local runtime controls generate signed receipts attesting that controls executed and that model behavior stayed within defined boundaries — without moving sensitive clinical payloads out of your environment. Each receipt carries verification metadata, control outcomes, model and version context, threshold decisions, drift signals, and evidence commitments designed to support review without exposing protected clinical content.

Local runtime controls

Observe, allow, block, redact, escalate, or require review at the AI boundary — executed inside your stack.
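
As a sketch of what that decision surface can look like in-process (the names and the policy are illustrative, not the Glacis API):

```python
from enum import Enum

class Action(Enum):
    OBSERVE = "observe"
    ALLOW = "allow"
    BLOCK = "block"
    REDACT = "redact"
    ESCALATE = "escalate"
    REQUIRE_REVIEW = "require_review"

def decide(event: dict) -> Action:
    # Illustrative policy, evaluated at the AI boundary inside your stack:
    # redact detected PHI, hold low-confidence outputs for clinician review,
    # otherwise allow. Every branch would still emit a receipt.
    if event.get("phi_detected"):
        return Action.REDACT
    if event.get("confidence", 1.0) < event.get("review_threshold", 0.0):
        return Action.REQUIRE_REVIEW
    return Action.ALLOW
```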

Signed evidence receipts

Every consequential decision can carry tamper-evident proof of what ran and which boundary held.

No sensitive payload egress

Hashes, signatures, and verification metadata travel for review — without prompts, outputs, PHI, customer data, code, credentials, or proprietary context leaving your stack.
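
A minimal sketch of that pattern, assuming a symmetric key that never leaves your environment (a production system would use asymmetric signatures so reviewers can verify without the key):

```python
import hashlib
import hmac
import json

KEY = b"demo-key"  # held inside your environment; never egresses

def egress_record(payload: bytes, meta: dict) -> dict:
    # Only the digest and control metadata are eligible to leave the stack.
    record = {"payload_sha256": hashlib.sha256(payload).hexdigest(), **meta}
    canonical = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(KEY, canonical, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

rec = egress_record(b"<clinical note text>", {"control": "redaction", "outcome": "executed"})
assert verify(rec)            # reviewer checks integrity from metadata alone
assert "payload" not in rec   # the note itself never left
```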

Secondary route · Agent Runtime Security & Evidence Sprint

A 10-day path to PCCP-readiness and evidence-pack scoping.

Clinical AI products carry an agentic surface too — ambient scribes, clinical chatbots, decision-support copilots, and tool-calling workflows all sit inside the same prompt-injection and tool-misuse threat model as horizontal agents. The Sprint is a fixed-scope way to map that surface and stand up the runtime evidence behind it.

What’s in scope

  • Agentic surface mapping for the in-scope clinical AI workflow
  • Prompt-injection and tool-misuse review
  • Runtime control plan and evidence gap map
  • Signed receipt and evidence-pack demonstration on your workflow

Shape

  • Fixed scope: $48k, 10 business days
  • Clinical buyers often frame this as PCCP-readiness or evidence-pack scoping
  • Runs against one regulated AI workflow you nominate
  • Outputs feed change records, post-market monitoring, and drift review

Bring one regulated AI workflow.

We’ll map the runtime evidence your clinical AI product needs for change records, post-market monitoring, drift review, and control-execution proof — without prompts, outputs, PHI, customer data, code, credentials, or proprietary context leaving your stack.

Assess clinical AI evidence readiness