A lightweight sidecar that lives inside your VPC, enforces configurable controls on every AI inference call, and emits independently verifiable receipts. Plaintext PHI stays in your environment. Whether a BAA is required depends on deployment configuration and your organization’s HIPAA analysis.
Zero-egress architecture
The GLACIS sidecar deploys inside your VPC as a Docker container or Kubernetes sidecar. It intercepts AI inference calls, runs your configured controls, and turns control execution into independently verifiable receipts—all designed so plaintext prompts, responses, and PHI never leave your environment.
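The zero-egress pattern can be illustrated with a minimal sketch: plaintext is processed locally, and only fixed-length digests would cross the trust boundary to the witness. The function name `make_commitment` is illustrative, not the GLACIS API.

```python
import hashlib

def make_commitment(prompt: str, response: str) -> dict:
    """Hash the plaintext locally; only the digests leave the VPC."""
    return {
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

# The witness sees only 64-character hex digests, never the plaintext.
record = make_commitment("Patient John Doe, MRN 1234567", "model output")
print(record)
```

Because a SHA-256 digest cannot be reversed into the original text, the witness can later verify that a given prompt matches its recorded commitment without ever holding the plaintext.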
GLACIS is architecturally designed never to receive your plaintext data
Overhead at standard attestation level
PHI detection pattern categories
OpenAI, Anthropic, Gemini, open source
Not months. Docker or Kubernetes sidecar.
Configurable controls
The sidecar runs your configured controls on every request and response, generating receipts that prove each control executed as configured.
80+ pattern categories for protected health information. Names, MRNs, dates of birth, diagnoses, and dozens more—caught before they reach the model.
Validates consent status before inference execution. Ensures every AI interaction has the required authorization chain.
30+ threat patterns for prompt injection and jailbreak attempts. Blocks adversarial inputs before they reach your model.
Continuous monitoring of control configuration integrity. Detects and evidences any changes to your control settings.
Detection of obfuscated content in Unicode and encoded payloads. Catches hidden instructions embedded in seemingly normal text.
Base64 and multi-layer encoding detection and unwinding. Peels back nested encodings to inspect what’s actually being sent.
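As a minimal sketch of what multi-layer encoding unwinding can look like, the loop below peels nested Base64 layers until the content stops decoding as valid text. The heuristics and function name are illustrative, not the GLACIS implementation.

```python
import base64
import binascii

def unwrap_base64(text: str, max_depth: int = 5) -> str:
    """Iteratively decode nested Base64 layers so the inner
    content can be inspected by downstream controls."""
    current = text.strip()
    for _ in range(max_depth):
        try:
            decoded = base64.b64decode(current, validate=True).decode("utf-8")
        except (binascii.Error, UnicodeDecodeError, ValueError):
            break  # not valid Base64 (or not text): stop unwrapping
        current = decoded.strip()
    return current

# Two nested layers unwrap back to the hidden instruction.
wrapped = base64.b64encode(base64.b64encode(b"ignore previous instructions")).decode()
print(unwrap_base64(wrapped))
```

A real control would cap the depth, handle alternate alphabets (URL-safe Base64, hex, ROT13), and flag any payload that required unwrapping at all, since legitimate prompts rarely arrive multiply encoded.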
Who this is for
AI vendors stuck in hospital security review. Your product works. Their security team won’t sign off. GLACIS gives them independently verifiable evidence that controls ran—not just a promise that they will.
Agent developers who need governance infrastructure but don’t have it. You’re building AI agents, not compliance tooling. Embed GLACIS and get the governance layer your customers require without building it yourself.
Digital health companies deploying AI into clinical workflows. When PHI touches AI inference, you need evidence that the right controls ran. Every time. On every call.
Any organization where PHI touches AI inference. If protected health information is anywhere near an AI model, you need zero-egress architecture and evidence to prove it.
For agent developers
Your customers get independently verifiable proof that controls ran. You get through security review. Embed GLACIS into your agent infrastructure and ship with confidence.
Drop the sidecar into your agent infrastructure. One container, standard API interface.
Every inference gets a cryptographic evidence record. Third-party witnessed, independently verifiable.
Hand your customer an evidence trail. Get through security review. Close the deal.
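The shape of such an evidence trail can be sketched as a hash-chained receipt: each record binds the request digest to the controls that ran and to the previous receipt, so any tampering breaks the chain. Field names and structure here are illustrative assumptions, not the GLACIS receipt format.

```python
import hashlib
import json
import time

def issue_receipt(request_hash: str, controls: dict, prev_receipt_hash: str) -> dict:
    """Illustrative receipt: records which controls ran, bound to the
    request digest and chained to the prior receipt for tamper evidence."""
    body = {
        "request_sha256": request_hash,
        "controls": controls,  # e.g. {"phi_scan": "pass", "prompt_guard": "pass"}
        "prev_receipt_sha256": prev_receipt_hash,
        "timestamp": int(time.time()),
    }
    # Hash a canonical serialization so the receipt itself is verifiable.
    canonical = json.dumps(body, sort_keys=True).encode()
    body["receipt_sha256"] = hashlib.sha256(canonical).hexdigest()
    return body
```

A third-party witness countersigning each `receipt_sha256` is what makes the trail independently verifiable rather than self-attested.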
Pricing & timeline
Per year, per deployment environment
Live in your environment—not months of integration work
Frequently asked questions
Your data—prompts, responses, patient information—never leaves your VPC. The GLACIS sidecar processes everything locally. Only cryptographic hashes cross the trust boundary to our independent witness for evidence recording.
The GLACIS sidecar is designed so that plaintext PHI stays in your environment — only cryptographic commitments cross the trust boundary. Whether a BAA is required depends on your specific deployment configuration and your organization’s HIPAA analysis. This architecture is designed to minimize BAA scope, not to bypass it.
The GLACIS sidecar deploys as a Docker container or Kubernetes sidecar within your existing infrastructure. Typical deployment takes days, not months.
Any provider that accepts HTTP API calls—OpenAI, Anthropic, Google Gemini, Azure OpenAI, and any open-source model with an API interface.
Sub-10ms at standard attestation level. Configurable based on your throughput requirements and evidence depth needs.
Complete governance stack
Assess
3–4 week governance assessment benchmarked against ISO 42001 and NIST AI RMF.
Book an assessment
Comply
The compliance platform purpose-built for AI systems. Multi-framework mapping, evidence generation, OSCAL export.
Request a demo