Runtime Assurance Platform · Evidence receipt layer · Live now
The runtime evidence layer for AI systems that act
Continuous attestation is a steady stream of signed evidence receipts emitted by local runtime controls inside your infrastructure. Receipts are assembled into evidence packs that your board, your customers, and your regulators can verify, with zero sensitive-data egress.
- glc_a3f1c98e02 sha256:7bd1e94c8a…3f02 allowed
- glc_b27e4d11ab sha256:c019fb5d72…a8e1 allowed
- glc_4f0892ce6d sha256:9e5a16d8c4…b7d2 flagged
- glc_dc73a5602f sha256:b48f02ad19…5c0f allowed
- glc_91eb22a7d3 sha256:5d2c8e91b6…e304 withheld
Zero sensitive-data egress*
Sidecar mode
Inline enforcement
Shadow to enforce
Tamper-proof
Crypto signatures
<50ms
Total overhead (p95)
Signed receipts, zero sensitive-data egress
GLACIS runs inside your infrastructure. Local runtime controls emit signed evidence receipts — verification metadata, hashes, and signatures, not payloads — that are independently verifiable without exposing the underlying content. Sensitive data never leaves your environment.
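As a sketch of the idea, a receipt can carry a content hash and a signature over that metadata, never the payload itself. The field names, fixed timestamp, and shared-key HMAC below are illustrative stand-ins, not the GLACIS schema; an independently verifiable deployment would use asymmetric signatures:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative; real receipts would be signed with an asymmetric key

def make_receipt(receipt_id: str, payload: bytes, decision: str) -> dict:
    """Emit verification metadata only; the payload itself never leaves the environment."""
    meta = {
        "id": receipt_id,
        "sha256": hashlib.sha256(payload).hexdigest(),  # hash of the content, not the content
        "decision": decision,
        "ts": 1700000000,  # fixed timestamp so the demo is reproducible
    }
    canonical = json.dumps(meta, sort_keys=True).encode()
    meta["sig"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return meta

receipt = make_receipt("glc_a3f1c98e02", b"prompt text stays local", "allowed")
print(sorted(receipt))  # metadata and signature fields only, no payload field
```

Anyone holding the receipt can later check that a disclosed artifact hashes to `sha256`, without the producer ever shipping the artifact itself.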
S3 object lock / WORM storage
Proves your logs weren’t modified after storage. Doesn’t prove the de-identification actually executed before data hit the model.
Continuous attestation
Cryptographic proof generated at the moment of execution. GLACIS records control execution locally before data reaches the model — not just that logs exist.
How it works
Every time your AI acts, local runtime controls inside your infrastructure emit a signed evidence receipt for the decision that ran.
Request arrives
An AI request enters the GLACIS arbiter. The arbiter sits inline in your request path — every interaction passes through it before reaching your model or returning to the user.
Controls execute
Safety controls run: content filtering, bias checks, PII detection, consent verification. Each control’s outcome is recorded as it executes.
Policy enforced
The arbiter evaluates your active governance posture and renders a decision: PERMIT, DENY, escalate, or flag. The decision is applied inline — non-compliant requests are blocked before they reach the model.
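The decision step can be sketched as a small policy function over the control outcomes from the previous step. The enum values mirror the decisions named above; the rules themselves are invented for illustration and are not GLACIS policy:

```python
from enum import Enum

class Decision(Enum):
    PERMIT = "permit"
    DENY = "deny"
    ESCALATE = "escalate"
    FLAG = "flag"

def arbitrate(control_outcomes: dict) -> Decision:
    """Illustrative posture: hard failures are denied inline; soft findings are flagged."""
    if control_outcomes.get("pii_detected") and not control_outcomes.get("consent"):
        return Decision.DENY      # blocked before the request reaches the model
    if control_outcomes.get("bias_flagged"):
        return Decision.FLAG      # allowed through, but recorded for review
    return Decision.PERMIT

print(arbitrate({"pii_detected": True, "consent": False}).value)  # deny
```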
Evidence sealed
A cryptographic attestation is generated — signed, timestamped, and chained. Any attempt to modify, delete, or reorder records is cryptographically detectable. The evidence integrity is mathematically provable.
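Hash chaining is what makes modification, deletion, or reordering detectable: each record commits to its predecessor, so editing any earlier record breaks every later link. A minimal sketch with illustrative field names, not the GLACIS record format:

```python
import hashlib
import json

def seal(prev_hash: str, record: dict) -> dict:
    """Chain a record to its predecessor by hashing (prev_hash + body)."""
    body = json.dumps(record, sort_keys=True)
    entry = {"prev": prev_hash, "body": body}
    entry["hash"] = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return entry

def chain_ok(entries: list) -> bool:
    """Walk the chain and recompute every link."""
    prev = "genesis"
    for e in entries:
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + e["body"]).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log, prev = [], "genesis"
for decision in ["allowed", "allowed", "flagged"]:
    entry = seal(prev, {"decision": decision})
    log.append(entry)
    prev = entry["hash"]

print(chain_ok(log))                                    # True
log[1]["body"] = json.dumps({"decision": "permitted"})  # tamper with a middle record
print(chain_ok(log))                                    # False: the chain detects it
```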
Auditors verify
Auditors, customers, or regulators can independently verify any attestation. No trust required in GLACIS or your organization. The math proves it.
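At its simplest, independent verification is recomputation: given a disclosed artifact, an auditor hashes it locally and compares against the receipt, with no trust in the producer's logs. A sketch with an invented receipt:

```python
import hashlib

def auditor_verify(receipt: dict, disclosed_artifact: bytes) -> bool:
    """The auditor recomputes the hash; the producer's word is never taken for it."""
    return hashlib.sha256(disclosed_artifact).hexdigest() == receipt["sha256"]

receipt = {"id": "glc_b27e4d11ab", "sha256": hashlib.sha256(b"model output").hexdigest()}
print(auditor_verify(receipt, b"model output"))   # True
print(auditor_verify(receipt, b"model output!"))  # False: any change breaks the match
```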
Deployment modes
Start observing. Transition to enforcement when you’re ready. Every mode change is itself attested.
Shadow
Observe all traffic, evaluate against policy, generate receipts. Never block. Perfect for baselining your governance posture before enforcement.
Warn
Evaluate and alert on policy violations. Generate receipts with violation flags. Don’t block requests — let your team review before enabling enforcement.
Enforce
Block policy violations with denial receipts. Permit compliant requests. Every decision — permit and deny — is independently attested.
Strict
Block violations and circuit-break when violation thresholds are exceeded. For environments where policy breaches require immediate pipeline shutdown.
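The four modes reduce to one enforcement switch. The mode names mirror the list above; the threshold value and circuit-breaker behavior below are illustrative assumptions, not GLACIS configuration:

```python
from enum import Enum

class Mode(Enum):
    SHADOW = "shadow"    # observe and receipt, never block
    WARN = "warn"        # receipt with violation flags, never block
    ENFORCE = "enforce"  # block violations
    STRICT = "strict"    # block, and circuit-break past a threshold

def should_block(mode: Mode, violation: bool, recent_violations: int, threshold: int = 5) -> bool:
    """Illustrative switch: compliant traffic always passes; modes differ only on violations."""
    if not violation:
        return False
    if mode in (Mode.SHADOW, Mode.WARN):
        return False  # observe-only modes generate receipts but never block
    if mode is Mode.STRICT and recent_violations >= threshold:
        raise RuntimeError("circuit breaker tripped: pipeline halted")
    return True

print(should_block(Mode.SHADOW, violation=True, recent_violations=0))   # False
print(should_block(Mode.ENFORCE, violation=True, recent_violations=0))  # True
```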
Failure modes
You declare how your system behaves when the arbiter is unavailable. It’s your choice, not ours.
Fail-closed
Default
Requests are denied if the arbiter is unavailable. Safety takes priority over availability. No request proceeds without governance evaluation.
Fail-open
Configurable
Requests proceed with a flag if the arbiter is unavailable. Availability takes priority. The unevaluated request is logged and flagged for retroactive review.
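Both behaviors can be sketched as a wrapper around the arbiter call. The `ArbiterUnavailable` exception, the flag value, and the `fail_open` parameter are invented for illustration:

```python
class ArbiterUnavailable(Exception):
    """Raised when the arbiter cannot be reached for evaluation."""

def evaluate(request, arbiter_call, fail_open: bool = False) -> dict:
    """Apply the declared failure behavior when the arbiter is down."""
    try:
        return arbiter_call(request)
    except ArbiterUnavailable:
        if fail_open:
            # availability wins: proceed, but mark for retroactive review
            return {"decision": "permit", "flag": "unevaluated"}
        # safety wins: no request proceeds without governance evaluation
        return {"decision": "deny", "reason": "arbiter unavailable"}

def down(_request):
    raise ArbiterUnavailable()

print(evaluate("req", down)["decision"])                  # deny  (fail-closed default)
print(evaluate("req", down, fail_open=True)["decision"])  # permit, flagged for review
```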
Why this matters
Traditional approach
- Annual audits sample a fraction of interactions
- Policies say what should happen
- Logs can be altered after the fact
- Months between control check and evidence
Continuous attestation
- Every AI interaction generates proof
- Attestations prove what actually happened
- Cryptographic signatures prevent tampering
- Evidence generated at time of execution
What you can prove
Safety controls
Content filtering, harmful output detection, and safety controls executed on every inference.
Bias testing
Fairness checks ran on model outputs with verifiable test parameters and results.
Data privacy
PII detection, data masking, and access controls applied before data reaches the model.
Audit trails
Complete, immutable record of who accessed what, when, and what the AI did with it.
Model versioning
Proof of exactly which model version processed each request. No confusion about what ran.
Response times
Latency and performance metrics with cryptographic timestamps. SLA compliance evidence.
Mapped to frameworks you need
Attestations automatically map to the compliance frameworks your customers and regulators require.
Stand up runtime evidence on one production workflow
The Agent Runtime Security & Evidence Sprint — $48k, ten business days, one named workflow — deploys local runtime controls and ships a signed evidence pack you can hand to your board, your customers, or your auditors.
Related resources
Evidence pack sprint
Board-ready compliance evidence in days, not months.
The proof gap
Why documentation isn’t enough for AI compliance.
NIST AI RMF guide
Complete guide to implementing the AI Risk Management Framework.
What is AI attestation?
Cryptographic proof that your AI controls executed as designed.
AI audit guide
Preparing for third-party AI compliance audits.