Runtime Assurance Platform · Evidence receipt layer · Live now

The runtime evidence layer for AI systems that act

Continuous attestation is the steady stream of signed evidence receipts emitted by local runtime controls inside your infrastructure. Receipts are assembled into evidence packs your board, your customers, and your regulators can verify — with zero sensitive-data egress.

OVERT receipt stream
policy:clinical.note.v2.4 · witness ed25519
  1. glc_a3f1c98e02 sha256:7bd1e94c8a…3f02 allowed
  2. glc_b27e4d11ab sha256:c019fb5d72…a8e1 allowed
  3. glc_4f0892ce6d sha256:9e5a16d8c4…b7d2 flagged
  4. glc_dc73a5602f sha256:b48f02ad19…5c0f allowed
  5. glc_91eb22a7d3 sha256:5d2c8e91b6…e304 withheld
chain depth 1,284 · verifier overt.is
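A receipt like the ones in the stream above carries only metadata: an identifier, a hash of the payload, the decision, and a signature — never the payload itself. A minimal sketch in Python, using HMAC-SHA256 from the standard library as a stand-in for the ed25519 witness signatures shown above (the field names and key are illustrative, not the GLACIS schema):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-witness-key"  # stand-in for an ed25519 private key


def emit_receipt(receipt_id: str, payload: bytes, decision: str) -> dict:
    """Build a signed evidence receipt: hashes and metadata, never the payload."""
    digest = hashlib.sha256(payload).hexdigest()
    body = {"id": receipt_id, "sha256": digest, "decision": decision}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return body


receipt = emit_receipt("glc_a3f1c98e02", b"patient note ...", "allowed")
print(receipt["decision"])  # the sensitive payload never leaves; only its hash does
```

A verifier holding the corresponding public key (or, in this sketch, the shared key) can recompute the hash and check the signature without ever seeing the payload.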
Define your posture · declarative policies
We enforce & witness · every decision attested
You get evidence · third-party verified

Zero sensitive-data egress* · Sidecar mode
Inline enforcement · Shadow to enforce
Tamper-proof · Crypto signatures
<50ms · Total overhead (p95)

Signed receipts, zero sensitive-data egress

GLACIS runs inside your infrastructure. Local runtime controls emit signed evidence receipts — verification metadata, hashes, and signatures, not payloads — that are independently verifiable without exposing the underlying content. Sensitive data never leaves your environment.


S3 object lock / WORM storage

Proves your logs weren’t modified after storage. Doesn’t prove the de-identification actually executed before data hit the model.

Continuous attestation

Cryptographic proof generated at the moment of execution. GLACIS records control execution locally before data reaches the model — not just that logs exist.

How it works

Every time your AI acts, local runtime controls inside your infrastructure emit a signed evidence receipt for the decision that ran.

1. Request arrives

An AI request enters the GLACIS arbiter. The arbiter sits inline in your request path — every interaction passes through it before reaching your model or returning to the user.

2. Controls execute

Safety controls run: content filtering, bias checks, PII detection, consent verification. Each control’s outcome is recorded as it executes.

3. Policy enforced

The arbiter evaluates your active governance posture and renders a decision: permit, deny, escalate, or flag. The decision is applied inline: non-compliant requests are blocked before they reach the model.
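The decision step can be sketched as a fold over per-control outcomes, assuming a simple precedence in which deny beats escalate beats flag beats permit. The control names and the precedence ordering here are illustrative, not the GLACIS policy engine:

```python
# Worst-outcome-wins precedence (illustrative ordering, not the GLACIS engine)
PRECEDENCE = {"permit": 0, "flag": 1, "escalate": 2, "deny": 3}


def arbitrate(control_outcomes: dict[str, str]) -> str:
    """Render one inline decision from per-control outcomes: worst outcome wins."""
    decision = "permit"
    for outcome in control_outcomes.values():
        if PRECEDENCE[outcome] > PRECEDENCE[decision]:
            decision = outcome
    return decision


print(arbitrate({"pii": "permit", "bias": "flag", "content": "permit"}))  # flag
print(arbitrate({"pii": "deny", "bias": "flag"}))                         # deny
```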

4. Evidence sealed

A cryptographic attestation is generated — signed, timestamped, and chained. Any attempt to modify, delete, or reorder records is cryptographically detectable. The evidence integrity is mathematically provable.
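Chaining works by having each record carry the hash of its predecessor, so editing, deleting, or reordering any record breaks every later link. A minimal sketch of the idea (SHA-256 over content plus previous hash; not the GLACIS record format):

```python
import hashlib


def seal(records: list[str]) -> list[dict]:
    """Chain records: each entry hashes its content plus the previous entry's hash."""
    chain, prev = [], "0" * 64  # genesis link is all zeros
    for content in records:
        h = hashlib.sha256((prev + content).encode()).hexdigest()
        chain.append({"content": content, "prev": prev, "hash": h})
        prev = h
    return chain


def verify(chain: list[dict]) -> bool:
    """Re-walk the chain; any modified, dropped, or reordered record breaks a link."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + entry["content"]).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


chain = seal(["glc_a allowed", "glc_b allowed", "glc_c flagged"])
assert verify(chain)
chain[1]["content"] = "glc_b denied"  # tamper with the middle record
assert not verify(chain)
```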

5. Auditors verify

Auditors, customers, or regulators can independently verify any attestation. No trust required in GLACIS or your organization. The math proves it.

Deployment modes

Start observing. Transition to enforcement when you’re ready. Every mode change is itself attested.

Shadow

Observe all traffic, evaluate against policy, generate receipts. Never block. Perfect for baselining your governance posture before enforcement.

Warn

Evaluate and alert on policy violations. Generate receipts with violation flags. Don’t block requests — let your team review before enabling enforcement.

Enforce

Block policy violations with denial receipts. Permit compliant requests. Every decision — permit and deny — is independently attested.

Strict

Block violations and circuit-break when violation thresholds are exceeded. For environments where policy breaches require immediate pipeline shutdown.
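The four modes differ only in what happens on a violation; receipts are emitted in every mode. A sketch of the dispatch, under the assumption (mine, not the product's documented behavior) that shadow never alerts and only strict trips a circuit breaker:

```python
from enum import Enum


class Mode(Enum):
    SHADOW = "shadow"
    WARN = "warn"
    ENFORCE = "enforce"
    STRICT = "strict"


def apply_mode(mode: Mode, violation: bool, breaker_tripped: bool = False) -> dict:
    """What each mode does with a policy violation: receipts always, blocking by mode."""
    return {
        "receipt": True,                                   # every mode emits receipts
        "alert": violation and mode is not Mode.SHADOW,    # warn and above alert
        "blocked": violation and mode in (Mode.ENFORCE, Mode.STRICT),
        "circuit_break": mode is Mode.STRICT and breaker_tripped,
    }


print(apply_mode(Mode.SHADOW, violation=True))   # observed and receipted, never blocked
print(apply_mode(Mode.ENFORCE, violation=True))  # blocked, with a denial receipt
```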

Failure modes

You declare how your system behaves when the arbiter is unavailable. It’s your choice, not ours.

Fail-closed

Default

Requests are denied if the arbiter is unavailable. Safety takes priority over availability. No request proceeds without governance evaluation.

Fail-open

Configurable

Requests proceed with a flag if the arbiter is unavailable. Availability takes priority. The unevaluated request is logged and flagged for retroactive review.
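The two postures can be sketched as one branch at the request boundary; the function and field names are illustrative, not the GLACIS API:

```python
def govern(request: str, arbiter_available: bool, fail_open: bool = False) -> dict:
    """Declared failure posture: deny when the arbiter is down (fail-closed, default),
    or proceed flagged for retroactive review (fail-open)."""
    if arbiter_available:
        return {"proceed": True, "flagged": False}   # normal path: evaluated inline
    if fail_open:
        return {"proceed": True, "flagged": True}    # availability first: log and flag
    return {"proceed": False, "flagged": False}      # safety first: no evaluation, no request


print(govern("req-1", arbiter_available=False))                  # fail-closed: denied
print(govern("req-1", arbiter_available=False, fail_open=True))  # {'proceed': True, 'flagged': True}
```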

Why this matters

Traditional approach

  • Annual audits sample a fraction of interactions
  • Policies say what should happen
  • Logs can be altered after the fact
  • Months between control check and evidence

Continuous attestation

  • Every AI interaction generates proof
  • Attestations prove what actually happened
  • Cryptographic signatures prevent tampering
  • Evidence generated at time of execution

What you can prove

Safety controls

Content filtering, harmful output detection, and safety controls executed on every inference.

Bias testing

Fairness checks ran on model outputs with verifiable test parameters and results.

Data privacy

PII detection, data masking, and access controls applied before data reaches the model.

Audit trails

Complete, immutable record of who accessed what, when, and what the AI did with it.

Model versioning

Proof of exactly which model version processed each request. No confusion about what ran.

Response times

Latency and performance metrics with cryptographic timestamps. SLA compliance evidence.

Mapped to frameworks you need

Attestations automatically map to the compliance frameworks your customers and regulators require.

NIST AI RMF · 72 subcategories
ISO 42001 · AI Management System
EU AI Act · High-risk requirements
HIPAA · Healthcare AI controls

Stand up runtime evidence on one production workflow

The Agent Runtime Security & Evidence Sprint — $48k, ten business days, one named workflow — deploys local runtime controls and ships a signed evidence pack you can hand to your board, your customers, or your auditors.

Get started

Start with one high‑risk AI workflow.

Book a focused Agent Runtime Security & Evidence Sprint, then deploy runtime assurance where the risk is real.

From assessment to platform deployment. See pricing →