For AI in Financial Services

Runtime evidence for AI in regulated financial workflows.

When the OCC, CFPB, NYDFS, or a bank counterparty asks what your AI workflow actually does at runtime, an MRM binder and a SOC 2 letter aren't the answer. GLACIS runs inside your infrastructure, instruments one production AI workflow with runtime controls, and produces signed evidence receipts that MRM, internal audit, and examiners can verify — with zero sensitive-data egress.

The model-risk gap on generative AI

SR 11-7 expects effective challenge, independent validation, and ongoing monitoring. Your model risk program documents all of it — but the guidance was written before generative AI, before non-deterministic outputs, before agentic workflows that call tools and rewrite data.

Examiners and counterparties have shifted from “show me the policy” to “show me the runtime evidence.” OCC, CFPB, and NYDFS inquiries increasingly ask which controls fired on a specific decision, not which controls exist on paper.

The Sprint scopes that runtime layer on one production AI workflow in three weeks — runtime controls, signed receipts, and an evidence pack that maps to your existing MRM framework.

How the model-risk evidence sprint runs

GLACIS runs inside your infrastructure on one production AI workflow — instrumenting runtime controls, signing each control outcome, and packaging an evidence pack that MRM and examiners can verify without seeing customer data.

Runtime controls in your stack

Bias checks, output validation, human-review gates, content filters — the controls your MRM team already specified now execute and emit signed evidence at runtime.

Zero sensitive-data egress

Customer data, model inputs, and proprietary algorithms stay inside your infrastructure. Hashes and signed metadata are the only things that cross the boundary.

Evidence pack for MRM & examiners

Timestamped, third-party witnessed, cryptographically signed receipts assembled into an evidence pack mapped to SR 11-7, fair-lending, and counterparty review language.
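To make the idea concrete, here is a minimal sketch of what one such receipt could contain. All field names are illustrative, not the actual GLACIS format, and an HMAC stands in for the asymmetric signature and third-party timestamp witness a real deployment would use:

```python
import hashlib
import hmac
import json
import time

# Illustrative only: a production system would sign with an asymmetric
# key pair (e.g. Ed25519) and anchor the timestamp with an external witness.
SIGNING_KEY = b"demo-key"  # hypothetical per-deployment key


def make_receipt(control_id: str, outcome: str, payload: bytes) -> dict:
    """Commit to the payload by hash; the payload itself never leaves."""
    body = {
        "control_id": control_id,   # e.g. "fair-lending-bias-check"
        "outcome": outcome,         # e.g. "pass", "fail", "escalated"
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "timestamp": int(time.time()),
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return body


receipt = make_receipt("bias-check", "pass", b"<model inputs stay local>")
```

The receipt carries only a digest of the model inputs, so a reviewer can confirm that the bias check ran on a specific payload without ever seeing that payload.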

Where evidence matters most

Model validation

Prove your validation tests actually ran against production models. Not recreated for audit, not simulated — the real thing, timestamped and witnessed.

Evidence that your challenge function is operational.

Ongoing monitoring

Every control check, every threshold evaluation, every human review decision — captured as verifiable evidence. Continuous, not periodic.

Monitoring you can demonstrate, not just describe.

Fair lending compliance

Prove your bias controls executed on every decision. Cryptographic evidence that fairness checks ran — without exposing individual applications.

Verifiable fair lending, not just attestation.

Vendor AI oversight

When you use third-party AI, prove your oversight controls executed. Evidence that you validated vendor outputs, not just that you have a policy saying you would.

Third-party risk management with teeth.

Zero sensitive-data egress.

Customer data, model inputs, and proprietary algorithms stay inside your infrastructure. Only signed hashes and verification metadata leave — so MRM and examiners can verify which controls fired without ever seeing what they fired on.

Customer data: Never transmitted
Model inputs/outputs: Hashed locally only
Proprietary algorithms: Never transmitted
Cryptographic commitments: Yes (metadata only)
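One way to picture that boundary is a sign-then-verify round trip in which only a digest and a signature ever cross it. The sketch below is hypothetical: HMAC with a shared demo key stands in for the public-key signatures a real deployment would use.

```python
import hashlib
import hmac

KEY = b"demo-key"  # illustrative shared key; production would use key pairs


def commit(record: bytes) -> tuple[str, str]:
    """Runs inside the institution: hash the record locally, sign the hash."""
    digest = hashlib.sha256(record).hexdigest()
    sig = hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig  # only these two strings cross the boundary


def verify(digest: str, sig: str) -> bool:
    """Runs on the examiner's side: checks the commitment without the record."""
    expected = hmac.new(KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


digest, sig = commit(b"customer application record ... stays inside")
assert verify(digest, sig)
```

The verifier learns that a specific, signed commitment exists and is intact; it learns nothing about the customer record behind the digest.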

Architecture-level data protection. Not policy — cryptography.

The regulatory direction is clear

The OCC, Fed, and FDIC are paying attention. The EU AI Act treats credit scoring as high-risk. State regulators are adding AI-specific requirements to existing frameworks.

The pattern is consistent: regulators want evidence that AI governance is operational, not just documented. They want to see that controls executed, not just that they were planned.

Institutions that can demonstrate continuous, verifiable AI governance will face less friction. Those that can’t will face more scrutiny, more MRAs, and more constraints on AI adoption.

Scope a model-risk evidence sprint
on one production AI workflow.

The Agent Runtime Security & Evidence Sprint is a fixed-scope $48k engagement — three weeks, one production AI workflow, runtime controls instrumented inside your stack, and an evidence pack mapped to SR 11-7 and counterparty review. No rip-and-replace.