For AI in Financial Services
When the OCC, CFPB, NYDFS, or a bank counterparty asks what your AI workflow actually does at runtime, an MRM binder and a SOC 2 letter aren’t the answer. GLACIS runs inside your infrastructure, instruments one production AI workflow with runtime controls, and produces signed evidence receipts MRM, internal audit, and examiners can verify — with zero sensitive-data egress.
SR 11-7 expects effective challenge, independent validation, and ongoing monitoring. Your model risk program documents all of it — but the guidance was written before generative AI, before non-deterministic outputs, before agentic workflows that call tools and rewrite data.
Examiners and counterparties have shifted from “show me the policy” to “show me the runtime evidence.” OCC, CFPB, and NYDFS inquiries increasingly ask which controls fired on a specific decision, not which controls exist on paper.
The Sprint builds that runtime layer on one production AI workflow in three weeks — runtime controls, signed receipts, and an evidence pack that maps to your existing MRM framework.
GLACIS runs inside your infrastructure on one production AI workflow — instrumenting runtime controls, signing each control outcome, and packaging an evidence pack MRM and examiners can verify without seeing customer data.
Bias checks, output validation, human-review gates, content filters — the controls your MRM team already specified now execute and emit signed evidence at runtime.
Customer data, model inputs, and proprietary algorithms stay inside your infrastructure. Hashes and signed metadata are the only things that cross the boundary.
Timestamped, third-party witnessed, cryptographically signed receipts assembled into an evidence pack mapped to SR 11-7, fair-lending, and counterparty review language.
Prove your validation tests actually ran against production models. Not recreated for audit, not simulated — the real thing, timestamped and witnessed.
Evidence that your challenge function is operational.
Every control check, every threshold evaluation, every human review decision — captured as verifiable evidence. Continuous, not periodic.
Monitoring you can demonstrate, not just describe.
Prove your bias controls executed on every decision. Cryptographic evidence that fairness checks ran — without exposing individual applications.
Verifiable fair lending, not just attestation.
When you use third-party AI, prove your oversight controls executed. Evidence that you validated vendor outputs, not just a policy saying you would.
Third-party risk management with teeth.
Customer data, model inputs, and proprietary algorithms stay inside your infrastructure. Only signed hashes and verification metadata leave — so MRM and examiners can verify which controls fired without ever seeing what they fired on.
Architecture-level data protection. Not policy — cryptography.
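To make the receipt mechanism concrete, here is a minimal sketch of the pattern described above: hash the sensitive payload locally, sign only the metadata, and let a verifier check the signature without ever seeing the underlying data. Field names are hypothetical, and a symmetric HMAC key stands in for GLACIS's actual signing and third-party witnessing scheme (the real system's key management and receipt format are not specified here).

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in key for illustration only; a production system would use
# asymmetric keys held inside the institution's infrastructure.
SIGNING_KEY = b"demo-signing-key"

def make_receipt(control_name: str, outcome: str, payload: bytes) -> dict:
    """Build a signed evidence receipt for one control execution.

    Only the SHA-256 digest of the payload appears in the receipt --
    customer data never crosses the boundary.
    """
    receipt = {
        "control": control_name,
        "outcome": outcome,                                   # e.g. "pass" / "fail" / "escalated"
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    body = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt: dict) -> bool:
    """A reviewer can confirm the control fired and the record is
    untampered, without access to the original payload."""
    unsigned = {k: v for k, v in receipt.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["signature"], expected)
```

The point of the sketch: any edit to the outcome or timestamp invalidates the signature, and the receipt carries a digest rather than the data itself, which is the architectural property the copy above describes.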
The OCC, Fed, and FDIC are paying attention. The EU AI Act treats credit scoring as high-risk. State regulators are adding AI-specific requirements to existing frameworks.
The pattern is consistent: regulators want evidence that AI governance is operational, not just documented. They want to see that controls executed, not just that they were planned.
Institutions that can demonstrate continuous, verifiable AI governance will face less friction. Those that can’t will face more scrutiny, more MRAs, and more constraints on AI adoption.
The Agent Runtime Security & Evidence Sprint is a fixed-scope $48k engagement — three weeks, one production AI workflow, runtime controls instrumented inside your stack, and an evidence pack mapped to SR 11-7 and counterparty review. No rip-and-replace.