
EU AI Act for the CISO.

A walkthrough of Articles 12, 15 and 72 framed for the security function — record-keeping architecture, adversarial-input resilience, post-market monitoring, and the GPAI safety/security chapter that decides which upstream model providers can be used at all.

Book the Agent Runtime Security Sprint · Series hub →
CISO · Head of security engineering · SecOps · AI red team
Article 12 · Automatic record-keeping — the line GLACIS attests
Article 15 · Accuracy, robustness, cybersecurity — adversarial-input resilience
Article 72 · Post-market monitoring — drift, anomalies, incident triggers
GPAI · Safety and Security chapter of the Code of Practice — vendor due diligence
What changed in April 2026

The Digital Omnibus on AI is in trilogue with proposed delays to 2 December 2027 (stand-alone) and 2 August 2028 (embedded). For the security function this is preparation room, not relief — the Article 12 logging architecture, Article 15 robustness testing, and Article 73 serious-incident workflow still need a working baseline by 2 August 2026 if the Omnibus stalls.

The GPAI Code of Practice is now signed by roughly two dozen providers. Anthropic, Google, IBM, Microsoft, Mistral, OpenAI, Cohere, Aleph Alpha and others signed in full; Meta did not sign; xAI signed only the Safety and Security chapter. For procurement and vendor risk, the chapter-level signature record is the practical input — a model provider that did not sign Safety and Security is a different risk profile than one that did.

By Joe Braidwood·12 min read·Updated April 24, 2026

Executive summary

The Regulation places the security function at the centre of compliance. Article 12 contemplates automatic record-keeping. Article 15 mandates adversarial-input resilience and recovery measures. Article 73 imposes a 15-day serious-incident reporting clock. Together with Article 72 post-market monitoring, the result is an SIEM-shaped problem: continuous evidence that controls executed, surfaced on demand to a notified body or competent authority.

Article 74 gives market-surveillance authorities the right to request training data, source code and runtime logs. The compliance posture stops being “we have a policy” and becomes “we can produce the evidence on demand”. For high-risk systems, non-compliance carries up to €15M or 3% of global turnover.

This guide maps the CISO obligations to specific Articles, gives an SIEM/EDR-style implementation pattern, covers the GPAI Safety and Security chapter as a vendor-risk input, and shows how GLACIS produces the Article 12 evidence trail through runtime instrumentation with zero sensitive-data egress.


Why the Act matters for CISOs

The EU AI Act isn’t just another compliance checkbox for your legal team. It fundamentally changes how organizations must approach AI security and places CISOs at the center of compliance.

Unlike GDPR, which focuses on data protection policies and procedures, the AI Act demands demonstrable technical controls. Article 15 explicitly requires "appropriate levels of accuracy, robustness and cybersecurity" for high-risk AI systems, including resilience against AI-specific attacks like data poisoning and model evasion. This is security engineering, not policy writing.

The shift from documentation to evidence

Traditional compliance follows a familiar pattern: write policies, conduct annual audits, produce documentation. The AI Act breaks this model. Article 74 gives market surveillance authorities power to demand:

  • Access to training, validation, and testing datasets—not summaries, actual data
  • Source code and algorithms—on reasoned request, protected as confidential but accessible to authorities
  • Logs demonstrating traceability—per Article 12, throughout the system lifecycle
  • Evidence that controls execute correctly, not just that they’re documented

This means CISOs need infrastructure that produces evidence on demand—tamper-evident logs, cryptographic attestations, and audit trails that prove controls actually work.
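
To make "tamper-evident" concrete, here is a minimal hash-chained log sketch in Python: each record commits to the digest of the one before it, so any retroactive edit breaks verification. This illustrates the general technique only; the field names are invented for the example, and it is neither GLACIS's implementation nor a format the Act prescribes.

```python
import hashlib
import json
import time

def append_record(chain: list[dict], event: dict) -> dict:
    """Append an event; each record commits to its predecessor's digest."""
    prev_digest = chain[-1]["digest"] if chain else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every digest; False means some record was altered."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log: list[dict] = []
append_record(log, {"type": "inference", "model": "risk-scorer-v3", "decision": "deny"})
append_record(log, {"type": "override", "actor": "analyst-7", "reason": "manual review"})
assert verify_chain(log)
```

In production the chain head would also be periodically signed and anchored outside the system that writes the logs, so the log operator cannot silently rewrite history either.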

Key CISO responsibilities under the Act

The regulation assigns several technical domains squarely within CISO purview. Understanding these responsibilities is essential for resource planning and stakeholder communication.

1. Technical control implementation (Articles 9, 14, 15)

Article 9: Risk Management System

CISOs must implement continuous, iterative risk management including:

  • Identification and analysis of known and foreseeable security risks
  • Estimation and evaluation of risks that may emerge during deployment
  • Evaluation of risks based on post-market monitoring data
  • Adoption and documentation of suitable risk mitigation measures

Article 14: Human Oversight

Security implications of human oversight requirements:

  • Access controls ensuring authorized personnel can override AI decisions
  • Audit trails of human interventions and override decisions (an example record is sketched after this list)
  • Authentication mechanisms for human oversight functions
  • Secure channels for escalation and intervention
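
A sketch of what such an audit record might contain, assuming override events are written to the same tamper-evident sink as the model's own decision logs. The field names are our own illustration; Article 14 does not prescribe a schema.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class OversightEvent:
    """One human intervention on an AI decision. Captures who acted, on
    which decision, and why, so the intervention is attributable later."""
    system_id: str       # AI system the intervention applies to
    decision_id: str     # identifier of the model output acted on
    actor_id: str        # authenticated identity of the human overseer
    action: str          # e.g. "override", "halt", "escalate"
    justification: str   # reason recorded at the moment of intervention
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

event = OversightEvent(
    system_id="credit-scoring-v2",
    decision_id="dec-48213",
    actor_id="j.smith@example.com",
    action="override",
    justification="Applicant documents contradict model feature values.",
)
print(json.dumps(asdict(event)))  # ship to the tamper-evident log sink
```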

Article 15: Cybersecurity Requirements

AI-specific security controls beyond traditional IT security:

  • Data poisoning protection: Integrity verification for training data pipelines (a manifest sketch follows this list)
  • Model evasion defense: Robustness testing against adversarial inputs
  • Model extraction prevention: API rate limiting and query monitoring
  • Model weight protection: Encryption and access controls for model files
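
For the data-poisoning bullet above, one baseline control is a hash manifest over the approved training set, re-verified before every training run. A sketch follows (paths are illustrative); note that it detects tampering after approval, not data that was already poisoned when approved, which needs statistical and provenance checks on top.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict[str, str]:
    """SHA-256 digest of every file in the approved training set."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }

def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Files missing or modified since approval; run before each training job."""
    bad = []
    for file_path, expected in manifest.items():
        p = Path(file_path)
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != expected:
            bad.append(file_path)
    return bad

# At approval time (directory name is illustrative):
#   manifest = build_manifest("datasets/credit-risk/v4")
#   Path("manifest.sha256.json").write_text(json.dumps(manifest, indent=2))
# Before training:
#   assert not verify_manifest(json.loads(Path("manifest.sha256.json").read_text()))
```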

2. Logging and audit-trail infrastructure (Article 12)

Article 12 establishes requirements that directly impact CISO infrastructure decisions. This is perhaps the most operationally demanding requirement for security teams.

Article 12 Requirements: What CISOs Must Implement

  • Automatic logging capabilities ensuring traceability throughout the AI system lifecycle
  • Logging level appropriate to intended purpose—more critical systems require more granular logging
  • Records including: the period of each use, the reference database against which input data was checked, the input data itself, and the natural persons involved in verifying results (these specifics are mandated for remote biometric systems; a record sketch follows this list)
  • Security measures: Logs must be protected against tampering and unauthorized access
  • Retention periods: Appropriate to the system’s purpose, and at least six months under Article 19 unless other Union or national law requires longer
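
Here is what a per-inference trace record might look like in practice. The Act prescribes specific items only for remote biometric identification; for other high-risk systems the shape below is one plausible interpretation, not a mandated schema, and every field name is our own.

```python
# One plausible per-inference trace record (illustrative field names).
trace_record = {
    "record_id": "tr-2026-08-02-000184",
    "system_id": "cv-screening-v5",             # which high-risk system
    "period_of_use": {                          # start/end of this use
        "start": "2026-08-02T09:14:03Z",
        "end": "2026-08-02T09:14:04Z",
    },
    "model_version": "5.2.1",                   # ties the decision to an artifact
    "input_ref": "s3://traces/inputs/000184",   # pointer, keeps raw PII out of the log
    "output": {"decision": "shortlist", "score": 0.81},
    "oversight": {"reviewed_by": None, "override": None},  # filled on intervention
    "retention_until": "2027-02-02",            # Article 19: at least six months
}
```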

3. Security testing requirements

The AI Act mandates testing that goes beyond traditional penetration testing. CISOs must establish programs covering:

  • Adversarial testing: Systematic evaluation of model behavior under attack scenarios
  • Robustness testing: Verification that systems perform correctly with noisy, incomplete, or edge-case inputs (a minimal harness is sketched after this list)
  • Red team exercises: For general-purpose AI models with systemic risk, formal adversarial evaluation per Article 55
  • Bias and fairness testing: Security implications of discriminatory outputs
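
A minimal robustness smoke test under stated assumptions: perturb inputs with bounded random noise and measure prediction stability. Random noise is not a true adversarial attack; a real program would add gradient-based attacks such as FGSM or PGD via an adversarial-ML library, but the harness shape is the same. The `predict` callable is an assumed wrapper around your model.

```python
import random

def perturb(features: list[float], epsilon: float) -> list[float]:
    """Bounded uniform noise: a crude stand-in for adversarial perturbation."""
    return [x + random.uniform(-epsilon, epsilon) for x in features]

def robustness_rate(predict, inputs: list[list[float]],
                    epsilon: float = 0.05, trials: int = 20) -> float:
    """Fraction of inputs whose label survives `trials` perturbations."""
    stable = 0
    for x in inputs:
        baseline = predict(x)
        if all(predict(perturb(x, epsilon)) == baseline for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

# Toy threshold "model"; swap in the real inference endpoint.
toy_predict = lambda x: int(sum(x) > 1.0)
samples = [[0.2, 0.9], [0.4, 0.4], [0.9, 0.8]]
print(f"stable under noise: {robustness_rate(toy_predict, samples):.0%}")
```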

4. Incident-response obligations

Article 73 creates new incident response requirements specific to AI systems. CISOs must integrate these with existing security incident management:

Serious Incident Reporting (15-Day Deadline)

A "serious incident" under Article 73 includes any incident leading to:

  • Death or serious damage to health
  • Serious and irreversible disruption of critical infrastructure
  • Infringement of obligations under Union law intended to protect fundamental rights
  • Serious damage to property or the environment

CISOs must establish classification criteria and reporting procedures before the August 2026 deadline.
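
One way to encode those classification criteria is a triage helper with the reporting clock computed from the date of awareness. The category labels below paraphrase Article 73; what counts as "serious" in each bucket is an organizational judgment to document in advance, and shorter clocks than 15 days apply in some cases (critical-infrastructure disruption, deaths), so treat the computed date as an outer bound.

```python
from datetime import date, timedelta

# Article 73 serious-incident categories, paraphrased.
SERIOUS_CATEGORIES = {
    "death_or_serious_harm_to_health",
    "irreversible_disruption_of_critical_infrastructure",
    "infringement_of_fundamental_rights_obligations",
    "serious_harm_to_property_or_environment",
}

def triage(incident_categories: set[str], awareness_date: date) -> dict:
    """Reportability plus the outer 15-day deadline from awareness.
    Some Article 73 cases carry shorter clocks; this is the outer bound."""
    reportable = bool(incident_categories & SERIOUS_CATEGORIES)
    deadline = (awareness_date + timedelta(days=15)).isoformat() if reportable else None
    return {"reportable": reportable, "report_no_later_than": deadline}

print(triage({"infringement_of_fundamental_rights_obligations"}, date(2026, 8, 10)))
```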

Questions CISOs should be asking

Before regulators ask these questions, CISOs should be asking them internally. Use this framework to assess your organization’s AI compliance readiness:

AI System Inventory

  • Do we have a complete inventory of all AI systems in production?
  • Which systems fall under high-risk classification per Annex III?
  • Are there shadow AI deployments outside IT governance?

Logging & Evidence

  • Can we produce logs demonstrating AI system behavior on demand?
  • Are our logs tamper-evident and protected against unauthorized access?
  • Do we log human oversight interventions and override decisions?

Security Controls

  • Have we tested our AI systems against adversarial attacks?
  • Do we have controls preventing data poisoning in training pipelines?
  • Are model weights and training data protected with appropriate access controls?

Incident Response

  • Do we have AI-specific incident classification criteria?
  • Can we meet the 15-day serious incident reporting deadline?
  • Have we established communication channels with national competent authorities?

Red flags indicating compliance gaps

These warning signs suggest your organization may not be ready for the August 2026 deadline:

No AI system inventory exists

If you don’t know what AI systems you have, you can’t classify them or implement controls. Shadow AI is a critical blind spot.

Logging is application-level only

Article 12 requires AI-specific logging including inputs, outputs, and decision traces—not just HTTP request logs.

Security testing excludes AI-specific threats

Penetration tests that don’t cover adversarial ML, data poisoning, or model extraction leave critical gaps.

AI governance is Legal’s responsibility alone

The AI Act requires technical controls that Legal can’t implement. Without CISO involvement, compliance is policy-only.

No budget allocated for AI compliance infrastructure

Article 12 logging infrastructure and Article 15 security controls require investment. Unfunded mandates don’t get implemented.

Personal-liability considerations for CISOs

While the EU AI Act primarily targets organizations with fines up to €35 million or 7% of global revenue, CISOs face personal liability exposure through several mechanisms:

Director and officer liability

In many EU member states, executives can be held personally liable for regulatory failures where they failed to implement adequate controls or ignored known risks. The AI Act’s explicit technical requirements (Articles 9, 12, 14, 15) create a clear standard of care.

Criminal liability

Some member states may implement criminal penalties for gross negligence in AI system oversight, particularly where serious incidents cause death or serious harm. CISOs should understand their jurisdiction’s implementation of the AI Act.

Professional negligence

Failure to implement reasonable security controls for AI systems could expose CISOs to professional negligence claims, particularly if they were aware of risks and failed to act.

CISO Liability Mitigation Strategies

  • Document recommendations: Create written records of security recommendations, especially when budget or timeline constraints prevent implementation
  • Ensure board reporting: Regular reports on AI risk posture and compliance status create evidence of executive awareness
  • Review D&O insurance: Confirm coverage includes AI-related regulatory penalties and doesn’t exclude "regulatory compliance failures"
  • Establish governance structure: Formal AI governance committee with documented decision authority

Working with other stakeholders

EU AI Act compliance requires coordination across multiple functions. CISOs must establish effective working relationships with:

General Counsel (GC)

  • AI system classification and risk determination
  • Contract requirements for AI vendors
  • Incident reporting protocols and legal privilege
  • Authority information request responses

Chief Compliance Officer (CCO)

  • Quality management system integration
  • Conformity assessment preparation
  • Post-market monitoring coordination
  • Regulatory relationship management

Chief Technology Officer (CTO)

  • Technical documentation requirements
  • Logging infrastructure implementation
  • AI system architecture and security design
  • Adversarial testing program development

Board of Directors

  • AI risk appetite and tolerance definitions
  • Compliance investment authorization
  • Quarterly compliance status reporting
  • Material risk escalation decisions

Board and executive reporting requirements

CISOs should establish regular AI compliance reporting to the board. Recommended metrics and reporting elements:

Quarterly Board Report: AI Compliance Status

1. AI System Inventory Status

Total systems, classification by risk level, new systems added, systems retired

2. High-Risk System Compliance Progress

Percentage meeting Article 9-15 requirements, gap closure timeline, conformity assessment status

3. Technical Control Metrics

Logging coverage percentage, security testing completion, human oversight audit results

4. Incident Summary

AI-related incidents, near-misses, serious incident reports filed (if any)

5. Regulatory Engagement

Authority requests received, inspections, guidance documents reviewed

6. Material Risks and Recommendations

Identified compliance gaps, resource requirements, timeline risks

Implementation checklist for CISOs

Use this checklist to track your organization’s progress toward EU AI Act compliance:

CISO Compliance Checklist

EU AI Act Technical Requirements

Phase 1: Discovery (Month 1-2)

  • Complete AI system inventory across all business units
  • Classify systems per Annex III high-risk categories
  • Identify shadow AI and unsanctioned deployments
  • Assess current logging capabilities against Article 12
  • Document existing security controls for AI systems

Phase 2: Infrastructure (Month 2-5)

  • Implement Article 12-compliant logging infrastructure
  • Deploy tamper-evident log storage and retention
  • Establish human oversight audit trail mechanisms
  • Implement AI-specific security controls per Article 15
  • Deploy training data integrity verification

Phase 3: Testing & Validation (Month 4-7)

  • Establish adversarial testing program
  • Conduct robustness testing for high-risk systems
  • Validate logging completeness and accuracy
  • Test incident response procedures
  • Document testing results per Annex IV

Phase 4: Governance (Month 5-8)

  • Establish AI incident classification criteria
  • Create serious incident reporting procedures
  • Implement board reporting framework
  • Establish authority communication channels
  • Document CISO recommendations and board responses

Timeline note: This 8-month timeline assumes dedicated resources and parallel workstreams. Organizations starting after April 2026 face significant deadline risk.

How GLACIS helps CISOs meet technical requirements

GLACIS provides the evidence infrastructure CISOs need to demonstrate EU AI Act compliance:

Article 12 Logging Infrastructure

Tamper-evident logging that captures AI system inputs, outputs, and decision traces. Cryptographic verification ensures logs haven’t been modified, meeting the "appropriate security measures" requirement.

Continuous Control Attestation

Automated verification that security controls execute correctly—not just that policies exist. Generate cryptographic evidence on demand for regulators, auditors, and enterprise customers.

Board-Ready Compliance Reporting

Pre-built dashboards and reports mapped to EU AI Act articles. Demonstrate compliance progress to executives and board with metrics that matter.

Evidence Pack Sprint

Generate audit-ready compliance evidence in days, not months. Includes technical documentation, control attestations, and risk assessment artifacts mapped to Articles 9-15.

Frequently asked questions

What are the CISO’s specific responsibilities under the EU AI Act?

CISOs are responsible for implementing technical controls under Articles 9, 14, and 15 (risk management, human oversight, cybersecurity), establishing logging and audit trail infrastructure per Article 12, conducting security testing including adversarial testing for AI systems, managing incident response and reporting obligations under Article 73, and providing evidence of control effectiveness to regulators and auditors.

What logging requirements does Article 12 impose?

Article 12 requires high-risk AI systems to have automatic logging capabilities that ensure traceability throughout the system lifecycle. Logs must record the period of use, reference databases, input data, and persons involved in verification. Logs must be retained for periods appropriate to the system’s purpose, protected by appropriate security measures, and kept tamper-evident. Standard application logs are insufficient; AI-specific decision traces are required.

Can CISOs face personal liability under the EU AI Act?

While the EU AI Act primarily targets organizations with fines up to €35 million or 7% of global revenue, personal liability can arise through director and officer liability under national laws, criminal liability in certain member states for gross negligence, professional negligence claims, and D&O insurance exclusions for regulatory non-compliance. CISOs should document their recommendations and ensure adequate board reporting to mitigate personal exposure.

How does the EU AI Act define cybersecurity requirements?

Article 15 requires high-risk AI systems to achieve appropriate levels of cybersecurity, including resilience against attempts to alter use, behavior, or performance through exploitation of vulnerabilities. This specifically includes technical solutions to address AI-specific vulnerabilities such as data poisoning, model evasion, adversarial attacks, and model extraction. Systems must also protect against unauthorized access to training data and model weights.

What is the serious incident reporting deadline?

Article 73 requires reporting serious incidents to national competent authorities within 15 days of awareness at the latest, with shorter clocks for some cases such as critical-infrastructure disruption and deaths. A serious incident is any incident leading to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of Union law obligations protecting fundamental rights, or serious harm to property or the environment. CISOs must establish incident classification procedures before the August 2026 deadline and maintain communication channels with authorities.

How should CISOs coordinate with General Counsel?

CISOs should work with General Counsel on AI system classification and risk determination, contract requirements for AI vendors and deployers, incident reporting protocols and legal privilege considerations, documentation standards for regulatory defensibility, and coordinated responses to authority information requests under Article 74. Establish regular touchpoints and joint governance structures for effective collaboration.

Make the receipts

Article 12 evidence on demand, not an audit-time scramble.

The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts and tamper-evident decision logs from runtime — Article 12 logging, Article 14 oversight traces, Article 15 robustness evidence. Runtime controls run inside your infrastructure with zero sensitive-data egress. 10 business days, one named workflow, signed evidence pack on day ten.

Book the Agent Runtime Security Sprint · See a sample evidence pack →

Related guides

EU AI Act series hub · Articles, penalty structure, GLACIS coverage map.
Full compliance guide · Risk categories, Articles 9–15 in detail, GPAI, conformity assessment.
For CCOs · Articles 9, 11, 17, 26 framed for the compliance lead.
For General Counsel · Liability allocation, vendor and deployer contracts, extraterritorial scope.
ISO 42001 guide · AI management-system standard.
NIST AI RMF · Risk-management framework crosswalk.