GLACIS·EU AI Act series·DE Germany·Updated April 2026

The EU AI Act in Germany: BNetzA, KoKIVO and the August 2026 deadline.

BNetzA is now confirmed as Germany’s main market surveillance authority under the KI-MIG draft, with the KoKIVO coordination centre planned. BaFin keeps finance-sector AI; BfDI publishes AI / GDPR guidance but is explicitly not designated. This page is the General Counsel, CCO, CISO and DPO view of how those pieces fit together in April 2026 — and where the August 2026 deadline sits under the Digital Omnibus on AI.

  • Feb 2025: Prohibited practices in force
  • Jul 2025: BNetzA AI Service Desk operational
  • Aug 2025: GPAI obligations live; KI-MIG legislative process active
  • Aug 2026: High-risk obligations scheduled; under Omnibus review

What changed in April 2026

BNetzA is now formally designated as Germany’s main market surveillance authority for the EU AI Act under the KI-Marktüberwachungsgesetz (KI-MIG) draft. The KoKIVO coordination centre — Koordinierungsstelle für künstliche Intelligenz und vernetzte Objekte — is planned within BNetzA to align supervision across sector authorities. BaFin retains finance-sector AI supervision; BfDI publishes guidance on AI/GDPR interplay but is not a designated AI Act authority.[1][2][8]

The Digital Omnibus on AI moved into trilogue on 23 March 2026. Both Parliament (IMCO/LIBE) and the Council favour fixed deadlines: 2 December 2027 for stand-alone high-risk systems and 2 August 2028 for systems embedded in regulated products. Until adoption, 2 August 2026 remains the working baseline.

Germany’s Federal Government Agreement (2025–2029) re-confirms the BNetzA-led model and commits to passing KI-MIG. The unscheduled parliamentary elections of early 2025 delayed national legislation; the EU AI Act remains directly applicable.[1]

Executive summary

Germany — the EU’s largest economy and home to major automotive, manufacturing and healthcare sectors — has settled on a multi-authority architecture led by BNetzA. The Federal Network Agency operates the AI Service Desk (live since July 2025), an AI Lab for technical testing, and will host the KoKIVO coordination centre under the KI-MIG draft.[1]

Germany’s national specifics layer onto the EU AI Act: works-council (Betriebsrat) co-determination rights under the Works Constitution Act (BetrVG) for any AI capable of monitoring employee performance; BfArM oversight for medical AI; BaFin for financial services; KBA for vehicle type approval. The KI-MIG also creates an Independent Market Surveillance Chamber (UKIM) for sensitive high-risk areas like law enforcement, migration, justice and democratic processes.[1][2][3]

The practical compliance position in April 2026: keep preparing for 2 August 2026 as the working baseline; assume BNetzA-led supervision; factor 3–6 months of works-council negotiations into deployment timelines; and watch the Omnibus trilogue closely — the new dates may shift to 2 December 2027 / 2 August 2028 before the year is out.

Germany’s implementation status

Germany is implementing the EU AI Act through directly applicable EU regulation plus the national KI-MIG framework. As the EU’s largest economy with significant AI deployment across automotive, manufacturing, healthcare and financial services, Germany’s approach has outsized influence on how the regulation lands in practice.[1][2]

National implementing legislation

The KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG) — the "AI Market Surveillance and Innovation Promotion Act" — was published in draft in late August 2025 and is being progressed through the new Federal Government’s 2025–2029 programme. KI-MIG establishes:

  • BNetzA designation: The Federal Network Agency is the main market surveillance authority for the EU AI Act in Germany.
  • KoKIVO coordination centre: Coordination centre for AI and connected objects (Koordinierungsstelle für künstliche Intelligenz und vernetzte Objekte), planned within BNetzA to align supervision across sector authorities.
  • Decentralised supervision: Existing sector regulators (BfArM, BaFin, KBA, Länder authorities) retain their domains.
  • UKIM Independent Chamber: Independent Market Surveillance Chamber (Unabhängige Kammer für die Marktüberwachung) within BNetzA for sensitive high-risk areas.
  • Regulatory sandboxes: Article 57 sandboxes operated by BNetzA.
  • AI Service Desk: Live since July 2025; first point of contact for businesses deploying AI in Germany.

Direct applicability

The EU AI Act (Regulation 2024/1689) is directly applicable across all member states. German organisations must comply with the substantive obligations regardless of where the KI-MIG sits in the legislative process. The KI-MIG sets enforcement mechanisms; it does not change the underlying obligations.

National competent authority and sector overlay

Article 70 requires each member state to designate at least one national competent authority. Germany’s design layers BNetzA on top of existing sector regulators; BfDI publishes AI/GDPR guidance but is explicitly not designated as an AI Act authority.[1][8]

Bundesnetzagentur

BNetzA — an independent higher federal authority under the Federal Ministry for Economic Affairs — already regulates telecommunications, postal services, electricity, gas and railway markets. Its EU AI Act responsibilities are now:

  • Market surveillance coordination: Lead authority for AI Act compliance; coordinates inspections, complaints handling and cross-border enforcement.
  • KoKIVO coordination centre: Hosts the planned coordination centre that aligns supervision across sector authorities.
  • AI Service Desk: Operational since July 2025; provides guidance on AI Act compliance, risk classification and documentation.
  • AI Lab: Technical testing facility for evaluating AI systems, conformity assessments and enforcement support.
  • Regulatory sandbox: Operates Article 57 sandboxes for controlled testing under regulatory guidance.

Sector authorities

Germany’s draft maintains a decentralised supervisory structure. Sector regulators retain AI-related market surveillance in their domains:

  • BfArM (medical devices, in-vitro diagnostics): medical AI, diagnostic algorithms, clinical decision support.
  • BaFin (financial services supervision): credit scoring, algorithmic trading, insurance underwriting.
  • KBA (motor vehicles and road traffic): autonomous vehicles, ADAS, vehicle type approval.
  • BfDI (federal data protection): publishes guidance on the AI/GDPR interplay; not designated as an AI Act authority.
  • State DPAs (data protection in the Länder): GDPR/AI Act intersection; biometric AI; employee monitoring.
  • Länder authorities (product safety): consumer AI products, general market surveillance.

UKIM — Independent Market Surveillance Chamber

The draft KI-MIG establishes UKIM (Unabhängige Kammer für die Marktüberwachung) within BNetzA to oversee particularly sensitive high-risk AI. UKIM holds exclusive oversight of AI in:

  • Law enforcement — risk assessment, evidence evaluation, crime prediction.
  • Migration and asylum — application processing, document verification.
  • Border control — biometric identification, risk assessment.
  • Justice and democratic processes — judicial-decision support, election-related systems.

UKIM reports annually to the Bundestag on AI deployment in these areas, providing democratic oversight of government AI use.

Implementation timeline and Omnibus framing

The EU AI Act timeline applies uniformly across member states. In April 2026 the picture is dual-framed: the original Act dates remain the working baseline, while the Digital Omnibus on AI proposes new dates that the Council and Parliament are negotiating.[12]

  • Aug 2024: EU AI Act entry into force. Directly applicable across the EU.
  • Feb 2025: Prohibited practices in force. No public German enforcement actions confirmed as of April 2026.
  • Jul 2025: BNetzA AI Service Desk live. First point of contact for businesses deploying AI in Germany.
  • Aug 2025: GPAI obligations live. GPAI Code of Practice signed by ~24 providers; Meta absent, xAI partial.
  • Aug 2026: High-risk obligations (original date). Working baseline; continue conformity preparation.
  • Aug 2026: BNetzA regulatory sandboxes operational (Article 57 requirement).
  • Dec 2027: High-risk obligations for stand-alone systems, proposed under the Omnibus, if the Digital Omnibus on AI is adopted.
  • Aug 2028: High-risk obligations for systems embedded in regulated products under Annex I, proposed under the Omnibus (relevant for automotive and medical devices).

Working baseline

The Digital Omnibus on AI is in trilogue. Until it is adopted, 2 August 2026 is the operative deadline. Build conformity, technical documentation and Article 12 logs against the original date; if the Omnibus shifts to 2 December 2027 / 2 August 2028, the work translates directly to the new dates. Embedded automotive AI in particular benefits from the proposed embedded-product extension.

High-risk AI sectors in Germany

Germany’s industrial structure means certain Annex III categories — and the Annex I product-safety pathway — have outsized relevance.[4][5][6]

Automotive and manufacturing

Germany’s automotive industry — Volkswagen, BMW, Daimler, Bosch — sits at the intersection of two AI Act pathways:

  • Annex I (Article 6(1)): AI as a safety component of products requiring third-party conformity assessment; vehicle type approval falls here.
  • Annex III §2: AI managing critical infrastructure, including road traffic.

The Type-Approval Framework Regulation (EU 2018/858) acts as lex specialis for vehicle-related AI safety components; AI Act requirements supplement rather than supersede. The VDA’s KI-Absicherung project develops assurance methods for in-vehicle AI. Key applications for compliance attention: autonomous-driving systems (Level 3+); ADAS features (automatic emergency braking, lane keeping); in-cabin monitoring (driver drowsiness, emotion detection); predictive maintenance (generally minimal risk unless safety-critical).[4]

Healthcare and medical devices

Germany’s healthcare sector and medical-device industry (Siemens Healthineers, Fresenius, B. Braun) face high-risk AI obligations through both the AI Act and the Medical Device Regulation (Regulation 2017/745). BfArM retains supervisory responsibility for AI medical devices. Most clinical decision support, diagnostic AI and treatment-recommendation systems are high-risk and require notified-body conformity assessment (€10,000–€100,000), clinical evaluation, post-market surveillance, and vigilance reporting. The August 2027 extended deadline applies to AI as a medical-device safety component.[6]

Financial services

German financial institutions deploying AI for creditworthiness assessment, insurance underwriting or algorithmic trading face high-risk classification under Annex III §5. BaFin retains supervisory authority; AI Act requirements complement BaFin’s MaRisk minimum requirements for risk management.

Article 12 logging requirements

Article 12 mandates automatic logging across the lifecycle of a high-risk AI system. In Germany the requirement intersects with GDPR (DSGVO), works-council rights under BetrVG, and several sector-specific retention regimes.

Core logging requirements

  • Traceability: the period of each use (start and end date/time); the reference database against which input data was checked; the input data that triggered a match; the identity of the natural persons involved in verifying results.
  • Technical envelope: logging capabilities that ensure traceability across the system lifecycle; a logging level appropriate to the system’s intended purpose; tamper-evident protection; a retention period appropriate to purpose.
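
The Act does not prescribe a storage format; one common way to satisfy the tamper-evidence requirement above is hash-chained, append-only records. A minimal Python sketch — field names are illustrative, not an official Article 12 schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_log_entry(prev_hash: str, system_id: str, event: dict) -> dict:
    """Append-only log entry: each record embeds the hash of its
    predecessor so later tampering is detectable."""
    record = {
        "system_id": system_id,   # which high-risk AI system
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,           # e.g. input reference, match, reviewer identity
        "prev_hash": prev_hash,   # hash of the previous record ("genesis" for the first)
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(entries: list[dict]) -> bool:
    """Recompute every hash and confirm the chain links are intact."""
    prev = "genesis"
    for e in entries:
        if e["prev_hash"] != prev:
            return False
        body = {k: v for k, v in e.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Because every record embeds its predecessor’s hash, editing any entry after the fact changes its hash and breaks all subsequent links, which `verify_chain` detects.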

German-specific considerations

  • GDPR / DSGVO: Logs containing personal data must satisfy purpose limitation, storage limitation and data-subject rights. Reconcile AI Act logging mandates with GDPR minimisation; BfDI’s AI/GDPR guidance applies.[8]
  • Works-council access: Under §80(2) BetrVG, works councils can request access to AI system logs to verify compliance with works agreements and employee-protection provisions.[3][7]
  • Sector retention: Financial services (MaRisk), healthcare (medical records) and automotive (product liability) all have retention regimes that must be harmonised with Article 12 logging.

Build the evidence trail

Article 12 logging on demand. The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts mapped to BNetzA, BaFin and BfArM expectations — runtime controls run inside your infrastructure with zero sensitive-data egress, with works-council-friendly access controls and per-sector retention defaults.

Book the Agent Runtime Security Sprint

Works councils and sector overlays

Employment and works councils (Betriebsrat)

German employers deploying AI in employment contexts face dual compliance: EU AI Act obligations and national co-determination rights under the Works Constitution Act (BetrVG). Employment AI is explicitly high-risk under Annex III §4 — recruitment and candidate screening; task allocation; promotion decisions; performance monitoring; and termination decisions all qualify.[3][7]

Works council rights under BetrVG

The 2021 Works Council Modernisation Act added AI-specific provisions to BetrVG:

  • §80(3) BetrVG (expert consultation): The works council may engage external AI experts at the employer’s expense.
  • §87(1) No. 6 (co-determination on monitoring): Veto power over AI systems capable of monitoring employee behaviour or performance.
  • §90(1) No. 3 (information before introduction): The employer must inform the works council in good time before deploying AI.
  • §95(2a) (personnel selection guidelines): Works-council involvement in AI-based personnel selection criteria.

Critical planning factor

Factor 3–6 months additional timeline for works-council negotiations when deploying high-risk employment AI. Works agreements (Betriebsvereinbarungen) covering AI use, data handling and employee protections are typically required before deployment. Failure to secure agreement can result in injunctions blocking system use.

Healthcare sector

Healthcare AI must satisfy both AI Act and medical-device regulations. BfArM oversees AI medical devices; the DiGA (Digital Health Applications) directory has its own AI-specific requirements. The August 2027 extended deadline applies to AI as a medical-device safety component, and German healthcare-privacy rules apply on top of GDPR.

Financial services

BaFin-supervised institutions using AI for credit scoring (Annex III high-risk; full conformity assessment), insurance underwriting (high-risk), or algorithmic trading must align AI Act work with BaFin’s MaRisk minimum requirements for risk management and existing model-risk-management practice (SR 11-7-equivalent).

Conformity assessment pathway

German organisations with high-risk AI systems must complete conformity assessment before the August 2026 working baseline. Two pathways apply, depending on classification:

  • Internal control (most high-risk systems): Provider self-assessment supported by technical documentation per Annex IV, a quality management system (Article 17), a post-market monitoring plan and an EU declaration of conformity. Typical timeline 3–6 months; the cost is internal resourcing.
  • Notified-body assessment: Required for biometric identification systems, AI medical devices, and products under Annex I requiring third-party conformity (vehicle type approval). Typical timeline 3–12 months; cost €10,000–€100,000.
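
The split between the two pathways reduces to a simple triage rule. A sketch of that rule under the classification above — the field names are invented for illustration, not an official taxonomy:

```python
def conformity_pathway(system: dict) -> str:
    """Rough triage between the two conformity-assessment routes.
    Keys are hypothetical flags set during AI-system classification."""
    if system.get("annex_i_third_party"):       # e.g. vehicle type approval, medical device
        return "notified body"
    if system.get("biometric_identification"):  # biometric ID systems need third-party review
        return "notified body"
    if system.get("high_risk"):                 # remaining Annex III high-risk systems
        return "internal control"
    return "no conformity assessment required"
```

In practice the classification itself is the hard part; once a system is tagged, the routing is mechanical.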

German notified bodies

German notified bodies for AI Act conformity assessment are being designated. Operators should engage early given limited capacity and extended assessment timelines. Bodies with relevant technical competence include TÜV Süd, TÜV Rheinland, DEKRA, and sector-specific bodies designated under existing EU regulations.

Enforcement and penalties

The EU AI Act penalty structure applies uniformly across Germany; BNetzA and sector authorities are empowered to impose fines. No public German enforcement actions for prohibited practices have been confirmed in April 2026 — authorities are completing the institutional set-up under KI-MIG before bringing actions.

Penalty structure

  • Prohibited AI practices: up to €35,000,000 or 7% of global revenue. BNetzA; UKIM for sensitive areas.
  • High-risk non-compliance: up to €15,000,000 or 3% of global revenue. BNetzA and sector authorities.
  • GPAI obligations: up to €15,000,000 or 3% of global revenue. EU AI Office (direct).
  • Incorrect information to authorities: up to €7,500,000 or 1% of global revenue. BNetzA and sector authorities.
  • Transparency violations: up to €7,500,000 or 1% of global revenue. BNetzA and sector authorities.
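
For undertakings, Article 99 frames each ceiling as the fixed amount or the turnover percentage, whichever is higher (for SMEs, the lower of the two applies). A quick calculation:

```python
def max_fine(fixed_cap_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Ceiling for an undertaking under Article 99: the fixed amount or
    the percentage of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, pct * global_turnover_eur)

# Prohibited-practice ceiling for a company with EUR 2bn global turnover:
# max(EUR 35m, 7% of EUR 2bn) = EUR 140m
ceiling = max_fine(35_000_000, 0.07, 2_000_000_000)
```

For any group with more than €500m in worldwide turnover, the percentage branch dominates the fixed cap, so the headline euro figures understate the real exposure.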

Enforcement powers

German authorities have extensive investigatory powers under Article 74: access to conformity documentation and technical data; access to training, validation and testing datasets; access to source code and algorithms (protected as confidential); and the power to require corrective action or market withdrawal.

Compliance roadmap for German organisations

The roadmap below builds against 2 August 2026 as the working baseline. If the Digital Omnibus on AI is adopted, the same artefacts move to 2 December 2027 (stand-alone) or 2 August 2028 (embedded — which captures most automotive AI). Embedded vehicle AI in particular benefits from the proposed extension.

  • 01. AI system inventory and classification (month 1): Catalogue all AI systems; classify per Annex III risk categories and Annex I product-safety pathways; identify systems triggering works-council involvement (§87 BetrVG); map to BfArM, BaFin or KBA where applicable.
  • 02. Works-council engagement (months 1–4): Inform the works council per §90 BetrVG; draft a Betriebsvereinbarung covering AI use, data handling and employee protections; allow 3–6 months for negotiation and expert consultation.
  • 03. Risk management and documentation (months 2–5): Stand up Article 9 risk management; prepare Annex IV technical documentation; integrate with ISO 42001 and sector requirements; document risk mitigation and residual risks.
  • 04. Article 12 logging (months 3–6): Deploy Article 12 logging infrastructure; ensure GDPR/DSGVO compliance for logged personal data; tamper-evident storage with sector-aligned retention; prepare works-council and regulator access procedures.
  • 05. Conformity assessment (months 4–8): Internal control or notified-body assessment; prepare the EU declaration of conformity, register in the EU AI database (Article 71) and affix CE marking. For medical AI, coordinate with BfArM and the MDR; for vehicles, with KBA and the type-approval framework.
  • 06. Post-market monitoring (ongoing): Article 72 post-market monitoring; Article 73 serious-incident reporting to BNetzA and sector authorities; periodic reviews; market-surveillance readiness.

Critical timing insight

German organisations face tighter effective timelines because works-council negotiations sit on the critical path. A notified-body assessment starting in January 2026 may not complete before the working-baseline August 2026 deadline. Start now, and plan buffer into Q1 2027 in case Omnibus adoption slips.
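
Working backwards from the deadline with the worst-case durations in this guide (12 months of notified-body assessment plus 6 months of works-council negotiation, assumed to run sequentially) gives a rough latest-start date:

```python
from datetime import date

def months_before(d: date, months: int) -> date:
    """Step back a whole number of months; the day is clamped to the 1st
    for simplicity, since this is planning-level arithmetic."""
    total = d.year * 12 + (d.month - 1) - months
    return date(total // 12, total % 12 + 1, 1)

DEADLINE = date(2026, 8, 2)  # working-baseline high-risk date

# Worst case from this guide: 12 months notified-body assessment
# + 6 months works-council negotiation on the critical path.
latest_start = months_before(DEADLINE, 12 + 6)  # date(2025, 2, 1)
```

On those assumptions a worst-case programme should already have started in early 2025; anything beginning now is implicitly betting on shorter assessments, parallel workstreams, or the Omnibus dates.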

FAQ

Who is the competent authority for the EU AI Act in Germany?

BNetzA is Germany’s primary market surveillance authority under the KI-MIG draft. BNetzA coordinates AI Act supervision, operates the AI Service Desk and AI Lab, and will host the KoKIVO coordination centre. Sector authorities — BfArM for medical devices, BaFin for financial services, KBA for vehicles — retain responsibility in their domains. UKIM (Independent Market Surveillance Chamber) oversees sensitive high-risk systems in law enforcement, migration, asylum, border control and justice.

What is the KI-Verordnung and when does it apply?

KI-Verordnung is the German term for the EU AI Act (Regulation 2024/1689). Germany is implementing the enforcement architecture through the KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG). The EU AI Act is directly applicable: prohibited practices since February 2025; GPAI obligations since August 2025; high-risk obligations scheduled for 2 August 2026 (with the Digital Omnibus on AI proposing 2 December 2027 and 2 August 2028).

Do German works councils have rights regarding AI systems?

Yes — extensive ones. Under the Works Constitution Act (BetrVG), employers must inform the works council before introducing AI (§90), works councils can consult external AI experts at employer expense (§80), they hold co-determination rights over systems that could monitor employees (§87), and they must be involved in AI-based personnel selection guidelines (§95). These rights apply in addition to EU AI Act deployer obligations and typically require negotiated works agreements before deployment.

How does the EU AI Act affect German automotive companies?

Most AI in autonomous vehicles and ADAS is high-risk when used as a safety component. Vehicle-related AI safety components are primarily regulated through the Type-Approval Framework Regulation (EU 2018/858), with AI Act requirements supplementary. German automakers must complete conformity assessment by the working baseline of August 2026; the proposed Omnibus extension to 2 August 2028 for embedded products would help here. The VDA’s KI-Absicherung project develops in-vehicle AI assurance methods.

What are the penalties in Germany?

Penalties mirror the EU ceilings: up to €35M or 7% of global turnover for prohibited practices; €15M or 3% for high-risk non-compliance and GPAI obligations; €7.5M or 1% for incorrect information or transparency violations. BNetzA and sector authorities enforce; UKIM oversees the sensitive areas.

What is Article 12 logging and why does it matter in Germany?

Article 12 requires high-risk AI systems to log events automatically across the lifecycle. In Germany this intersects with GDPR/DSGVO, works-council information rights under §80 BetrVG, and sector retention rules (MaRisk for finance, healthcare records, automotive product liability). Logs must be tamper-evident, retained appropriately and available to BNetzA on request.

Are there AI regulatory sandboxes in Germany?

Yes. Article 57 requires every member state to operate at least one AI regulatory sandbox by August 2026. The KI-MIG draft assigns sandbox operation to BNetzA. The sandbox provides a controlled environment for development and testing under regulatory supervision before full market launch.

References

  1. Technology’s Legal Edge. "State of the Act: EU AI Act implementation in key Member States." Updated 2025–2026. technologyslegaledge.com
  2. Pinsent Masons. "AI Act: Germany consults on implementation law." 2025. pinsentmasons.com
  3. Hogan Lovells. "AI in German Employment — Navigating the AI Act, GDPR, and National Legislation." 2024. hoganlovells.com
  4. VDA. "Position: AI Act." 2023. vda.de
  5. Taylor Wessing. "AI Act and the Automotive Industry — where does the road lead?" March 2025. taylorwessing.com
  6. European Union. "Regulation (EU) 2024/1689." OJEU, 12 July 2024. EUR-Lex
  7. Bird & Bird. "First Judgement on the Rights of Works Councils when Employees use AI Systems." 2024. twobirds.com
  8. White & Case. "AI Watch: Global Regulatory Tracker — Germany." Updated 2025–2026. whitecase.com
  9. Chambers and Partners. "Artificial Intelligence 2025 — Germany." Practice Guide, 2025. chambers.com
  10. DLA Piper. "German government provides information on its plans for AI and employee protection." 2024. dlapiper.com
  11. Simmons & Simmons. "Germany’s Implementation Act for the EU AI Act." 2025. simmons-simmons.com
  12. European Parliament. "Artificial Intelligence Act: delayed application, ban on nudifier apps." 23 March 2026. europarl.europa.eu

Ready to make the receipts

EU AI Act compliance in days, not months.

The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts that your AI controls execute correctly — mapped to Articles 9–15, ISO 42001, NIST AI RMF, and BNetzA / BaFin / BfArM expectations. Runtime controls run inside your infrastructure with zero sensitive-data egress. Get an audit-ready evidence pack before the August 2026 working baseline (or whatever the Omnibus settles on).

Book the Agent Runtime Security Sprint See a sample evidence pack →

Related guides

  • Full EU AI Act guide: risk categories, Articles 9–15 in detail, GPAI obligations, conformity-assessment paths, Omnibus status.
  • EU AI Act in Spain: AESIA, the December 2025 guidance pack, the regulatory sandbox, the draft national AI law.
  • EU AI Act in Italy: Law 132/2025 in force; the AgID / ACN / Garante triangle; October 2026 implementing decrees.
  • ISO 42001 guide: the AI management system standard.
  • AI governance tools: market analysis and vendor comparison.