Germany’s implementation status
Germany implements the EU AI Act through the directly applicable EU regulation, supplemented by the national KI-MIG enforcement framework. As the EU’s largest economy with significant AI deployment across automotive, manufacturing, healthcare and financial services, Germany’s approach has outsized influence on how the regulation lands in practice.[1][2]
National implementing legislation
The KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG) — the "AI Market Surveillance and Innovation Promotion Act" — was published in draft in late August 2025 and is being progressed through the new Federal Government’s 2025–2029 programme. KI-MIG establishes:
| Pillar | Detail |
|---|---|
| BNetzA designation | The Federal Network Agency is the main market surveillance authority for the EU AI Act in Germany. |
| KoKIVO coordination centre | Coordination centre for AI and connected objects (Koordinierungsstelle für künstliche Intelligenz und vernetzte Objekte) planned within BNetzA to align supervision across sector authorities. |
| Decentralised supervision | Existing sector regulators (BfArM, BaFin, KBA, Länder authorities) retain their domains. |
| UKIM Independent Chamber | Independent Market Surveillance Chamber (Unabhängige Kammer für die Marktüberwachung) within BNetzA for sensitive high-risk areas. |
| Regulatory sandboxes | Article 57 sandboxes operated by BNetzA. |
| AI Service Desk | Live since July 2025; first point of contact for businesses deploying AI in Germany. |
The EU AI Act (Regulation 2024/1689) is directly applicable across all member states. German organisations must comply with the substantive obligations regardless of where the KI-MIG sits in the legislative process. The KI-MIG sets enforcement mechanisms; it does not change the underlying obligations.
National competent authority and sector overlay
Article 70 requires each member state to designate at least one national competent authority. Germany’s design layers BNetzA on top of existing sector regulators, with BfDI publishing AI / GDPR guidance but explicitly not designated as an AI Act authority.[1][8]
Bundesnetzagentur
BNetzA — an independent higher federal authority under the Federal Ministry for Economic Affairs — already regulates telecommunications, postal services, electricity, gas and railway markets. Its EU AI Act responsibilities are now:
| Function | Detail |
|---|---|
| Market surveillance coordination | Lead authority for AI Act compliance; coordinates inspections, complaints handling and cross-border enforcement. |
| KoKIVO coordination centre | Hosts the planned coordination centre that aligns supervision across sector authorities. |
| AI Service Desk | Operational since July 2025; provides guidance on AI Act compliance, risk classification and documentation. |
| AI Lab | Technical testing facility for evaluating AI systems, conformity assessments and enforcement support. |
| Regulatory sandbox | Operates Article 57 sandboxes for controlled testing under regulatory guidance. |
Sector authorities
Germany’s draft maintains a decentralised supervisory structure. Sector regulators retain AI-related market surveillance in their domains:
| Authority | Domain | AI Act relevance |
|---|---|---|
| BfArM | Medical devices, in-vitro diagnostics | Medical AI, diagnostic algorithms, clinical decision support |
| BaFin | Financial services supervision | Credit scoring, algorithmic trading, insurance underwriting |
| KBA | Motor vehicles and road traffic | Autonomous vehicles, ADAS, vehicle type approval |
| BfDI | Federal data protection | Publishes guidance on AI/GDPR interplay; not designated as AI Act authority |
| State DPAs | Data protection in the Länder | GDPR/AI Act intersection; biometric AI; employee monitoring |
| Länder authorities | Product safety | Consumer AI products, general market surveillance |
UKIM — Independent Market Surveillance Chamber
The draft KI-MIG establishes UKIM (Unabhängige Kammer für die Marktüberwachung) within BNetzA to oversee particularly sensitive high-risk AI. UKIM holds exclusive oversight of AI in:
- Law enforcement — risk assessment, evidence evaluation, crime prediction.
- Migration and asylum — application processing, document verification.
- Border control — biometric identification, risk assessment.
- Justice and democratic processes — judicial-decision support, election-related systems.
UKIM reports annually to the Bundestag on AI deployment in these areas, providing democratic oversight of government AI use.
Implementation timeline and Omnibus framing
The EU AI Act timeline applies uniformly across member states. As of April 2026 the picture is dual-framed: the original Act dates remain the working baseline, while the Council and Parliament negotiate the new dates proposed by the Digital Omnibus on AI.[12]
| Date | Milestone | Notes for Germany |
|---|---|---|
| Aug 2024 | EU AI Act entry into force | Directly applicable across the EU. |
| Feb 2025 | Prohibited practices in force | No public German enforcement actions confirmed as of April 2026. |
| Jul 2025 | BNetzA AI Service Desk live | First point of contact for businesses deploying AI in Germany. |
| Aug 2025 | GPAI obligations live | GPAI Code of Practice signed by ~24 providers; Meta absent, xAI partial. |
| Aug 2026 | High-risk obligations — original date | Working baseline. Continue conformity preparation. |
| Aug 2026 | BNetzA regulatory sandboxes operational | Article 57 sandbox requirement. |
| Dec 2027 | High-risk obligations — proposed under Omnibus | Stand-alone systems if the Digital Omnibus on AI is adopted. |
| Aug 2028 | High-risk obligations — proposed under Omnibus | Systems embedded in regulated products under Annex I (relevant for automotive, medical devices). |
The Digital Omnibus on AI is in trilogue. Until it is adopted, 2 August 2026 remains the operative deadline. Prepare conformity assessments, technical documentation and Article 12 logs against the original date; if the Omnibus shifts the deadlines to 2 December 2027 / 2 August 2028, the same work carries over to the new dates. Embedded automotive AI in particular benefits from the proposed embedded-product extension.
High-risk AI sectors in Germany
Germany’s industrial structure means certain Annex III categories — and the Annex I product-safety pathway — have outsized relevance.[4][5][6]
Automotive and manufacturing
Germany’s automotive industry — Volkswagen, BMW, Mercedes-Benz, Bosch — sits at the intersection of two AI Act pathways:
| Pathway | What it covers |
|---|---|
| Annex I (Article 6(1)) | AI as a safety component of products requiring third-party conformity assessment — vehicle type approval falls here. |
| Annex III §2 | AI managing critical infrastructure including road traffic. |
The Type-Approval Framework Regulation (EU 2018/858) acts as lex specialis for vehicle-related AI safety components; AI Act requirements supplement rather than supersede. The VDA’s KI-Absicherung project develops assurance methods for in-vehicle AI. Key applications for compliance attention: autonomous-driving systems (Level 3+); ADAS features (automatic emergency braking, lane keeping); in-cabin monitoring (driver drowsiness, emotion detection); predictive maintenance (generally minimal risk unless safety-critical).[4]
Healthcare and medical devices
Germany’s healthcare sector and medical-device industry (Siemens Healthineers, Fresenius, B. Braun) face high-risk AI obligations through both the AI Act and the Medical Device Regulation (Regulation 2017/745). BfArM retains supervisory responsibility for AI medical devices. Most clinical decision support, diagnostic AI and treatment-recommendation systems are high-risk and require notified-body conformity assessment (€10,000–€100,000), clinical evaluation, post-market surveillance, and vigilance reporting. The August 2027 extended deadline applies to AI as a medical-device safety component.[6]
Financial services
German financial institutions deploying AI for creditworthiness assessment, insurance underwriting or algorithmic trading face high-risk classification under Annex III §5. BaFin retains supervisory authority; AI Act requirements complement BaFin’s MaRisk minimum requirements for risk management.
Article 12 logging requirements
Article 12 mandates automatic logging across the lifecycle of a high-risk AI system. In Germany the requirement intersects with GDPR (DSGVO), works-council rights under BetrVG, and several sector-specific retention regimes.
Core logging requirements
| Layer | What must be captured |
|---|---|
| Traceability | Period of each use (start and end date/time); reference database against which input data was checked; input data triggering matches; identity of natural persons involved in verifying results. |
| Technical envelope | Logging capabilities ensuring traceability across the system lifecycle; logging level appropriate to the system’s intended purpose; tamper-evident protection; retention period appropriate to purpose. |
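The traceability layer above can be read as a record schema. The sketch below shows one possible shape for such a record in Python; the field names and the `make_record` helper are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record shape for Article 12 traceability data.
# Field names are illustrative, not prescribed by the regulation.
@dataclass(frozen=True)
class TraceabilityRecord:
    use_start: str           # ISO 8601 start of the period of use
    use_end: str             # ISO 8601 end of the period of use
    reference_database: str  # database against which input data was checked
    input_data_ref: str      # reference to the input data that triggered a match
    verifier_id: str         # natural person who verified the results

def make_record(reference_database: str, input_data_ref: str,
                verifier_id: str) -> TraceabilityRecord:
    """Stamp a single-use record with the current UTC time for both bounds."""
    now = datetime.now(timezone.utc).isoformat()
    return TraceabilityRecord(now, now, reference_database,
                              input_data_ref, verifier_id)

record = make_record("sanctions-list-2026-04", "match-0001", "analyst-42")
entry = asdict(record)  # serialisable dict, ready for the logging pipeline
```

In practice the serialised dict would feed whatever append-only store the provider operates; the point is that each of the Article 12 traceability elements maps to a concrete, queryable field.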
German-specific considerations
| Layer | Detail |
|---|---|
| GDPR / DSGVO | Logs containing personal data must satisfy purpose limitation, storage limitation and data-subject rights. Reconcile AI Act logging mandates with GDPR minimisation; BfDI’s AI/GDPR guidance applies.[8] |
| Works council access | Under §80(2) BetrVG, works councils can request access to AI system logs to verify works-agreement compliance and employee-protection provisions.[3][7] |
| Sector retention | Financial services (MaRisk), healthcare (medical records), automotive (product liability) all have retention regimes that must harmonise with Article 12 logging. |
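The "tamper-evident protection" expectation can be met in several ways; hash chaining is one common pattern. The following is a minimal sketch under that assumption — the entry layout and function names are my own, not a mandated format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry in the chain

def append_entry(log: list[dict], payload: dict) -> None:
    """Append a payload, chaining its hash to the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"payload": payload, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edit to any entry breaks verification."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"event": "inference", "verifier": "analyst-42"})
append_entry(log, {"event": "override", "verifier": "analyst-7"})
assert verify_chain(log)
log[0]["payload"]["verifier"] = "tampered"  # any edit invalidates the chain
assert not verify_chain(log)
```

A chained log like this also gives works councils and regulators a cheap way to confirm that nothing was removed or altered between inspections, without needing access to the underlying systems.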
Article 12 logging on demand. The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts mapped to BNetzA, BaFin and BfArM expectations — runtime controls run inside your infrastructure with zero sensitive-data egress, with works-council-friendly access controls and per-sector retention defaults.
Works councils and sector overlays
Employment and works councils (Betriebsrat)
German employers deploying AI in employment contexts face dual compliance: EU AI Act obligations and national co-determination rights under the Works Constitution Act (BetrVG). Employment AI is explicitly high-risk under Annex III §4 — recruitment and candidate screening; task allocation; promotion decisions; performance monitoring; and termination decisions all qualify.[3][7]
Works council rights under BetrVG
The 2021 Works Council Modernisation Act added AI-specific provisions to BetrVG:
| Section | Right | Practical implication |
|---|---|---|
| §80(3) BetrVG | Expert consultation | Works council may engage external AI experts at employer expense. |
| §87(1) No. 6 | Co-determination on monitoring | Veto power over AI systems capable of monitoring employee behaviour or performance. |
| §90(1) No. 3 | Information before introduction | Employer must inform the works council in good time before deploying AI. |
| §95(2a) | Personnel selection guidelines | Works council involvement in AI-based personnel selection criteria. |
Factor 3–6 months additional timeline for works-council negotiations when deploying high-risk employment AI. Works agreements (Betriebsvereinbarungen) covering AI use, data handling and employee protections are typically required before deployment. Failure to secure agreement can result in injunctions blocking system use.
Healthcare sector
Healthcare AI must satisfy both AI Act and medical-device regulations. BfArM oversees AI medical devices; the DiGA (Digital Health Applications) directory has its own AI-specific requirements. The August 2027 extended deadline applies to AI as a medical-device safety component, and German healthcare-privacy rules apply on top of GDPR.
Financial services
BaFin-supervised institutions using AI for credit scoring (Annex III high-risk; full conformity assessment), insurance underwriting (high-risk), or algorithmic trading must align AI Act work with BaFin’s MaRisk minimum requirements for risk management and existing model-risk-management practice (SR 11-7-equivalent).
Conformity assessment pathway
German organisations with high-risk AI systems must complete conformity assessment before the August 2026 working baseline. Two pathways apply, depending on classification:
| Pathway | Detail |
|---|---|
| Internal control (most high-risk) | Provider self-assessment supported by: technical documentation per Annex IV; quality management system (Article 17); post-market monitoring plan; EU declaration of conformity. Typical timeline 3–6 months; cost is internal resourcing. |
| Notified body assessment | Required for biometric identification systems, AI medical devices, and products under Annex I requiring third-party conformity (vehicle type approval). Typical timeline 3–12 months; cost €10,000–€100,000. |
German notified bodies
German notified bodies for AI Act conformity assessment are being designated. Operators should engage early given limited capacity and extended assessment timelines. Bodies with relevant technical competence include TÜV Süd, TÜV Rheinland, DEKRA, and sector-specific bodies designated under existing EU regulations.
Enforcement and penalties
The EU AI Act penalty structure applies uniformly across Germany; BNetzA and sector authorities are empowered to impose fines. No public German enforcement actions for prohibited practices have been confirmed as of April 2026; authorities are completing the institutional set-up under KI-MIG before bringing actions.
Penalty structure
| Violation | Maximum fine (whichever is higher) | Enforcing authority |
|---|---|---|
| Prohibited AI practices | €35,000,000 or 7% global revenue | BNetzA; UKIM for sensitive areas |
| High-risk non-compliance | €15,000,000 or 3% global revenue | BNetzA; sector authorities |
| GPAI obligations | €15,000,000 or 3% global revenue | EU AI Office (direct) |
| Incorrect information to authorities | €7,500,000 or 1% global revenue | BNetzA; sector authorities |
| Transparency violations | €7,500,000 or 1% global revenue | BNetzA; sector authorities |
Enforcement powers
German authorities have extensive investigatory powers under Article 74: access to conformity documentation and technical data; access to training, validation and testing datasets; access to source code and algorithms (protected as confidential); and the power to require corrective action or market withdrawal.
Compliance roadmap for German organisations
The roadmap below builds against 2 August 2026 as the working baseline. If the Digital Omnibus on AI is adopted, the same artefacts move to 2 December 2027 (stand-alone) or 2 August 2028 (embedded — which captures most automotive AI). Embedded vehicle AI in particular benefits from the proposed extension.
| Phase | Detail |
|---|---|
| 01. AI system inventory and classification (Month 1) | Catalogue all AI systems. Classify per Annex III risk categories and Annex I product-safety pathways. Identify systems triggering works-council involvement (§87 BetrVG). Map to BfArM, BaFin, KBA where applicable. |
| 02. Works-council engagement (Month 1–4) | Inform the works council per §90 BetrVG. Draft Betriebsvereinbarung covering AI use, data handling and employee protections. Allow 3–6 months for negotiation and expert consultation. |
| 03. Risk management and documentation (Month 2–5) | Stand up Article 9 risk management. Prepare Annex IV technical documentation. Integrate with ISO 42001 and sector requirements. Document risk mitigation and residual risks. |
| 04. Article 12 logging (Month 3–6) | Deploy Article 12 logging infrastructure. Ensure GDPR/DSGVO compliance for logged personal data. Tamper-evident storage with sector-aligned retention. Prepare works-council and regulator access procedures. |
| 05. Conformity assessment (Month 4–8) | Internal control or notified-body assessment. Prepare EU declaration of conformity, register in EU AI database (Article 71), affix CE marking. For medical AI, coordinate with BfArM and MDR; for vehicles, coordinate with KBA and the type-approval framework. |
| 06. Post-market monitoring (Ongoing) | Article 72 post-market monitoring; Article 73 serious-incident reporting to BNetzA and sector authorities. Periodic reviews; market-surveillance readiness. |
German organisations face tighter effective timelines because works-council negotiations sit on the critical path. A notified-body assessment starting in January 2026 may not complete before the August 2026 working-baseline deadline. Start now, and plan a buffer into Q1 2027 in case Omnibus adoption slips.
FAQ
Who is the competent authority for the EU AI Act in Germany?
BNetzA is Germany’s primary market surveillance authority under the KI-MIG draft. BNetzA coordinates AI Act supervision, operates the AI Service Desk and AI Lab, and will host the KoKIVO coordination centre. Sector authorities — BfArM for medical devices, BaFin for financial services, KBA for vehicles — retain responsibility in their domains. UKIM (Independent Market Surveillance Chamber) oversees sensitive high-risk systems in law enforcement, migration, asylum, border control and justice.
What is the KI-Verordnung and when does it apply?
KI-Verordnung is the German term for the EU AI Act (Regulation 2024/1689). Germany is implementing the enforcement architecture through the KI-Marktüberwachungsgesetz und Innovationsförderungsgesetz (KI-MIG). The EU AI Act is directly applicable: prohibited practices since February 2025; GPAI obligations since August 2025; high-risk obligations scheduled for 2 August 2026 (with the Digital Omnibus on AI proposing 2 December 2027 and 2 August 2028).
Do German works councils have rights regarding AI systems?
Yes — extensive ones. Under the Works Constitution Act (BetrVG), employers must inform the works council before introducing AI (§90), works councils can consult external AI experts at employer expense (§80), they hold co-determination rights over systems that could monitor employees (§87), and they must be involved in AI-based personnel selection guidelines (§95). These rights apply in addition to EU AI Act deployer obligations and typically require negotiated works agreements before deployment.
How does the EU AI Act affect German automotive companies?
Most AI in autonomous vehicles and ADAS is high-risk when used as a safety component. Vehicle-related AI safety components are primarily regulated through the Type-Approval Framework Regulation (EU 2018/858), with AI Act requirements supplementary. German automakers must complete conformity assessment by the working baseline of August 2026; the proposed Omnibus extension to 2 August 2028 for embedded products would help here. The VDA’s KI-Absicherung project develops in-vehicle AI assurance methods.
What are the penalties in Germany?
Penalties mirror the EU ceilings: up to €35M or 7% of global turnover for prohibited practices; €15M or 3% for high-risk non-compliance and GPAI obligations; €7.5M or 1% for incorrect information or transparency violations. BNetzA and sector authorities enforce; UKIM oversees the sensitive areas.
What is Article 12 logging and why does it matter in Germany?
Article 12 requires high-risk AI systems to log events automatically across the lifecycle. In Germany this intersects with GDPR/DSGVO, works-council information rights under §80 BetrVG, and sector retention rules (MaRisk for finance, healthcare records, automotive product liability). Logs must be tamper-evident, retained appropriately and available to BNetzA on request.
Are there AI regulatory sandboxes in Germany?
Yes. Article 57 requires every member state to operate at least one AI regulatory sandbox by August 2026. The KI-MIG draft assigns sandbox operation to BNetzA. The sandbox provides a controlled environment for development and testing under regulatory supervision before full market launch.
References
1. Technology’s Legal Edge. "State of the Act: EU AI Act implementation in key Member States." Updated 2025–2026. technologyslegaledge.com
2. Pinsent Masons. "AI Act: Germany consults on implementation law." 2025. pinsentmasons.com
3. Hogan Lovells. "AI in German Employment — Navigating the AI Act, GDPR, and National Legislation." 2024. hoganlovells.com
4. VDA. "Position: AI Act." 2023. vda.de
5. Taylor Wessing. "AI Act and the Automotive Industry — where does the road lead?" March 2025. taylorwessing.com
6. European Union. "Regulation (EU) 2024/1689." OJEU, 12 July 2024. EUR-Lex
7. Bird & Bird. "First Judgement on the Rights of Works Councils when Employees use AI Systems." 2024. twobirds.com
8. White & Case. "AI Watch: Global Regulatory Tracker — Germany." Updated 2025–2026. whitecase.com
9. Chambers and Partners. "Artificial Intelligence 2025 — Germany." Practice Guide, 2025. chambers.com
10. DLA Piper. "German government provides information on its plans for AI and employee protection." 2024. dlapiper.com
11. Simmons & Simmons. "Germany’s Implementation Act for the EU AI Act." 2025. simmons-simmons.com
12. European Parliament. "Artificial Intelligence Act: delayed application, ban on nudifier apps." 23 March 2026. europarl.europa.eu