FR · EU AI Act series · France implementation · Updated April 2026

The EU AI Act in France: a decentralised model, still settling.

France has chosen to spread EU AI Act competences across existing regulators rather than create a single new authority. CNIL leads on prohibited practices, ANSSI now holds the AI-cybersecurity competences, and PEReN supports technical monitoring. The omnibus bill that would formally name every national competent authority is still awaiting approval in Parliament as of April 2026.

  • Feb 2025 · Prohibited practices in force; CNIL begins active oversight
  • Aug 2025 · GPAI obligations live; France missed the formal authority-designation deadline
  • Apr 2026 · ANSSI tasked with EU AI Act cybersecurity competences
  • Aug 2026 · High-risk obligations scheduled; under Digital Omnibus review
What changed in April 2026 — France

Three things moved this quarter. ANSSI was formally tasked with the EU AI Act cybersecurity competences, anchoring Article 15 robustness and resilience supervision. The multi-authority bill that would name every national competent authority remains pending in Parliament — INESIA continues to operate as a coordination body without a statutory designation. And the Digital Omnibus on AI moved into trilogue on 23 March 2026, with proposed delays pushing high-risk obligations to 2 December 2027 (stand-alone systems) and 2 August 2028 (embedded systems).

The working baseline for French operators is unchanged: prepare for 2 August 2026 as if it will hold. The Article 12 logging and Article 11 technical-documentation obligations are not the subject of any proposed delay.
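Article 12 asks for automatic, tamper-evident records of a high-risk system's operation. A minimal illustration of the tamper-evidence idea is a hash-chained append-only log, where each record commits to its predecessor so any later edit is detectable. This is a hypothetical sketch of the concept only — not a conformity artefact and not any specific vendor's implementation; field names and event shapes are invented for illustration.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the first record in the chain

def append_event(log: list, event: dict) -> dict:
    """Append an event to a hash-chained log; editing any record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    record = {
        "ts": time.time(),   # when the event was recorded
        "event": event,      # e.g. input reference, decision, operator id
        "prev": prev_hash,   # commitment to the previous record
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log: list) -> bool:
    """Recompute every hash and link; returns False if any record was altered."""
    prev = GENESIS
    for rec in log:
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        if rec["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"type": "inference", "model": "demo", "outcome": "approved"})
append_event(log, {"type": "override", "operator": "reviewer-1"})
assert verify_chain(log)
log[0]["event"]["outcome"] = "denied"  # retroactive edits are detectable
assert not verify_chain(log)
```

A production log would add durable storage and external signing or timestamping, but the chaining step above is what makes after-the-fact edits visible to an assessor.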

Who supervises what in France

France’s distinctive choice is decentralisation. Existing sector regulators retain their domains; the EU AI Act adds new competences on top. Until the omnibus bill clears Parliament, the table below is provisional in scope but operational in practice — these authorities are already exercising their AI Act roles.

Authority · Mandate · EU AI Act role (April 2026)
  • CNIL · Data protection, fundamental rights · Prohibited practices in workplaces and education (emotion recognition, biometric categorisation), Article 50 transparency, GDPR–AI Act overlap. Published a 2025–2028 strategic plan with AI as a priority axis.
  • ANSSI · National cybersecurity agency · AI Act cybersecurity competences (formally tasked April 2026): Article 15 robustness, accuracy and cybersecurity supervision, including resilience to adversarial input.
  • PEReN · Pôle d’Expertise de la Régulation Numérique · Technical monitoring, model probing, and support to other regulators on auditability and post-market surveillance.
  • DGCCRF · Consumer and competition authority · Coordination role; commercial-manipulation prohibitions; single point of contact pending statutory designation.
  • ACPR · Banking and insurance prudential supervisor · Sector-specific high-risk AI in financial services: credit scoring, insurance underwriting, anti-fraud. Conformity-assessment overlap with prudential rules.
  • ARCOM · Audiovisual and digital communications regulator · Information integrity, deepfakes, AI-generated media; Article 50 disclosure interface for media providers.
  • ANSM, HAS · Medicines agency and health authority · Healthcare AI with MDR/IVDR overlap; ambient documentation, clinical decision support, and diagnostic AI under dual conformity routes.
  • Défenseur des droits · Independent rights ombudsman · Article 77 fundamental-rights body; discrimination monitoring of AI-affected decisions.
INESIA in context

INESIA (Institut National d’Évaluation et de Sécurité de l’Intelligence Artificielle), launched February 2025, coordinates ANSSI, Inria, LNE and PEReN on AI safety. It is not a market-surveillance authority under Article 70 — that role still awaits the multi-authority bill. Treat INESIA as a technical convening body, not a designation.

French sector overlays

The Articles 9–15 obligations themselves are EU-wide. What differs in France is the supervisory stack you face on top of them. The most common combinations:

Sector · French regulators on top of the AI Act
  • Healthcare AI · ANSM (medical-device approval, vigilance), HAS (clinical-evaluation guidance), CNIL (Health Data Hub authorisations), plus the AI Act conformity route. Ambient scribes and clinical decision support are typically high-risk under Annex III.
  • Financial services · ACPR for prudential conformity on credit scoring and insurance underwriting; AMF for market-conduct AI; Banque de France for payment-fraud models. Anti-money-laundering models continue under existing CRR/MiFID frameworks.
  • Public administration · Heightened CNIL scrutiny of automated decisions affecting citizens; Défenseur des droits as the Article 77 fundamental-rights body; Conseil d’État jurisprudence on algorithmic transparency.
  • Workplace and education · CNIL holds the prohibited-practice line on emotion recognition and biometric categorisation. Ministry of Labour and Ministry of National Education guidance applies on top of the EU AI Act.
  • Media and content · ARCOM on information integrity, deepfakes, and generative-AI labelling; coordination with the EU Code of Practice on Disinformation.
  • Defence and dual-use · The Article 2(3) defence carve-out applies; Ministry of Armed Forces ethical principles govern voluntary practice. AI in dual-use industrial systems remains in scope.

Data-protection overlay: Loi Informatique et Libertés

French AI deployments rarely sit on AI Act obligations alone. The Loi Informatique et Libertés (Law 78-17, as amended) and GDPR remain the operative data-protection regime; CNIL enforces both. The most common overlaps:

  • Article 10 (data governance) and GDPR Articles 5–6. Lawful basis for training data, purpose limitation, accuracy.
  • Article 13 (transparency) and GDPR Articles 13–14. Information to data subjects; CNIL’s stricter expectations on automated-decision disclosures.
  • Article 14 (human oversight) and GDPR Article 22. Meaningful human review of consequential decisions.
  • Article 26 (deployer obligations) and CNIL data-protection impact assessments. With care, a single artefact can often cover both.

CNIL’s published "AI how-to sheets" (fiches IA) remain the most practical bridge between the two regimes for French operators.
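The overlap pairs above lend themselves to a simple tracking structure, so that one compliance review covers both regimes. The sketch below is purely illustrative — a hypothetical checklist helper, not a CNIL or AI Act artefact; only the article pairings listed above are taken from the source.

```python
# Map each AI Act obligation to the GDPR provisions it overlaps with,
# per the pairings listed above (Art. 26 pairs with the GDPR Art. 35 DPIA).
OVERLAPS: dict[str, list[str]] = {
    "AI Act Art. 10 (data governance)": ["GDPR Art. 5", "GDPR Art. 6"],
    "AI Act Art. 13 (transparency)":    ["GDPR Art. 13", "GDPR Art. 14"],
    "AI Act Art. 14 (human oversight)": ["GDPR Art. 22"],
    "AI Act Art. 26 (deployer duties)": ["GDPR Art. 35 (DPIA)"],
}

def open_items(evidenced: set[str]) -> list[str]:
    """Return AI Act obligations whose GDPR counterparts are not all evidenced."""
    return [
        obligation
        for obligation, gdpr_refs in OVERLAPS.items()
        if not all(ref in evidenced for ref in gdpr_refs)
    ]

# Example: evidencing only the Art. 22 human-review work leaves three items open.
remaining = open_items({"GDPR Art. 22"})
```

The point of the structure is the joint review: evidence produced for a GDPR obligation is checked off against every AI Act obligation that depends on it, rather than being documented twice.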

Regulatory sandboxes and innovation routes

France has signalled support for AI Act sandboxes through the France 2030 plan. CNIL’s “bac à sable” (sandbox) programme has run thematic cohorts since 2021 and is the closest operational analogue. Healthcare AI deployers can also use Health Data Hub authorisations as a parallel innovation route. A formal AI Act regulatory sandbox under Article 57 is expected once the omnibus bill is adopted.

Articles 9–15, conformity assessment, GPAI, penalties

These obligations apply EU-wide and are not France-specific. To keep this page focused on locality, the in-depth treatment of Articles 9–15, the conformity-assessment workflow, GPAI provider duties, and the Article 99 penalty structure is maintained on the main guide.

↑ For all-EU material, see the main guide

Topic · Anchor on the main guide
  • Articles 9–15 explainer · guide-eu-ai-act#articles-9-15
  • Article 12 logging requirements · guide-eu-ai-act#article-12
  • Conformity-assessment workflow · guide-eu-ai-act#conformity
  • GPAI obligations and Code of Practice · guide-eu-ai-act#gpai
  • Article 99 penalty structure · guide-eu-ai-act#penalties
  • Member-state implementation table · guide-eu-ai-act#member-states

References

  1. European Union. Regulation (EU) 2024/1689 (EU AI Act). EUR-Lex 32024R1689.
  2. CNIL. Plan stratégique 2025–2028. cnil.fr.
  3. Direction générale des Entreprises. Les autorités compétentes pour la mise en œuvre du règlement européen sur l’intelligence artificielle. entreprises.gouv.fr.
  4. ANSSI. Tasking on AI Act cybersecurity competences, April 2026.
  5. ai-regulation.com. EU AI Act implementation: France still without designated national competent authorities. ai-regulation.com.
  6. Technology’s Legal Edge. State of the Act: EU AI Act implementation in key Member States, November 2025. technologyslegaledge.com.
  7. European Parliament. AI Act delayed application; ban on nudifier apps, March 2026. europarl.europa.eu.

Build the evidence trail

French operators: Article 12 logs on demand, before the assessor arrives.

The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts and a tamper-evident Article 12 log from your AI’s actual runtime behaviour — runtime controls run inside your infrastructure with zero sensitive-data egress. CNIL inspectors and notified bodies receive verifiable evidence packs in place of written assertions.

Book the Agent Runtime Security Sprint →
See a sample evidence pack →

10 business days. One named workflow. Signed evidence pack on day ten.