
Is your ambient scribe high-risk under the EU AI Act?

Annex III, Article 6, the Medical Device Regulation, the Sharp HealthCare class action, and the Article 12 logging gap. A classification guide written for the teams who already shipped a scribe and now need to know where they stand.

By Joe Braidwood·12 min read·Last reviewed Apr 24, 2026
Nov 26, 2025 · Sharp HealthCare class action filed (San Diego Superior Court)
Aug 2, 2026 · EU AI Act high-risk obligations applicable date (subject to Digital Omnibus delay)
2027–2028 · Proposed delayed obligations under Digital Omnibus on AI
Apr 8, 2026 · HHS OCR Risk Management video — ongoing risk action required
Quick answer · It depends

Pure documentation scribes that only transcribe and summarize clinical conversations are generally not high-risk. Ambient AI systems become high-risk when they influence clinical decisions, qualify as medical devices under MDR, integrate with clinical decision support, or perform automatic coding that affects care pathways.

Sharp HealthCare · Class action update Apr 2026

The class action filed in San Diego Superior Court on Nov 26, 2025 (Saucedo v. Sharp HealthCare) names Sharp Rees-Stealy, SharpCare, and Sharp Community Medical Group as defendants and identifies Abridge as the third-party vendor. Plaintiffs’ counsel estimates ~100,000 patient encounters were recorded since the Abridge rollout. Allegations: California Invasion of Privacy Act (CIPA) all-party-consent violations, Confidentiality of Medical Information Act (CMIA) violations, and fabricated consent records — patient charts say the patient "was advised" and "consented" when, the complaint alleges, no advisement was given. As of April 2026 the case is in the early pleading stage.[1]

Annex III category analysis

The EU AI Act classifies AI systems as high-risk through two pathways: (1) AI systems that are safety components of products requiring third-party conformity assessment under existing EU harmonization legislation (Article 6(1)), or (2) AI systems listed in Annex III covering specific use cases (Article 6(2)).

For ambient AI scribes, two Annex III categories require careful analysis:

Annex III, category 5(a): healthcare safety components

"AI systems intended to be used as safety components in the management and operation of... healthcare."

This category captures AI systems that, while not necessarily medical devices themselves, serve as safety-critical components in healthcare operations. The question is whether an ambient scribe’s documentation function constitutes a "safety component" in healthcare management.

Article 6(1): medical device pathway

AI systems that are safety components of products covered by EU harmonization legislation listed in Annex I—including Medical Device Regulation (EU) 2017/745.

If an ambient scribe qualifies as a medical device or accessory under MDR, it automatically falls under high-risk via Article 6(1), regardless of Annex III analysis.

The medical device question

Under MDR Article 2(1), a medical device is any instrument, apparatus, appliance, software, or other article intended for diagnosis, prevention, monitoring, treatment, or alleviation of disease. The critical question for ambient scribes: does documenting a clinical conversation constitute diagnosis or treatment support?

Pure transcription and summarization—without clinical interpretation—generally falls outside MDR scope. However, the boundary becomes unclear when the scribe:

  • Extracts and highlights clinical findings from conversation
  • Structures notes according to clinical templates that imply diagnostic categories
  • Integrates with EHR systems in ways that trigger clinical alerts or workflows

Key determining factors

Classification hinges on the answer to one central question: Does the AI system’s output materially influence clinical decisions?

Classification decision matrix

| Feature | Classification impact | Rationale |
| --- | --- | --- |
| Pure transcription | Not high-risk | No clinical interpretation; human review required |
| Note summarization | Likely not high-risk | Documentation aid; physician verifies content |
| Suggested ICD-10 codes | Borderline | May influence treatment pathways and billing |
| Diagnosis suggestions | High-risk | Direct clinical decision influence |
| Treatment recommendations | High-risk | Patient safety implications |
| Risk scoring/alerts | High-risk | Safety component in care management |
| CDS integration | High-risk | Contributes to decision support system |
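The matrix’s "one high-risk feature flips the whole system" logic can be sketched as a small classification helper. This is an illustrative sketch only: the feature names, the three-tier levels, and the `classify_scribe` function are assumptions for the example, not a statutory taxonomy.

```python
from enum import Enum

class Risk(Enum):
    NOT_HIGH_RISK = "not high-risk"
    BORDERLINE = "borderline"
    HIGH_RISK = "high-risk"

# Hypothetical mapping of scribe features to the matrix rows above.
FEATURE_RISK = {
    "transcription": Risk.NOT_HIGH_RISK,
    "summarization": Risk.NOT_HIGH_RISK,
    "icd10_suggestions": Risk.BORDERLINE,
    "diagnosis_suggestions": Risk.HIGH_RISK,
    "treatment_recommendations": Risk.HIGH_RISK,
    "risk_scoring": Risk.HIGH_RISK,
    "cds_integration": Risk.HIGH_RISK,
}

def classify_scribe(features):
    """A single high-risk feature makes the whole system high-risk;
    a borderline feature leaves the overall classification borderline."""
    levels = [FEATURE_RISK[f] for f in features]
    if Risk.HIGH_RISK in levels:
        return Risk.HIGH_RISK
    if Risk.BORDERLINE in levels:
        return Risk.BORDERLINE
    return Risk.NOT_HIGH_RISK
```

For example, `classify_scribe(["transcription", "cds_integration"])` returns `Risk.HIGH_RISK` even though transcription alone would not.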

When ambient scribes are high-risk

An ambient AI scribe crosses into high-risk territory in these scenarios:

Scenario 1: clinical decision support features

The scribe doesn’t just document—it analyzes. If the system suggests differential diagnoses based on symptoms mentioned in conversation, flags potential drug interactions, or recommends follow-up tests, it’s functioning as clinical decision support. This triggers high-risk classification under Annex III Category 5(a).

Scenario 2: automatic coding that influences care

When automatic ICD-10 or CPT coding isn’t merely administrative but feeds into clinical pathways—triggering care protocols, alerting about chronic disease management, or affecting treatment authorization—the system becomes a safety component.

Scenario 3: MDR medical device classification

If national competent authorities or notified bodies determine the scribe qualifies as a medical device under MDR (Class I with clinical function, Class IIa, or higher), Article 6(1) applies automatically. Some ambient scribes that extract vital signs, calculate clinical scores, or generate structured clinical data may fall into this category.

Scenario 4: integration with high-risk systems

Even a simple transcription scribe becomes high-risk if its output feeds directly into a high-risk clinical decision support system. The AI system’s classification considers its role within the broader system architecture.

When ambient scribes are not high-risk

Pure documentation tools that meet the following criteria generally remain outside high-risk classification:

Characteristics of non-high-risk ambient scribes

  • Transcription only: Converts speech to text without clinical interpretation
  • Human review mandatory: Physician must review and approve before documentation is finalized
  • No clinical suggestions: System doesn’t propose diagnoses, treatments, or risk assessments
  • Administrative function: Output serves documentation purposes, not clinical workflow triggers
  • Not MDR-classified: Doesn’t meet medical device definition under EU 2017/745

Edge cases and ambiguities

Several ambient scribe features occupy a regulatory gray zone:

Problem list updates

If the scribe automatically updates the patient’s problem list based on conversation content, is this clinical decision support? Regulators may view this differently depending on whether the update requires physician approval or happens automatically.

Medication reconciliation

Scribes that identify medications mentioned in conversation and cross-reference with the medication list straddle the line. If the system simply flags discrepancies for human review, it’s likely administrative. If it triggers automatic alerts or modifies records, classification becomes uncertain.

Quality measure extraction

Extracting data for quality reporting (HEDIS, MIPS) from clinical conversations could be viewed as purely administrative—or as influencing care by highlighting gaps in quality measure compliance.

Regulatory guidance · April 2026

The European AI Office is expected to issue additional sectoral guidance on healthcare classification through 2026. The Digital Omnibus on AI is in trilogue, with proposed conditional delays of high-risk obligations into 2027 or 2028 tied to availability of harmonized standards. Until the picture clears, document your classification rationale thoroughly and treat voluntary compliance with high-risk requirements as the safe-harbor position for borderline scribes.

Requirements if classified high-risk

High-risk AI systems under the EU AI Act must comply with Articles 9–15, which establish comprehensive obligations (Article 12, logging, is treated in its own section below):

Article 9: risk management system

Establish, implement, document, and maintain a risk management system throughout the AI system’s lifecycle. Identify and analyze known and foreseeable risks, estimate and evaluate risks, adopt appropriate risk management measures.

Article 10: data and data governance

Training, validation, and testing datasets must be relevant, representative, and free of errors. Data governance practices must address data collection, preparation, and documentation.

Article 11: technical documentation

Comprehensive documentation demonstrating compliance, including system description, design specifications, development process, monitoring, and post-market activities.

Article 13: transparency

Provide clear instructions for use, including intended purpose, level of accuracy, known limitations, and human oversight requirements.

Article 14: human oversight

Design systems to enable effective oversight by natural persons. Include functionality allowing operators to understand capabilities, interpret outputs, and override or reverse the system.

Article 15: accuracy, robustness, cybersecurity

Achieve appropriate levels of accuracy and robustness. Implement cybersecurity measures proportionate to risks.

Article 12: logging implications

Article 12 is particularly relevant for ambient AI scribes and represents a core area of GLACIS expertise. High-risk systems must be designed to automatically record events ("logs") throughout their operation.

What must be logged

For ambient scribes classified as high-risk, Article 12 logging requirements would include:

  • Session identification: Each recording session with unique identifiers
  • Timestamps: Start and end times for recording, processing, and output generation
  • User identification: Which clinician initiated and reviewed the session
  • Input characteristics: Audio duration, quality metrics, interruptions
  • Model information: Version, configuration, and any runtime parameters
  • Processing events: Transcription, summarization, any clinical extraction steps
  • Output details: Generated note length, sections, any suggested codes or flags
  • Human review actions: Edits, approvals, rejections, time spent reviewing
  • Error states: Any failures, retries, or degraded functionality
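The fields above can be captured as a single structured log record per event. A minimal sketch in Python, assuming JSON-line output; the `ScribeLogEvent` class, field names, and granularity are illustrative design choices, not a schema mandated by Article 12.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ScribeLogEvent:
    """Hypothetical Article 12-style event record for an ambient scribe."""
    event_type: str       # e.g. "transcription", "summarization", "review_edit"
    clinician_id: str     # which user initiated or reviewed the session
    model_version: str    # model version and configuration in use
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    details: dict = field(default_factory=dict)  # audio metrics, output stats, errors

    def to_json(self) -> str:
        # Deterministic key order makes the record easier to hash or sign later.
        return json.dumps(asdict(self), sort_keys=True)

event = ScribeLogEvent(
    event_type="transcription",
    clinician_id="dr-1234",
    model_version="scribe-v2.1",
    details={"audio_seconds": 412, "note_sections": 5},
)
```

Emitting one such record per processing step (recording start/stop, model run, human edit, error) yields an append-only trail that maps directly onto the bullet list above.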

GLACIS and Article 12 compliance

Article 12 logging creates the evidence foundation for proving AI system compliance. GLACIS specializes in transforming these operational logs into cryptographically-attested compliance evidence—demonstrating not just that controls exist, but that they actually work in production.

Learn about Continuous Attestation

Retention requirements

Logs must be kept for a period appropriate to the intended purpose of the high-risk AI system and applicable legal obligations. For healthcare AI, this typically means aligning with medical record retention requirements — often 7+ years in EU member states.

Sharp HealthCare lawsuit status (April 2026)

The Sharp HealthCare class action remains the bellwether case for ambient-scribe consent and CIPA exposure as of April 2026. Filed Nov 26, 2025 in San Diego Superior Court (Saucedo v. Sharp HealthCare), it names Sharp Rees-Stealy, SharpCare, and Sharp Community Medical Group, and identifies Abridge in court filings as the third-party vendor.

The complaint alleges three categories of harm: California Invasion of Privacy Act (CIPA, Cal. Penal Code §§ 631, 632) all-party-consent violations, Confidentiality of Medical Information Act (CMIA) violations, and — most damaging from a documentation perspective — fabricated consent records. Visit notes are alleged to read that the patient "was advised" and "consented" to recording when no such advisement was given. Plaintiffs’ counsel estimates ~100,000 patient encounters were captured during the rollout window. The case is in the early pleading stage.

Practical implication for any healthcare team running an ambient scribe: the discoverable record must distinguish, by cryptographic evidence rather than chart language, between actual consent and AI-generated chart text. The Texas AG’s 2024 settlement with Pieces Technologies — over inflated hallucination-rate claims — is the policy precedent state attorneys general will reach for if accuracy claims fail to hold up in litigation.[2]
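One minimal way to make a consent record tamper-evident, rather than just chart text, is to sign it at write time. A sketch assuming a per-deployment HMAC key: the `sign_consent_record` and `verify_consent_record` helpers and all field names are hypothetical, and a production system would use a managed key service and likely asymmetric signatures rather than a hard-coded secret.

```python
import hashlib
import hmac
import json

# Illustrative only: in production this key would live in a KMS/HSM.
SECRET_KEY = b"replace-with-managed-key"

def sign_consent_record(record: dict) -> dict:
    """Attach an HMAC-SHA256 signature computed over the record body."""
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_consent_record(record: dict) -> bool:
    """Recompute the signature over everything except the signature field."""
    claimed = record.get("signature", "")
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

A record signed this way fails verification if any field (e.g. the consent flag or timestamp) is later edited, which is the distinction between actual consent evidence and AI-generated chart language.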

US regulatory comparison

Understanding how EU AI Act classification differs from US regulation helps vendors operating in both markets:

EU vs. US regulatory comparison

| Aspect | EU AI Act | US (FDA/HIPAA/State) |
| --- | --- | --- |
| Pure documentation scribes | Generally not high-risk | Not FDA-regulated as a medical device |
| Clinical decision features | High-risk under Annex III | May require FDA clearance as CDS |
| Recording consent | GDPR consent + AI Act transparency | HIPAA + state two-party consent laws |
| Logging requirements | Article 12 mandates automatic logging | No specific AI logging mandate |
| Maximum penalties | €15M or 3% of global revenue | Varies by violation type |

Key difference: The EU AI Act’s Annex III can capture AI systems that the FDA wouldn’t regulate. A scribe that extracts clinical data and influences care pathways may be unregulated in the US but high-risk in the EU. For more on US-specific privacy requirements, see our Ambient AI Scribe Privacy Compliance Guide, covering the Sharp lawsuit, CIPA liability, and consent requirements.

Implementation checklist

Use this checklist to assess your ambient AI scribe’s classification and compliance status:

Classification assessment

  • Document all system features and intended purposes
  • Assess each feature against Annex III categories
  • Evaluate whether system qualifies as MDR medical device
  • Map output usage in clinical workflows
  • Document classification rationale with legal review

If classified high-risk

  • Establish risk management system (Article 9)
  • Implement data governance procedures (Article 10)
  • Create comprehensive technical documentation (Article 11)
  • Build automatic logging infrastructure (Article 12)
  • Ensure transparency and instructions for use (Article 13)
  • Design human oversight mechanisms (Article 14)
  • Validate accuracy and implement cybersecurity (Article 15)
  • Conduct conformity assessment (self or third-party)
  • Register in EU database for high-risk AI systems

Evidence requirements for regulators

  • Classification analysis documentation
  • Risk assessment and mitigation records
  • Training data documentation and validation results
  • Logging infrastructure audit trail
  • Human oversight design specifications
  • Accuracy testing and performance monitoring data

Frequently asked questions

Is an ambient AI scribe high-risk under the EU AI Act?

It depends on the system’s function. Pure documentation scribes that only transcribe and summarize conversations are generally NOT high-risk. However, ambient AI systems become high-risk if they influence clinical decisions by suggesting diagnoses, treatments, or risk scores; qualify as medical devices under MDR; or integrate with clinical decision support systems. The key test is whether the AI’s output materially influences patient care decisions.

What Annex III category applies to ambient AI scribes?

Ambient AI scribes may fall under Annex III Category 5(a)—AI intended to be used as safety components in the management and operation of healthcare—or under Article 6(1) as medical devices requiring conformity assessment. The relevant category depends on whether the scribe functions as documentation only or influences clinical decisions.

Does Article 12 logging apply to ambient AI scribes?

If classified as high-risk, Article 12 requires automatic logging of events throughout the system’s lifetime. For ambient scribes, this means logging each recording session with timestamps, user identification, input audio characteristics, model version and configuration, generated outputs, any error states, and human review actions. Logs must be retained for the period appropriate to the intended purpose.

Are Abridge, Nuance DAX, and Suki high-risk under the EU AI Act?

These systems require individual assessment. Core transcription features are likely not high-risk. However, features like automatic coding suggestions, clinical decision support integrations, or risk scoring would trigger high-risk classification. Vendors must evaluate each feature independently against Annex III criteria.

What happens if I misclassify my ambient AI scribe?

Misclassification creates significant liability. If you classify as non-high-risk but regulators determine the system is high-risk, you face penalties up to €15 million or 3% of global annual turnover. You would also need to immediately halt deployment until conformity requirements are met, including risk management systems, technical documentation, and potentially third-party conformity assessment.

How does EU AI Act classification differ from FDA regulation?

The FDA generally does not regulate pure documentation scribes as medical devices because they don’t provide clinical decision support. The EU AI Act takes a broader approach—even if not an MDR medical device, an AI system can still be high-risk under Annex III if it’s a safety component in healthcare. This means some ambient scribes may be unregulated in the US but high-risk in the EU.

When must ambient AI scribe vendors comply with high-risk requirements?

The current EU AI Act applicability date for high-risk Annex III obligations is August 2, 2026; for AI systems regulated under MDR, August 2, 2027. The Digital Omnibus on AI in trilogue as of April 2026 proposes conditional delays into 2027 or 2028 tied to harmonized-standards availability. Even so, the safest plan is to begin classification and conformity work now — technical documentation, risk management, quality management, and Article 12 logging cannot be assembled in the months between final text adoption and any new applicability date.

Key takeaways

  • Classification depends on function — pure transcription is not high-risk; clinical decision influence triggers high-risk
  • Assess each feature independently — a scribe with one high-risk feature becomes a high-risk system
  • Article 12 logging is essential — high-risk systems must log all operational events automatically
  • EU scope is broader than FDA — systems unregulated in the US may be high-risk in the EU
  • Document classification rationale — regulators are expected to scrutinize your analysis
  • August 2026 deadline — begin compliance work now; infrastructure build requires 6-12 months

For more on ambient AI scribe compliance, explore our related resources:

Article 12 logging · Evidence pack · Runtime controls

You shipped a scribe. Make the evidence trail it actually needs.

Whether your scribe is high-risk or not, the Sharp HealthCare class action made one thing clear: the question on the table is no longer “did your AI vendor have a policy” but “can you prove it executed”. The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts that answer it — runtime controls run inside your infrastructure with zero sensitive-data egress.

Book the Agent Runtime Security Sprint · See a sample evidence pack

Related guides

Privacy
Ambient AI scribe privacy
Sharp HealthCare lawsuit, CIPA liability, consent requirements.
Regulation
EU AI Act guide
Risk categories, timelines, conformity assessment.
Healthcare
HIPAA-compliant AI
BAAs, PHI, Security Rule, vendor evaluation.