MAR 18, 2026 · REGULATORY · 14 min read

EU AI Act High Risk Deadline – Who it concerns and recent developments

40% of enterprise AI systems can't be clearly classified under the EU AI Act, and the compliance deadline is five months away. Here is where the Act stands and what has changed recently.


EU AI Act: The Deadline Is Set. The Guidance Isn't.

The high-risk obligations deadline, the still-missing classification guidelines, and the latest developments around the EU AI Act.

1. What Is the EU AI Act, and What Makes AI "High-Risk"?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation. It entered into force on August 1, 2024 and applies a risk-based framework: the higher the potential harm an AI system can cause, the stricter the rules [1][2].

The Act classifies AI into four tiers:

- Unacceptable risk: practices banned outright, such as social scoring and manipulative techniques
- High risk: systems permitted only under strict obligations
- Limited risk: systems subject to transparency duties, such as chatbots and deepfake generators
- Minimal risk: everything else, which faces no new obligations

Most of the regulatory weight, and most of the compliance burden, falls on high-risk AI systems.

What counts as high-risk

An AI system is classified as high-risk if it falls under Annex III of the Act [5], which covers:

- Biometric identification, categorisation, and emotion recognition
- Management of critical infrastructure
- Education and vocational training
- Employment, recruitment, and worker management
- Access to essential private and public services, including credit scoring and insurance pricing
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes

A separate high-risk category under Annex I covers AI systems embedded as safety components in regulated products, such as medical devices, machinery, vehicles, and aviation equipment. This category has its own compliance timeline [1].

The classification problem

In theory, the categories above should make it straightforward to determine whether an AI system is high-risk. In practice, it is not.

A study by appliedAI examining 106 real enterprise AI systems found that only 18% could be clearly classified as high-risk and 42% as low-risk, while 40% could not be clearly classified at all. The unclear cases were concentrated in critical infrastructure, employment, law enforcement, and product safety [28]. This means the actual proportion of high-risk systems could range from 18% to 58% depending on interpretation. The European Commission originally estimated 5–15% of AI systems would be high-risk [28]; the reality appears to be significantly more complex.

The classification uncertainty comes from multiple sources. Annex III lists use cases in broad categories, but the boundaries are vague. Article 6(3) added exemptions for systems performing "narrow procedural tasks" or "preparatory" functions, but deciding whether an AI system is merely preparatory versus actually influencing a decision is often a judgement call with no clear benchmark.

This is exactly what the Article 6(5) guidelines, the ones the Commission missed its deadline to publish, were designed to resolve. Those guidelines were supposed to include "a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk" [6]. Without them, organisations in grey-area use cases are forced to either classify conservatively (treating borderline systems as high-risk at significant compliance cost) or accept the legal risk of under-classifying.

Academic research confirms this is a real operational problem. A 2026 case study interviewing companies attempting to comply found recurring uncertainties, with interviewees voicing frustration about unclear details that are "supposed to be clarified in the upcoming harmonised standards" [29].

The appliedAI study and risk classification database are available at appliedai.de (AI Act: Risk Classification of AI Systems from a Practical Perspective).

What high-risk obligations actually require

The key date is August 2, 2026, when the full obligations for Annex III high-risk systems become applicable. From that day, non-compliant systems are technically in violation. There is no grace period [1].

Providers of high-risk systems must implement: risk management across the full AI lifecycle, data governance, technical documentation, automatic event logging, human oversight mechanisms, conformity assessments, CE marking, and registration in the EU database.
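To make the logging obligation concrete, here is a minimal sketch of automatic event logging for a high-risk system, assuming a hypothetical CV-screening tool; the function, file name, and field names are illustrative choices of ours, not a schema prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured event log for a high-risk AI system (Article 12).
# Field names are illustrative; the Act requires traceability, not a schema.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_events.jsonl"))

def log_ai_event(system_id: str, model_version: str, input_ref: str,
                 output_summary: str, human_reviewed: bool) -> None:
    """Append one timestamped record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,           # which registered system acted
        "model_version": model_version,   # supports post-market monitoring
        "input_ref": input_ref,           # pointer to the input, not the data itself
        "output_summary": output_summary, # the decision or score produced
        "human_reviewed": human_reviewed, # evidence of human oversight
    }
    logger.info(json.dumps(record))

# Example: one screening decision
log_ai_event("cv-screener-01", "2.3.1", "application/48213",
             "shortlisted", human_reviewed=True)
```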

Deployers must ensure: proper system use, human oversight, fundamental rights impact assessments, and incident reporting [1].

2. Who Must Comply

The AI Act applies extraterritorially, the same model as GDPR. It does not matter where an organisation is headquartered. Three categories of organisations are covered [1][18]:

- Providers that place AI systems on the EU market or put them into service there, wherever they are established
- Deployers of AI systems that are established or located within the EU
- Providers and deployers established outside the EU, where the output produced by the AI system is used in the EU

A technology company in the United States using AI for loan approvals that serves European customers is in scope. A recruitment firm in Asia using AI-powered CV screening for EU-based roles is in scope. A startup anywhere in the world deploying a chatbot that interacts with EU residents is in scope.

Many organisations that do not consider themselves "AI companies" will discover they have obligations, particularly those using third-party AI tools for hiring, credit assessment, customer service, or content generation. Using someone else's AI system does not exempt an organisation; deployers carry their own distinct set of legal responsibilities under the Act [1].

Fines

The penalties are structured in three tiers [19]:

- Prohibited practices: €35 million or 7% of global turnover (whichever is higher)
- High-risk violations: €15 million or 3% of global turnover (whichever is higher)
- Information failures: €7.5 million or 1% of global turnover (whichever is higher)

For SMEs and startups, the Act provides that the lower of the fixed amount or percentage applies instead, offering meaningful protection for smaller organisations [19].
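As a worked illustration of how the tiers combine with the SME carve-out, the sketch below computes the applicable maximum fine; the figures come from the tiers above, while the function name and structure are ours.

```python
# Maximum-fine logic under Article 99: (fixed amount, share of global turnover).
TIERS = {
    "prohibited": (35_000_000, 0.07),
    "high_risk": (15_000_000, 0.03),
    "information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine: the higher of the two caps, or the lower for SMEs."""
    fixed, pct = TIERS[tier]
    turnover_cap = pct * global_turnover_eur
    return min(fixed, turnover_cap) if is_sme else max(fixed, turnover_cap)

# A large provider with €2bn turnover: 7% (€140m) exceeds €35m, so €140m applies.
print(max_fine("prohibited", 2_000_000_000))            # 140000000.0
# An SME with €10m turnover: the lower of €35m and €700k applies.
print(max_fine("prohibited", 10_000_000, is_sme=True))  # 700000.0
```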

Beyond fines, authorities can order organisations to stop using non-compliant AI systems entirely.

3. Where Things Stand: Missed Deadlines, the Guidance Gap, and the Digital Omnibus

As of March 2026, the AI Act's implementation is in a state of tension. The compliance deadline is approaching, but the official infrastructure organisations need to comply is not ready.

What has already taken effect

February 2, 2025: Prohibited AI practices became applicable [3][4]. Penalties became enforceable from August 2, 2025 [1]. No public enforcement actions have been announced as of March 2026, though investigations are reportedly underway.

August 2, 2025: Obligations for general-purpose AI model providers took effect. The penalty framework under Article 99 also became active on this date, and member states were required to have designated their national enforcement authorities [1].

January 1, 2026: Finland became the first member state with operational national enforcement powers [22]. Spain's dedicated AI Supervisory Agency (AESIA) has been operational since mid-2024 [23]. Italy adopted the EU's first national AI law (Law 132/2025) in October 2025, introducing criminal penalties of 1–5 years imprisonment for harmful deepfake dissemination [24].

What has been missed

The Commission missed its own deadline. Article 6(5) required the Commission to publish guidelines on high-risk classification, including practical examples, by February 2, 2026. This deadline was missed. The Commission indicated it was still integrating months of stakeholder feedback and planned to release a draft for further consultation [6][7].

Harmonised standards are delayed. CEN and CENELEC, the European standardisation bodies tasked with developing the technical benchmarks for demonstrating compliance, missed their late-2025 delivery target. Standards are now expected by end of 2026 at the earliest [7][8][25]. Under Article 40, organisations that follow harmonised standards receive a "presumption of conformity," effectively a legal safe harbour. Without published standards, that safe harbour does not exist.

The Commission has not used its backup mechanism. Article 41 allows the Commission to establish "common specifications" as interim guidance when standards are delayed. As of March 2026, the Commission has not publicly indicated any intention to do so [25].

Additional guidelines are still pending. The Commission has announced it will publish guidance on high-risk classification, transparency requirements, incident reporting, provider and deployer obligations, fundamental rights impact assessments, and post-market monitoring templates throughout 2026 [9]. None of these are finalised.

The Digital Omnibus

On November 19, 2025, the Commission proposed the Digital Omnibus package, a set of amendments that would push back the high-risk compliance deadlines [10][11].

The core mechanism: high-risk obligations would not take effect until the Commission confirms that adequate compliance support tools (harmonised standards, common specifications, guidelines) are available. Once confirmed, obligations apply after 6 months for Annex III systems and 12 months for Annex I systems.

Regardless of when standards arrive, backstop deadlines would cap the maximum delay [12][13]:

- Annex III (stand-alone high-risk systems): December 2, 2027
- Annex I (AI embedded in regulated products): August 2, 2028
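Expressed as date arithmetic, the proposed mechanism looks roughly like the sketch below, which takes a hypothetical confirmation date and returns the effective obligation date; the 30-day month approximation and the function itself are illustrative only.

```python
from datetime import date, timedelta

# Proposed Digital Omnibus timing: obligations apply N months after the
# Commission confirms support tools exist, capped by a fixed backstop date.
BACKSTOPS = {
    "annex_iii": (date(2027, 12, 2), 6),   # stand-alone high-risk systems
    "annex_i": (date(2028, 8, 2), 12),     # AI embedded in regulated products
}

def effective_deadline(annex: str, confirmation: date) -> date:
    backstop, grace_months = BACKSTOPS[annex]
    # Approximate "months" as 30-day blocks for illustration only.
    triggered = confirmation + timedelta(days=30 * grace_months)
    return min(triggered, backstop)

# If standards were confirmed in January 2027, Annex III obligations would
# apply around July 2027, earlier than the December 2027 backstop.
print(effective_deadline("annex_iii", date(2027, 1, 15)))
# A late confirmation cannot push past the backstop:
print(effective_deadline("annex_iii", date(2027, 10, 1)))  # 2027-12-02
```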

Where the Digital Omnibus stands now (March 2026)

The process is moving faster than most observers anticipated. Two major developments occurred in the past week:

March 11: MEPs reached a preliminary political agreement on the AI Omnibus amendments during a shadow meeting. The deal will be put to a formal committee vote (LIBE and IMCO committees) on March 18, 2026, the day this article is published. Rapporteur Michael McNamara confirmed that "some technical negotiations are still ongoing" ahead of the vote [30].

March 13: The Council agreed its negotiating mandate (known as a "general approach"). The Cyprus Presidency stated: "We worked on this proposal with urgency, reaching a swift agreement to facilitate the timely application of the AI act" [31]. Following this approval, the presidency indicated it will start negotiations with the European Parliament.

Both the Parliament and Council positions align on the key points: fixed deadlines of December 2, 2027 for Annex III systems and August 2, 2028 for Annex I systems. Both also include a new prohibition on AI systems generating non-consensual sexual and intimate deepfakes [30][31].

With both institutions now holding agreed positions, trilogue negotiations can begin as soon as the Parliament committee vote passes and plenary endorses it. This is significantly ahead of the "spring or summer 2026" timeline that was widely expected just weeks ago.

What this means in practice: the postponement of high-risk deadlines is now very likely, but it is not yet law. The trilogue must produce a compromise text, both institutions must formally adopt it, and it must be published in the Official Journal before it takes legal effect. Whether all of this can be completed before August 2, 2026 remains uncertain, though the political alignment makes it considerably more plausible than it was a month ago [26][27].

The Commission itself has acknowledged the problem directly: harmonised standards are "decisive for legal certainty" and their delayed availability "puts at jeopardy the successful entry into application of the high-risk rules on 2 August 2026" [8].

4. What Organisations Should Prepare, With or Without Official Guidance

Regardless of whether the August 2026 deadline holds or shifts, the underlying obligations are not going away. The requirements are written directly into the regulation (Articles 9–15) and do not depend on guidelines or standards being published first. Organisations that begin preparation now will be in a stronger position under either timeline scenario.

1 – Conduct an AI inventory and risk classification. Identify every AI system developed or deployed within the organisation. Determine which fall under Annex III high-risk categories or Annex I product safety rules. Given that 40% of enterprise AI systems fall into an unclear classification zone [28], organisations should document their reasoning for each classification decision, including borderline cases. For systems that cannot be clearly classified, the prudent approach is to either treat them as high-risk or seek guidance from a national supervisory authority. [5][6][28]
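As one way to document that reasoning, here is a minimal sketch of an inventory record, assuming an internal register; the field names, enum values, and example entry are our own, not anything defined by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    HIGH = "high-risk (Annex III or Annex I)"
    NOT_HIGH = "not high-risk"
    UNCLEAR = "unclear, treat as high-risk pending guidance"

@dataclass
class AISystemRecord:
    """One inventory entry with the documented reasoning recommended above."""
    name: str
    owner: str                      # team accountable for the system
    use_case: str
    classification: RiskClass
    reasoning: str                  # why this classification was chosen
    annex_iii_category: str = ""    # e.g. "employment", if applicable
    borderline: bool = False        # flag grey-zone cases for re-review

inventory = [
    AISystemRecord(
        name="cv-screener-01",
        owner="HR Tech",
        use_case="Ranks incoming job applications",
        classification=RiskClass.HIGH,
        reasoning="Annex III point 4: AI used for recruitment and selection.",
        annex_iii_category="employment",
    ),
]
```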

2 – Withdraw or redesign any prohibited AI systems. The ban on prohibited practices has been applicable since February 2, 2025. Organisations still operating manipulative AI, social scoring, workplace or school emotion recognition (without medical or safety justification), untargeted facial scraping, or individual criminal risk prediction based solely on profiling must discontinue these immediately. [3][4]

3 – Assemble documentation for high-risk systems. The Act requires [1]:

- A risk management system maintained across the full lifecycle (Article 9)
- Data governance covering training, validation, and testing data (Article 10)
- Technical documentation in line with Annex IV (Article 11)
- Automatic event logging (Article 12)
- Instructions for use that enable deployers to comply (Article 13)
- Human oversight measures designed into the system (Article 14)
- Evidence of accuracy, robustness, and cybersecurity (Article 15)

4 – Determine the conformity assessment route. For most stand-alone high-risk systems, organisations conduct an internal self-assessment under Annex VI. Systems performing biometric identification require a third-party assessment from a notified body under Annex VII. These bodies are already experiencing booking delays as of March 2026; organisations requiring third-party assessments should engage assessors without delay. [1][17]

5 – Establish governance infrastructure. Designate a person or team responsible for AI compliance. Develop company-wide AI policies. Implement staff training programmes. Create incident reporting procedures. This is a permanent operational requirement, not a one-time exercise. [1]

6 – Prepare for transparency obligations. Chatbots must inform users they are interacting with AI. Deepfakes and synthetic content must be labelled. Emotion recognition and biometric categorisation systems require user notification. These obligations take effect August 2, 2026 regardless of the Digital Omnibus. [1]
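For the chatbot disclosure duty in particular, here is a minimal sketch, assuming a hypothetical chat handler; the notice wording is our own, since the Act prescribes the disclosure itself, not the exact text.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Ask to be transferred if you would like to speak to a person."
)

def start_chat_session(user_id: str) -> list[dict]:
    """Open a session with the AI disclosure as the first visible message."""
    # Transparency obligation: users must be informed they are interacting
    # with an AI system, unless that is obvious from the context.
    return [{"role": "system_notice", "user": user_id, "text": AI_DISCLOSURE}]

transcript = start_chat_session("visitor-1842")
print(transcript[0]["text"])
```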

The best practical resource available now

In the absence of EU-level guidance, Spain's AI Supervisory Agency (AESIA) published 16 detailed, practical compliance guides in December 2025. Developed through a real regulatory sandbox with actual companies, the guides are structured as a progressive roadmap [14][15][16].

Legal commentators note the guides may influence how other national regulators approach enforcement across Europe.

Available in English at aesia.digital.gob.es/en/present/resources/practical-guides-for-ai-act-compliance.

5. How Enforcement Will Work

The AI Act follows the same decentralised enforcement model as GDPR. For organisations that have been through GDPR compliance, the mechanics will feel familiar.

GDPR enforcement: a useful preview

Every EU member state has a Data Protection Authority (DPA). When GDPR took effect in 2018, governments did not send inspectors to every business on day one. Enforcement was primarily reactive: someone filed a complaint, or a company self-reported a data breach, and the DPA investigated. Over time, DPAs added proactive sector audits.

Since 2018, over 2,800 fines have been issued [20]. The largest penalties arrived years later: Meta's €1.2 billion fine came in 2023, five years after GDPR launched. In practice, most investigations start because someone files a complaint [21].

How AI Act enforcement will mirror this

National market surveillance authorities are the primary enforcers, just as DPAs are for GDPR. Each member state designates its own. Finland, Spain, and Italy already have operational enforcement bodies [22][23][24]. Other member states are expected to follow throughout 2026.

Investigations will be triggered by complaints from individuals (Article 85 gives any person the right to lodge a complaint [1]), incident reports from organisations themselves, whistleblower disclosures, media reporting, or proactive sector audits.

When an investigation is opened, authorities can request documentation, inspect AI systems, and assess compliance. Outcomes range from case closure (if compliant), through warnings, remediation orders, orders to cease using the system, and administrative fines.

The EU AI Office adds a centralised enforcement layer that GDPR does not have. It directly oversees providers of general-purpose AI models and can impose fines of up to €15 million or 3% of global turnover [1].

Three lessons from GDPR that will apply here

1 – Complaints drive enforcement. A single rejected job applicant, denied loan customer, unhappy employee, or activist group can trigger an investigation. An organisation does not need to be large to attract scrutiny.

2 – Documented effort dramatically reduces risk. Under GDPR, organisations that could demonstrate policies, training, and governance structures (even imperfect ones) received significantly lower penalties than those that had done nothing. The AI Act applies the same principle: Article 99(7) requires authorities to consider cooperation, good faith, and prior compliance effort when determining fines [19].

3 – The "stop using it" order is the real threat for smaller organisations. Being ordered to shut down an AI system that runs a core business function (hiring, credit scoring, customer service) can be more damaging than any warning. Under GDPR, regulators have ordered companies to stop processing data entirely. AI Act authorities have the same power.

The enforcement gap problem

There is an open question about whether authorities will aggressively enforce high-risk obligations if harmonised standards and Commission guidelines are still missing on August 2, 2026.

Legally, enforcement is possible: the obligations are in the regulation regardless of whether standards exist. But practically, fining an organisation for non-compliance when the official compliance benchmarks have not been published creates legal vulnerability for authorities. Any penalties could be challenged in court on the basis that the organisation could not reasonably know what compliance looked like.

The realistic expectation: early enforcement will focus on clear-cut violations (prohibited practices, complete absence of documentation, total non-compliance) rather than grey-area questions about what "adequate" risk management looks like without a published standard. The 40% classification grey zone identified by appliedAI [28] makes aggressive enforcement on borderline cases particularly difficult for authorities to justify.

Organisations that can show they made genuine, documented efforts to comply, including documented reasoning for classification decisions, will be in a far stronger position than those that did nothing.

The Bottom Line

The EU AI Act is law. The high-risk obligations formally apply from August 2, 2026. As of this week, both the European Parliament and Council have agreed positions that would postpone these deadlines to December 2027 and August 2028, and trilogue negotiations are imminent. The postponement is now very likely, but not yet legally certain. The official guidance and harmonised standards organisations need to comply fully are still not available. Enforcement infrastructure is already active in several member states.

This is an uncomfortable position, but GDPR's history offers a clear lesson: the organisations that started preparing early, even imperfectly, even without complete guidance, fared far better than those that waited.

Organisations do not need to have everything figured out. They need to have started.

Sources

This analysis has been independently cross-validated against the text of Regulation (EU) 2024/1689, official European Commission publications, the European Parliament's legislative train, national government announcements, and legal analyses from international law firms.
