{"id":3459,"date":"2026-03-16T11:29:17","date_gmt":"2026-03-16T11:29:17","guid":{"rendered":"https:\/\/braynex.ai\/?p=3459"},"modified":"2026-03-16T12:47:56","modified_gmt":"2026-03-16T12:47:56","slug":"eu-ai-act-high-risk-deadline-who-it-concerns-and-recent-developments","status":"publish","type":"post","link":"https:\/\/braynex.ai\/de\/eu-ai-act-high-risk-deadline-who-it-concerns-and-recent-developments\/","title":{"rendered":"EU AI Act High Risk Deadline &#8211; Who it concerns and recent developments"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">EU AI Act: The Deadline Is Set. The Guidance Isn&#8217;t.<\/h2>\n\n\n\n<p>The EU AI Act and The High Risk obligations deadline: Missing guidelines regarding high-risk classification<br>and recent developments following the EU AI Act.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. What Is the EU AI Act, and What Makes AI &#8220;High-Risk&#8221;?<\/h2>\n\n\n\n<p>The EU Artificial Intelligence Act (Regulation (EU) 2024\/1689) is the world&#8217;s first comprehensive AI<br>regulation. It entered into force on August 1, 2024 and applies a risk-based framework: the higher the<br>potential harm an AI system can cause, the stricter the rules [1][2].<\/p>\n\n\n\n<p>The Act classifies AI into four tiers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prohibited (banned outright)<\/li>\n\n\n\n<li>High-risk (heavily regulated)<\/li>\n\n\n\n<li>Limited risk (transparency obligations)<\/li>\n\n\n\n<li>Minimal risk (no specific obligations)<\/li>\n<\/ul>\n\n\n\n<p>Most of the regulatory weight, and most of the compliance burden, falls on high-risk AI systems.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What counts as high-risk<\/h3>\n\n\n\n<p>An AI system is classified as high-risk if it falls under Annex III of the Act [5], which covers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Employment<\/strong>: recruitment screening, CV parsing, promotion decisions, worker monitoring<\/li>\n\n\n\n<li><strong>Education<\/strong>: admissions decisions, exam scoring, monitoring student behaviour during tests<\/li>\n\n\n\n<li><strong>Essential services<\/strong>: creditworthiness assessment, insurance eligibility, emergency services<br>dispatch<\/li>\n\n\n\n<li><strong>Biometrics<\/strong>: facial recognition, emotion recognition, biometric categorisation<\/li>\n\n\n\n<li><strong>Critical infrastructure<\/strong>: AI managing energy, water, transport, or digital systems<\/li>\n\n\n\n<li><strong>Law enforcement<\/strong>: risk assessment of individuals, crime analytics, evidence evaluation<\/li>\n\n\n\n<li><strong>Migration and border control<\/strong>: automated visa processing, asylum application assessment<\/li>\n\n\n\n<li><strong>Justice and democratic processes<\/strong>: AI used to assist court rulings<\/li>\n<\/ul>\n\n\n\n<p>A separate high-risk category under Annex I covers AI systems embedded as safety components in<br>regulated products, such as medical devices, machinery, vehicles, and aviation equipment. This category<br>has its own compliance timeline [1].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The classification problem<\/h3>\n\n\n\n<p>In theory, the categories above should make it straightforward to determine whether an AI system is<br>high-risk. In practice, it is not.<\/p>\n\n\n\n<p>A study by appliedAI examining 106 real enterprise AI systems found that only 18% could be clearly<br>classified as high-risk and 42% as low-risk, while 40% could not be clearly classified at all. 
The unclear cases were concentrated in critical infrastructure, employment, law enforcement, and product safety [28]. This means the actual proportion of high-risk systems could range from 18% (if none of the unclear cases turn out to be high-risk) to 58% (if all of them do), depending on interpretation. The European Commission originally estimated 5-15% of AI systems would be high-risk [28]; the reality appears to be significantly more complex.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"565\" src=\"https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-1024x565.png\" alt=\"Results of the appliedAI risk classification study\" class=\"wp-image-3465\" style=\"width:500px;height:auto\" srcset=\"https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-1024x565.png 1024w, https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-300x166.png 300w, https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-768x424.png 768w, https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-1536x848.png 1536w, https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-2048x1130.png 2048w, https:\/\/braynex.ai\/wp-content\/uploads\/2026\/03\/appliedAI-study-results-18x10.png 18w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The classification uncertainty comes from multiple sources. Annex III lists use cases in broad categories, but the boundaries are vague. Article 6(3) added exemptions for systems performing &#8220;narrow procedural tasks&#8221; or &#8220;preparatory&#8221; functions, but deciding whether an AI system is merely preparatory versus actually influencing a decision is often a judgement call with no clear benchmark.<\/p>\n\n\n\n<p>This is exactly what the Article 6(5) guidelines, the ones the Commission missed its deadline to publish, were designed to resolve. Those guidelines were supposed to include &#8220;a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk&#8221; [6]. Without them, organisations with grey-area use cases are forced either to classify conservatively (treating borderline systems as high-risk at significant compliance cost) or to accept the legal risk of under-classifying.<\/p>\n\n\n\n<p>Academic research confirms this is a real operational problem. A 2026 case study interviewing companies attempting to comply found recurring uncertainties, with interviewees voicing frustration about unclear details that are &#8220;supposed to be clarified in the upcoming harmonised standards&#8221; [29].<\/p>\n\n\n\n<p>The appliedAI study and risk classification database are available at: <a href=\"https:\/\/www.appliedai.de\/en\/ai-resources\/white-papers\/ai-act-risk-classification-of-ai-systems-from-a-practical-perspective\/\">AI Act: Risk Classification of AI Systems from a Practical Perspective<\/a><\/p>
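\n\n\n\n<p>Until the Article 6(5) guidelines arrive, the decision flow organisations have to apply can be sketched in a few lines of Python. This is only an illustration of the two-step logic described above (Annex III match, then the Article 6(3) exemption judgement), not a legal tool; the category names and the function are our own shorthand.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Simplified sketch of the Annex III \/ Article 6(3) decision flow.\n# Category names are shorthand; real classification needs legal review.\nANNEX_III = {'employment', 'education', 'essential_services', 'biometrics',\n             'critical_infrastructure', 'law_enforcement', 'migration', 'justice'}\n\ndef classify(area, merely_preparatory=None):\n    if area not in ANNEX_III:\n        return 'not high-risk under Annex III'\n    if merely_preparatory is None:\n        # The judgement call the missing Article 6(5) guidelines were meant to settle:\n        return 'unclear: document reasoning, consider treating as high-risk'\n    return 'exempt under Article 6(3)' if merely_preparatory else 'high-risk'<\/code><\/pre>\n\n\n\n<p>The third branch is the 40% grey zone from the appliedAI study: whenever the &#8220;merely preparatory&#8221; question cannot be answered confidently, the honest output is &#8220;unclear&#8221;, not a guess.<\/p>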
\n\n\n\n<h3 class=\"wp-block-heading\">What high-risk obligations actually require<\/h3>\n\n\n\n<p>The key date is August 2, 2026, when the full obligations for Annex III high-risk systems become applicable. From that day, non-compliant systems are technically in violation. There is no grace period [1].<\/p>\n\n\n\n<p>Providers of high-risk systems must implement: risk management across the full AI lifecycle, data governance, technical documentation, automatic event logging, human oversight mechanisms, conformity assessments, CE marking, and registration in the EU database.<\/p>\n\n\n\n<p>Deployers must ensure: proper system use, human oversight, fundamental rights impact assessments, and incident reporting [1].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. Who Must Comply<\/h2>\n\n\n\n<p>The AI Act applies extraterritorially, following the same model as GDPR. It does not matter where an organisation is headquartered. Three categories of organisations are covered [1][18]:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Providers<\/strong>: organisations that develop AI systems (or have them developed) and place them on the EU market under their own name or trademark.<\/li>\n\n\n\n<li><strong>Deployers<\/strong>: organisations that use AI systems in a professional capacity within the EU.<\/li>\n\n\n\n<li><strong>Any organisation whose AI outputs affect people in the EU<\/strong>, even if the AI runs on servers outside Europe.<\/li>\n<\/ul>\n\n\n\n<p>A technology company in the United States using AI for loan approvals for European customers is in scope. A recruitment firm in Asia using AI-powered CV screening for EU-based roles is in scope. A startup anywhere in the world deploying a chatbot that interacts with EU residents is in scope.<\/p>\n\n\n\n<p>Many organisations that do not consider themselves &#8220;AI companies&#8221; will discover they have obligations, particularly those using third-party AI tools for hiring, credit assessment, customer service, or content generation. Using someone else&#8217;s AI system does not exempt an organisation; deployers carry their own distinct set of legal responsibilities under the Act [1].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Fines<\/h3>\n\n\n\n<p>The penalties are structured in three tiers [19]:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>Violation<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Maximum fine<\/strong><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Prohibited practices<\/td><td class=\"has-text-align-center\" data-align=\"center\">\u20ac35 million or 7% of global turnover (whichever is higher)<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">High-risk violations<\/td><td class=\"has-text-align-center\" data-align=\"center\">\u20ac15 million or 3% of global turnover (whichever is higher)<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Information failures<\/td><td class=\"has-text-align-center\" data-align=\"center\">\u20ac7.5 million or 1% of global turnover (whichever is higher)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>For SMEs and startups, the Act provides that the lower of the fixed amount or the percentage applies instead, offering meaningful protection for smaller organisations [19].<\/p>\n\n\n\n<p>Beyond fines, authorities can order organisations to stop using non-compliant AI systems entirely.<\/p>
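\n\n\n\n<p>To make the tier logic concrete, here is a minimal Python sketch of how the Article 99 ceilings combine the fixed amount with the turnover percentage. Only the figures come from the Act [19]; the function name and structure are illustrative.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Fine ceilings from Article 99, Regulation (EU) 2024\/1689 (amounts in EUR).\nTIERS = {\n    'prohibited_practice':   (35_000_000, 0.07),  # Art. 99(3)\n    'high_risk_violation':   (15_000_000, 0.03),  # Art. 99(4)\n    'incorrect_information': (7_500_000,  0.01),  # Art. 99(5)\n}\n\ndef max_fine_eur(tier, global_turnover_eur, is_sme=False):\n    # Standard rule: whichever is higher; for SMEs the lower applies (Art. 99(6)).\n    fixed, pct = TIERS[tier]\n    bound = min if is_sme else max\n    return bound(fixed, pct * global_turnover_eur)<\/code><\/pre>\n\n\n\n<p>For a provider with \u20ac200 million global turnover, a high-risk violation caps at max(\u20ac15m, 3% of \u20ac200m) = \u20ac15 million; for an SME with \u20ac50 million turnover, the SME rule brings the ceiling down to min(\u20ac15m, \u20ac1.5m) = \u20ac1.5 million.<\/p>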
\n\n\n\n<h2 class=\"wp-block-heading\">3. Where Things Stand: Missed Deadlines, the Guidance Gap, and the Digital Omnibus<\/h2>\n\n\n\n<p>As of March 2026, the AI Act&#8217;s implementation is in a state of tension. The compliance deadline is approaching, but the official infrastructure organisations need in order to comply is not ready.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What has already taken effect<\/h3>\n\n\n\n<p><strong>February 2, 2025<\/strong>: Prohibited AI practices became applicable [3][4]. Penalties became enforceable from August 2, 2025 [1]. No public enforcement actions have been announced as of March 2026, though investigations are reportedly underway.<\/p>\n\n\n\n<p><strong>August 2, 2025<\/strong>: Obligations for general-purpose AI model providers took effect. The penalty framework under Article 99 also became active on this date, and member states were required to have designated their national enforcement authorities [1].<\/p>\n\n\n\n<p><strong>January 1, 2026<\/strong>: Finland became the first member state with operational national enforcement powers [22]. Spain&#8217;s dedicated AI Supervisory Agency (AESIA) has been operational since mid-2024 [23]. Italy adopted the EU&#8217;s first national AI law (Law 132\/2025) in October 2025, introducing criminal penalties of 1-5 years imprisonment for harmful deepfake dissemination [24].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What has been missed<\/h3>\n\n\n\n<p><strong>The Commission missed its own deadline<\/strong>. Article 6(5) required the Commission to publish guidelines on high-risk classification, including practical examples, by February 2, 2026. This deadline was missed. The Commission indicated it was still integrating months of stakeholder feedback and planned to release a draft for further consultation [6][7].<\/p>\n\n\n\n<p><strong>Harmonised standards are delayed<\/strong>. CEN and CENELEC, the European standardisation bodies tasked with developing the technical benchmarks for demonstrating compliance, missed their late-2025 delivery target. Standards are now expected by the end of 2026 at the earliest [7][8][25]. Under Article 40, organisations that follow harmonised standards receive a &#8220;presumption of conformity,&#8221; effectively a legal safe harbour. Without published standards, that safe harbour does not exist.<\/p>\n\n\n\n<p><strong>The Commission has not used its backup mechanism<\/strong>. Article 41 allows the Commission to establish &#8220;common specifications&#8221; as interim guidance when standards are delayed. As of March 2026, the Commission has not publicly indicated any intention to do so [25].<\/p>\n\n\n\n<p><strong>Additional guidelines are still pending<\/strong>. The Commission has announced it will publish guidance on high-risk classification, transparency requirements, incident reporting, provider and deployer obligations, fundamental rights impact assessments, and post-market monitoring templates throughout 2026 [9]. None of these are finalised.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Digital Omnibus<\/h3>\n\n\n\n<p>On November 19, 2025, the Commission proposed the Digital Omnibus package, a set of amendments that would push back the high-risk compliance deadlines [10][11].<\/p>\n\n\n\n<p>The core mechanism: high-risk obligations would not take effect until the Commission confirms that adequate compliance support tools (harmonised standards, common specifications, guidelines) are available.
Once confirmed, obligations would apply after 6 months for Annex III systems and 12 months for Annex I systems.<\/p>\n\n\n\n<p>Regardless of when standards arrive, backstop deadlines would cap the maximum delay [12][13]:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>System Type<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Backstop deadline<\/strong><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Annex III (stand-alone high-risk systems)<\/td><td class=\"has-text-align-center\" data-align=\"center\">December 2, 2027<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Annex I (AI embedded in regulated products)<\/td><td class=\"has-text-align-center\" data-align=\"center\">August 2, 2028<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Where the Digital Omnibus stands now (March 2026)<\/h3>\n\n\n\n<p>The process is moving faster than most observers anticipated. Two major developments occurred in the past week:<\/p>\n\n\n\n<p><strong>March 11<\/strong>: MEPs reached a preliminary political agreement on the AI Omnibus amendments during a shadow meeting. The deal will be put to a formal committee vote (LIBE and IMCO committees) on March 18, 2026, two days from the date of this article. Rapporteur Michael McNamara confirmed that &#8220;some technical negotiations are still ongoing&#8221; ahead of the vote [30].<\/p>\n\n\n\n<p><strong>March 13<\/strong>: The Council agreed its negotiating mandate (known as a &#8220;general approach&#8221;). The Cyprus Presidency stated: &#8220;We worked on this proposal with urgency, reaching a swift agreement to facilitate the timely application of the AI act&#8221; [31]. Following this approval, the presidency indicated it will start negotiations with the European Parliament.<\/p>\n\n\n\n<p>Both the Parliament and Council positions align on the key points: fixed deadlines of December 2, 2027 for Annex III systems and August 2, 2028 for Annex I systems. Both also include a new prohibition on AI systems generating non-consensual sexual and intimate deepfakes [30][31].<\/p>\n\n\n\n<p>With both institutions now holding agreed positions, trilogue negotiations can begin as soon as the Parliament committee vote passes and plenary endorses it. This is significantly ahead of the &#8220;spring or summer 2026&#8221; timeline that was widely expected just weeks ago.<\/p>\n\n\n\n<p>What this means in practice: the postponement of the high-risk deadlines is now very likely, but it is not yet law. The trilogue must produce a compromise text, both institutions must formally adopt it, and it must be published in the Official Journal before it takes legal effect. Whether all of this can be completed before August 2, 2026 remains uncertain, though the political alignment makes it considerably more plausible than it was a month ago [26][27].<\/p>\n\n\n\n<p>The Commission itself has acknowledged the problem directly: harmonised standards are &#8220;decisive for legal certainty&#8221; and their delayed availability &#8220;puts at jeopardy the successful entry into application of the high-risk rules on 2 August 2026&#8221; [8].<\/p>
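\n\n\n\n<p>Putting the proposal&#8217;s pieces together, the interaction between the 6- and 12-month triggers and the backstop dates can be expressed compactly. The sketch below is our reading of the proposed mechanism; the proposal is not yet law, and all names and the confirmation date are illustrative.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Sketch of the proposed Digital Omnibus timing mechanism (not yet law).\nfrom datetime import date\n\nBACKSTOP = {'annex_iii': date(2027, 12, 2), 'annex_i': date(2028, 8, 2)}\nGRACE_MONTHS = {'annex_iii': 6, 'annex_i': 12}\n\ndef add_months(d, months):\n    # Naive month arithmetic; adequate for an illustration.\n    years, month0 = divmod(d.month - 1 + months, 12)\n    return d.replace(year=d.year + years, month=month0 + 1)\n\ndef effective_deadline(annex, confirmation_date):\n    # Obligations start 6\/12 months after the Commission confirms that\n    # standards and guidance exist, but never later than the backstop.\n    return min(add_months(confirmation_date, GRACE_MONTHS[annex]), BACKSTOP[annex])\n\n# If the Commission confirmed the tooling on 1 October 2026:\n# effective_deadline('annex_iii', date(2026, 10, 1)) -&gt; date(2027, 4, 1)<\/code><\/pre>\n\n\n\n<p>Under this reading, the backstop only bites if the Commission&#8217;s confirmation slips past June 2, 2027 for Annex III systems or August 2, 2027 for Annex I systems; before that, the 6- and 12-month grace periods govern.<\/p>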
\n\n\n\n<h2 class=\"wp-block-heading\">4. What Organisations Should Prepare, With or Without Official Guidance<\/h2>\n\n\n\n<p>Regardless of whether the August 2026 deadline holds or shifts, the underlying obligations are not going away. The requirements are written directly into the regulation (Articles 9-15) and do not depend on guidelines or standards being published first. Organisations that begin preparation now will be in a stronger position under either timeline scenario.<\/p>\n\n\n\n<p><strong>1 &#8211;<\/strong> <strong>Conduct an AI inventory and risk classification<\/strong>. Identify every AI system developed or deployed within the organisation. Determine which fall under Annex III high-risk categories or Annex I product safety rules. Given that 40% of enterprise AI systems fall into an unclear classification zone [28], organisations should document their reasoning for each classification decision, including borderline cases; a sketch of such a record follows these steps. For systems that cannot be clearly classified, the prudent approach is to either treat them as high-risk or seek guidance from a national supervisory authority. [5][6][28]<\/p>\n\n\n\n<p><strong>2 &#8211; Withdraw or redesign any prohibited AI systems<\/strong>. The ban on prohibited practices has been applicable since February 2, 2025. Organisations still operating manipulative AI, social scoring, workplace or school emotion recognition (without medical or safety justification), untargeted facial scraping, or individual criminal risk prediction based solely on profiling must discontinue these immediately. [3][4]<\/p>\n\n\n\n<p><strong>3 &#8211; Assemble documentation for high-risk systems<\/strong>. The Act requires [1]:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A risk management process covering the full AI lifecycle (Article 9)<\/li>\n\n\n\n<li>Data governance procedures for training and testing data (Article 10)<\/li>\n\n\n\n<li>Technical documentation per Annex IV (Article 11)<\/li>\n\n\n\n<li>Automatic event logging (Article 12)<\/li>\n\n\n\n<li>Transparency and provision of information to deployers (Article 13)<\/li>\n\n\n\n<li>Human oversight mechanisms enabling meaningful human intervention (Article 14)<\/li>\n\n\n\n<li>Appropriate levels of accuracy, robustness, and cybersecurity (Article 15)<\/li>\n<\/ul>\n\n\n\n<p><strong>4 &#8211; Determine the conformity assessment route<\/strong>. For most stand-alone high-risk systems, organisations conduct an internal self-assessment under Annex VI. Systems performing biometric identification require a third-party assessment from a notified body under Annex VII. These bodies are already experiencing booking delays as of March 2026; organisations requiring third-party assessments should engage assessors without delay. [1][17]<\/p>\n\n\n\n<p><strong>5 &#8211; Establish governance infrastructure<\/strong>. Designate a person or team responsible for AI compliance. Develop company-wide AI policies. Implement staff training programmes. Create incident reporting procedures. This is a permanent operational requirement, not a one-time exercise. [1]<\/p>\n\n\n\n<p><strong>6 &#8211; Prepare for transparency obligations<\/strong>. Chatbots must inform users they are interacting with AI. Deepfakes and synthetic content must be labelled. Emotion recognition and biometric categorisation systems require user notification. These obligations take effect August 2, 2026 regardless of the Digital Omnibus. [1]<\/p>
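\n\n\n\n<p>As mentioned in step 1, here is a hypothetical shape for a per-system classification record. The field names are our own suggestion, not mandated by the Act; the point is that every system, especially a borderline one, carries its classification reasoning and evidence with it.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Hypothetical per-system inventory record for documenting classification decisions.\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass AISystemRecord:\n    name: str\n    role: str                  # 'provider' or 'deployer'\n    annex_iii_category: str    # e.g. 'employment', or 'none'\n    classification: str        # 'high-risk', 'not high-risk', 'unclear'\n    reasoning: str             # incl. any Article 6(3) exemption argument\n    borderline: bool = False\n    evidence: list = field(default_factory=list)  # links to assessments, FRIAs\n\ninventory = [AISystemRecord(\n    name='CV screening tool', role='deployer',\n    annex_iii_category='employment', classification='high-risk',\n    reasoning='Screens applicants; Annex III employment use case; no Article 6(3) exemption applies.',\n    borderline=False, evidence=['fria-2026-02.pdf'])]<\/code><\/pre>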
\n\n\n\n<h3 class=\"wp-block-heading\">The best practical resource available now<\/h3>\n\n\n\n<p>In the absence of EU-level guidance, Spain&#8217;s AI Supervisory Agency (AESIA) published 16 detailed, practical compliance guides in December 2025. Developed through a real regulatory sandbox with actual companies, the guides are structured as a progressive roadmap [14][15][16]:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Guides 1-2 (Introductory)<\/strong>: Overview of the regulation, practical examples, decision trees<\/li>\n\n\n\n<li><strong>Guides 3-15 (Technical)<\/strong>: One guide per obligation, from conformity assessment to cybersecurity<\/li>\n\n\n\n<li><strong>Guide 16 + 13 Excel checklists<\/strong>: Self-assessment methodology to identify compliance gaps<\/li>\n<\/ul>\n\n\n\n<p>Legal commentators note the guides may influence how other national regulators approach enforcement across Europe.<\/p>\n\n\n\n<p>Available in English at: <a href=\"http:\/\/aesia.digital.gob.es\/en\/present\/resources\/practical-guides-for-ai-act-compliance\">aesia.digital.gob.es\/en\/present\/resources\/practical-guides-for-ai-act-compliance<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">5. How Enforcement Will Work<\/h2>\n\n\n\n<p>The AI Act follows the same decentralised enforcement model as GDPR. For organisations that have been through GDPR compliance, the mechanics will feel familiar.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">GDPR enforcement: a useful preview<\/h3>\n\n\n\n<p>Every EU member state has a Data Protection Authority (DPA). When GDPR took effect in 2018, governments did not send inspectors to every business on day one. Enforcement was primarily reactive: someone filed a complaint, or a company self-reported a data breach, and the DPA investigated. Over time, DPAs added proactive sector audits.<\/p>\n\n\n\n<p>Since 2018, over 2,800 fines have been issued [20]. The largest penalties arrived years later: Meta&#8217;s \u20ac1.2 billion fine came in 2023, five years after GDPR launched. In practice, most investigations start because someone files a complaint [21].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How AI Act enforcement will mirror this<\/h3>\n\n\n\n<p><strong>National market surveillance authorities<\/strong> are the primary enforcers, just as DPAs are for GDPR. Each member state designates its own. Finland, Spain, and Italy already have operational enforcement bodies [22][23][24]. Other member states are expected to follow throughout 2026.<\/p>\n\n\n\n<p><strong>Investigations will be triggered by<\/strong> complaints from individuals (Article 85 gives any person the right to lodge a complaint [1]), incident reports from organisations themselves, whistleblower disclosures, media reporting, or proactive sector audits.<\/p>\n\n\n\n<p><strong>When an investigation is opened<\/strong>, authorities can request documentation, inspect AI systems, and assess compliance. Outcomes range from case closure (if compliant), through warnings, remediation orders, orders to cease using the system, and administrative fines.<\/p>\n\n\n\n<p><strong>The EU AI Office<\/strong> adds a centralised enforcement layer that GDPR does not have.
It directly oversees providers of general-purpose AI models and can impose fines of up to \u20ac15 million or 3% of global turnover [1].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Three lessons from GDPR that will apply here<\/h3>\n\n\n\n<p><strong>1 &#8211; Complaints drive enforcement<\/strong>. A single rejected job applicant, denied loan customer, unhappy employee, or activist group can trigger an investigation. An organisation does not need to be large to attract scrutiny.<\/p>\n\n\n\n<p><strong>2 &#8211; Documented effort dramatically reduces risk<\/strong>. Under GDPR, organisations that could demonstrate policies, training, and governance structures (even imperfect ones) received significantly lower penalties than those that had done nothing. The AI Act applies the same principle: Article 99(7) requires authorities to consider cooperation, good faith, and prior compliance effort when determining fines [19].<\/p>\n\n\n\n<p><strong>3 &#8211; The &#8220;stop using it&#8221; order is the real threat for smaller organisations<\/strong>. Being ordered to shut down an AI system that runs a core business function (hiring, credit scoring, customer service) can be more damaging than any fine. Under GDPR, regulators have ordered companies to stop processing data entirely. AI Act authorities have the same power.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The enforcement gap problem<\/h3>\n\n\n\n<p>There is an open question about whether authorities will aggressively enforce high-risk obligations if harmonised standards and Commission guidelines are still missing on August 2, 2026.<\/p>\n\n\n\n<p>Legally, enforcement is possible: the obligations are in the regulation regardless of whether standards exist. But practically, fining an organisation for non-compliance when the official compliance benchmarks have not been published creates legal vulnerability for authorities. Any penalties could be challenged in court on the basis that the organisation could not reasonably know what compliance looked like.<\/p>\n\n\n\n<p>The realistic expectation: early enforcement will focus on clear-cut violations (prohibited practices, complete absence of documentation, total non-compliance) rather than grey-area questions about what &#8220;adequate&#8221; risk management looks like without a published standard. The 40% classification grey zone identified by appliedAI [28] makes aggressive enforcement on borderline cases particularly difficult for authorities to justify.<\/p>\n\n\n\n<p>Organisations that can show they made genuine, documented efforts to comply, including documented reasoning for classification decisions, will be in a far stronger position than those that did nothing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Bottom Line<\/h2>\n\n\n\n<p>The EU AI Act is law. The high-risk obligations formally apply from August 2, 2026. As of this week, both the European Parliament and the Council have agreed positions that would postpone these deadlines to December 2027 and August 2028, and trilogue negotiations are imminent. The postponement is now very likely, but not yet legally certain. The official guidance and harmonised standards that organisations need in order to comply fully are still not available.
Enforcement infrastructure is already active in several member states.<\/p>\n\n\n\n<p>This is an uncomfortable position, but GDPR&#8217;s history offers a clear lesson: the organisations that started preparing early, even imperfectly, even without complete guidance, fared far better than those that waited.<\/p>\n\n\n\n<p>Organisations do not need to have everything figured out. They need to have started.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Sources<\/h2>\n\n\n\n<p>[1] Regulation (EU) 2024\/1689 of the European Parliament and of the Council (Artificial Intelligence Act), Official Journal of the European Union, July 12, 2024. Articles referenced: 2, 5, 6, 9-17, 26, 40, 41, 43, 50, 85, 88-93, 99, 111, 113, Annexes I, III-VII.<br><br>[2] European Commission, &#8220;AI Act,&#8221; digital-strategy.ec.europa.eu\/en\/policies\/regulatory-framework-ai<br><br>[3] Article 5, Regulation (EU) 2024\/1689 (Prohibited AI Practices).<br><br>[4] European Commission, &#8220;Guidelines on Prohibited Artificial Intelligence Practices,&#8221; published February 4, 2025, digital-strategy.ec.europa.eu<br><br>[5] Annex III, Regulation (EU) 2024\/1689 (High-Risk AI Systems).<br><br>[6] Article 6(5), Regulation (EU) 2024\/1689 (Classification Rules for High-Risk AI Systems).<br><br>[7] IAPP, &#8220;European Commission misses deadline for AI Act guidance on high-risk systems,&#8221; published February 3, 2026, iapp.org<br><br>[8] European Commission, &#8220;Navigating the AI Act&#8221; FAQ, digital-strategy.ec.europa.eu\/en\/faqs\/navigating-ai-act<br><br>[9] European Commission, &#8220;Supporting the implementation of the AI Act with clear guidelines,&#8221; digital-strategy.ec.europa.eu<br><br>[10] European Parliament Legislative Train Schedule, &#8220;Digital Omnibus on AI,&#8221; europarl.europa.eu\/legislative-train<br><br>[11] Jones Day, &#8220;EU Digital Omnibus: How EU Data, Cyber, and AI Rules Will Shift,&#8221; December 2025, jonesday.com<br><br>[12] IAPP, &#8220;EU Digital Omnibus: Analysis of key changes,&#8221; iapp.org<br><br>[13] Paul Weiss, &#8220;An Uncertain Journey on the Digital Omnibus,&#8221; November 2025, paulweiss.com<br><br>[14] AESIA, &#8220;Practical guides for AI Act compliance,&#8221; published December 17, 2025, aesia.digital.gob.es\/en\/present\/resources\/practical-guides-for-ai-act-compliance<br><br>[15] IAPP, &#8220;AESIA&#8217;s AI Guidelines: Spain steps into the AI spotlight,&#8221; iapp.org<br><br>[16] Covington (Inside Privacy), &#8220;Spain Issues Guidance Under the EU AI Act,&#8221; December 18, 2025, insideprivacy.com<br><br>[17] McKenna Consultants, &#8220;Prepare for EU AI Act high-risk obligations in 2026,&#8221; February 2026, mckennaconsultants.com<br><br>[18] Morgan Lewis, &#8220;The EU AI Act is here, with extraterritorial reach,&#8221; July 2024, morganlewis.com<br><br>[19] Article 99, Regulation (EU) 2024\/1689 (Penalties), paragraphs 3-7.<br><br>[20] Sprinto, &#8220;GDPR Fines in 2026,&#8221; December 2025, sprinto.com<br><br>[21] Simple Analytics, &#8220;GDPR and fines: all there is to know,&#8221; simpleanalytics.com<br><br>[22] Finnish Government, &#8220;National supervision of EU Artificial Intelligence Act to begin,&#8221; December 22, 2025, valtioneuvosto.fi<br><br>[23] AESIA official website, aesia.digital.gob.es<br><br>[24] Norton Rose Fulbright, &#8220;Italy enacts Law No.
132\/2025 on Artificial Intelligence,&#8221; 2025, nortonrosefulbright.com<br><br>[25] artificialintelligenceact.eu, &#8220;Standard Setting Overview,&#8221; artificialintelligenceact.eu\/standard-setting-overview<br><br>[26] Bird &amp; Bird, &#8220;Introduction to the European Commission&#8217;s Digital Omnibus Package,&#8221; December 2025, twobirds.com<br><br>[27] eyreACT, &#8220;The EU Digital Omnibus Explained: What It Means for EU AI Act Enforcement Dates in 2026,&#8221; February 2026, eyreact.com<br><br>[28] appliedAI, &#8220;AI Act: Risk Classification of AI Systems from a Practical Perspective,&#8221; March 2023, appliedai.de. Study of 106 enterprise AI systems. Also referenced in: EU AI Act Compliance Checker at artificialintelligenceact.eu.<br><br>[29] ScienceDirect, &#8220;AI Act high-risk AI compliance challenge and industry impact: A multiple case study,&#8221; published February 2026, sciencedirect.com\/science\/article\/pii\/S095058492600056X<br><br>[30] IAPP, &#8220;MEPs reach preliminary political agreement on AI omnibus,&#8221; published March 12, 2026, iapp.org. Also referenced: European Parliament committees press release, March 11, 2026.<\/p>\n\n\n\n<p>[31] Council of the European Union, &#8220;Council agrees position to streamline rules on Artificial Intelligence,&#8221; press release, March 13, 2026, consilium.europa.eu. Also confirmed by: MLex, Insight EU Monitoring, The Legal Wire.<\/p>\n\n\n\n<p><em>This analysis has been independently cross-validated against the text of Regulation (EU) 2024\/1689, official European Commission publications, the European Parliament&#8217;s legislative train, national government announcements, and legal analyses from international law firms.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>40% of enterprise AI systems can&#8217;t be clearly classified under the EU AI Act.
The compliance deadline is in 5 months.<br \/>\nRecent developments in the EU AI Act.<\/p>","protected":false},"author":1,"featured_media":3474,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-3459","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/posts\/3459","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/comments?post=3459"}],"version-history":[{"count":9,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/posts\/3459\/revisions"}],"predecessor-version":[{"id":3479,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/posts\/3459\/revisions\/3479"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/media\/3474"}],"wp:attachment":[{"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/media?parent=3459"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/categories?post=3459"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/braynex.ai\/de\/wp-json\/wp\/v2\/tags?post=3459"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}