EU AI Act update: the SME fine cap, the Omnibus wobble, and Ireland's 15 regulators
A specialist update to our March piece. What actually ships on 2 August 2026, the Article 99(6) carve-out for SMEs that most coverage misses, and Ireland's real status.

Our sister site covered the EU AI Act in March, aimed at a business audience still deciding whether any of this applied to them. Five weeks later the picture has moved enough that a specialist update is warranted - and one number in the original piece is now misleading for the readers we talk to most.
This post is for founders, CTOs, and product owners in Ireland, the UK, and the EU who are building AI into real products. It covers four things: what specifically becomes enforceable on 2 August 2026, the SME fine carve-out that almost every general-audience summary misses, the Digital Omnibus that may or may not change the rules six days after this post ships, and where Ireland actually stands on enforcement.
The Article 99(6) carve-out that changes the fine number
The March piece used the headline figure - up to 7% of worldwide turnover or €35m - and stopped there. That is the fine tier for prohibited practices under Article 5, and it is technically accurate. It is also the wrong number for most of our audience.
Article 99 defines three penalty tiers:
- 7% of worldwide turnover or €35m - whichever is higher - for breaches of Article 5 (prohibited practices)
- 3% or €15m for most operator breaches, including failure to register a high-risk system or inadequate risk management
- 1% or €7.5m for supplying misleading information to authorities
The SME carve-out that almost everyone quotes wrong
Article 99(6) then flips the rule for small and medium-sized enterprises and start-ups. For SMEs, the fine is capped at the lower of the two figures, not the higher. A three-person start-up that trips Article 5 is not exposed to €35m; it is exposed to whichever is lower - and 7% of a €500k turnover is €35,000.
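To make the arithmetic concrete, here is a minimal Python sketch of the two cap rules. The function name and structure are ours, purely illustrative; the tier figures are the ones listed above.

```python
def worst_case_fine(turnover_eur: float, pct: float, fixed_cap_eur: float, is_sme: bool) -> float:
    """Article 99 exposure: the higher of the two figures by default,
    the lower of the two for SMEs under Article 99(6)."""
    pct_fine = turnover_eur * pct
    return min(pct_fine, fixed_cap_eur) if is_sme else max(pct_fine, fixed_cap_eur)

# The Article 5 tier (7% / EUR 35m) for a EUR 500k-turnover start-up:
print(worst_case_fine(500_000, 0.07, 35_000_000, is_sme=True))   # 35000.0   -> EUR 35k
print(worst_case_fine(500_000, 0.07, 35_000_000, is_sme=False))  # 35000000.0 -> EUR 35m
```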
This matters for three reasons. First, the "you will be bankrupted" framing that dominates vendor marketing is simply false for almost every Appify client. Second, the relative cost of compliance work versus the plausible downside is very different when the downside is €35,000 instead of €35m. Third, "we're too small for this to matter" is still the wrong conclusion - the smaller ceiling does not relieve you of Article 5 prohibitions, Article 50 transparency duties, or Annex III high-risk obligations if you have them. It just changes the shape of the worst case.
Compliance cost is the real number to plan around
The Commission's own compliance-cost estimate tells the other half of the story. The 2021 Impact Assessment put the figure at roughly €6-7k per high-risk system for an SME. Industry bodies now cluster their post-implementation estimates between €200k and €500k of initial investment for a quality-management system serious enough to pass third-party conformity assessment. The real SME risk is not the fine. It is the cost of getting to a defensible posture at all.
What actually becomes enforceable on 2 August 2026
On 2 August 2024 the Act entered into force. On 2 February 2025 the Article 5 prohibitions became applicable. On 2 August 2025, GPAI obligations under Article 53 became applicable for new general-purpose models.
On 2 August 2026, the following become enforceable:
- Annex III high-risk system obligations - Articles 8-15, covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness/cybersecurity
- Article 50 transparency rules - if your system interacts with users as a chatbot, generates or manipulates content that could be mistaken for authentic, or applies emotion-recognition or biometric categorisation, users must be told
- Full national-enforcement powers for Member State authorities
- Article 57(1) regulatory sandbox requirement - each Member State must have at least one operational AI sandbox
If you ship a product that screens CVs or otherwise drives hiring decisions, scores credit, triages medical complaints, moderates user content at scale, or operates as a safety component in a regulated product - read Annex III closely. "High risk" is a legal classification, not a marketing one.
Interpretive guidance is late
The Commission missed its own 2 February 2026 deadline under Article 96 for high-risk classification guidelines. Adoption is now expected in March or April 2026, which means the official interpretive guidance arrives uncomfortably close to the obligations themselves. The final Article 50 Code of Practice on AI-content labelling is expected in June 2026.
The Digital Omnibus: what just happened, and what changes next week
On 19 November 2025 the Commission proposed the Digital Omnibus on AI. It does not repeal the high-risk regime; it conditions the start date on whether the supporting harmonised standards and conformity-assessment tools are actually ready, and it fixes a backstop: 2 December 2027 for stand-alone Annex III systems and 2 August 2028 for embedded high-risk systems. Article 5 prohibitions, Article 50 transparency, and GPAI obligations are explicitly not deferred.
The Council adopted its general approach on 13 March 2026. Parliament followed on 26 March with a 569-45 vote (with 23 abstentions). Trilogue began that same day. The next political trilogue is scheduled for 28 April 2026 - six days after this post publishes.
Two implications that matter right now
The first is that the 2 August 2026 deadline is still the law. Until the Omnibus is adopted and published in the Official Journal of the European Union, the current dates remain in force. Planning on the basis of the proposed delay is a category mistake: a project team that picks up the work in September and discovers the trilogue fell apart is in a worse position than a team that worked to the original timeline and got a welcome breather.
The second is that civil society is organised against it. More than fifty European digital-rights organisations - including EDRi, BEUC, CDT Europe, and ECNL - have publicly opposed the weakening of transparency, particularly the proposed carve-outs to the high-risk registration obligations. Whether the Omnibus passes in its current form is genuinely unclear. We do not yet know what the 28 April trilogue produces.
We will revisit this post once the trilogue outcome is published. If you are building to this Act and want us to flag material changes the moment they land, talk to us on /services/ai-consulting - we maintain this tracking internally for our own clients.
Ireland: 15 sectoral regulators, no National AI Office yet
"Ireland has not named a regulator" is the shorthand in circulation. It is wrong.
On 16 September 2025, Minister for Enterprise, Tourism and Employment Peter Burke confirmed the nomination of 15 national competent authorities under Article 70. The model is sectoral rather than monolithic:
- the Central Bank for financial-services AI
- the DPC for privacy-adjacent AI
- the CCPC for consumer-protection cases
- ComReg for communications
- Coimisiún na Meán for content platforms
- the HPRA for medical devices
- the HSE for health services
- the HSA for workplace AI
- the WRC for employment
- the CRU for utilities
- the CRR for railways
- the NTA for transport
- the Marine Survey Office for maritime
- two ministerial-level designees for catch-all cases
On the same day the National AI Implementation Committee convened for the first time. By the count that matters under the Regulation - Member States that designated their Article 70 authorities before the deadline - Ireland is among the first to finish.
The AI Office is still in legislation
What Ireland does not yet have is the standing National AI Office, Oifig Intleachta Shaorga na hÉireann. That body is created by the General Scheme of the Regulation of Artificial Intelligence Bill 2026, published by the Department of Enterprise earlier this year. Its role is to act as the Single Point of Contact under the Act, coordinate the 15 market-surveillance authorities, maintain national registers, and promote AI literacy.
The statutory deadline is 1 August 2026 - one calendar day before the EU AI Act's own Annex III deadline. That is a tight legislative schedule. The practical consequence is this: the sectoral regulators already exist and already have enforcement powers they can exercise from 2 August 2026 onwards. The central AI Office may or may not be stood up in time. If you are an Irish deployer of a high-risk AI system, the question to answer now is which sectoral regulator has jurisdiction over your use case - not whether Ireland has a regulator at all. For most commercial deployments, it is the DPC, the Central Bank, the CCPC, or ComReg, and the answer depends on what your AI is doing and to whom.
UK and Northern Ireland
For UK-based companies and Northern Ireland specifically: the UK has no standalone AI statute as of April 2026 and remains sector-led under its existing regulators. But Article 2's extraterritorial scope means any UK company with EU customers is captured by the Act regardless - and Irish companies, as EU providers and deployers, are in scope directly. Northern Ireland under the Windsor Framework is likely to align with significant parts of the Act, giving NI SMEs a single-market-facing compliance footprint their GB peers do not share. That is a commercial asset, not just a regulatory one.
What to actually do before 2 August
If you are building AI into a product, four actions are load-bearing before the deadline, regardless of how the Omnibus trilogue resolves.
- Classify your systems honestly. Not every AI system is high risk, and vendor marketing tends to under-classify. If your system makes decisions about people - hiring, credit, eligibility, triage - read Annex III. If you are a deployer of a third-party system that meets those criteria, you inherit duties; you do not get to point at the provider.
- Put a governance skeleton in place. For high-risk systems this means a documented risk-management process, data governance that can survive an audit, technical documentation aligned with Annex IV, human-oversight design that is real rather than performative, and logging sufficient to trace a decision back to inputs (see the logging sketch after this list). Most of our clients already have pieces of this; almost none have it assembled into something a market-surveillance authority would accept.
- Handle Article 50 transparency now. The obligations for chatbots, AI-generated content, and emotion recognition do not depend on Annex III classification. They apply independently. A "this is an AI assistant" disclosure in your chat UI is one afternoon of work (a minimal sketch follows below), and it is defensible without a quality-management system. Do it.
- Budget the compliance work realistically. We think the €200-500k industry estimate is roughly right for a deployer of a high-risk system building internal capability from scratch. It is lower if you use existing GRC infrastructure. The €6-7k Commission figure is not realistic for a serious deployment. Plan to the higher number or plan not to be high-risk.
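On the logging point in the second item: "trace a decision back to inputs" means, in practice, an append-only record per decision. A minimal Python sketch - the field names are our own convention, not anything prescribed by the Act or Annex IV:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One audit-log entry per AI-assisted decision.

    The goal: a market-surveillance authority (or you, six months later)
    can reconstruct what the system saw, what it decided, and who oversaw it.
    """
    system_id: str               # which AI system made the call
    model_version: str
    timestamp: str
    input_digest: str            # hash of the exact inputs; raw data can live elsewhere
    output: str                  # the decision or score produced
    human_reviewer: str | None   # who signed off, if anyone (None = fully automated)

def log_decision(system_id: str, model_version: str, inputs: dict,
                 output: str, human_reviewer: str | None = None) -> DecisionRecord:
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    record = DecisionRecord(
        system_id=system_id,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_digest=digest,
        output=output,
        human_reviewer=human_reviewer,
    )
    # In production this goes to append-only storage; record-keeping
    # obligations expect logs you cannot quietly rewrite.
    print(json.dumps(asdict(record)))
    return record
```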
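And on the third item, the disclosure really is small. A minimal sketch of the pattern - the wording, function, and placement are our own choices; the Act requires that users are informed, not this exact implementation:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def render_chat_messages(messages: list[dict], is_new_session: bool) -> list[dict]:
    """Prepend an Article 50-style disclosure at the start of each session."""
    if is_new_session:
        return [{"role": "system_notice", "text": AI_DISCLOSURE}] + messages
    return messages
```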
Our internal posture at Appify - on both our own AI products and our clients' - is that human-in-the-loop and guardrails carry the weight. The Act effectively codifies that posture in law for a specific slice of systems. For everything else, it remains the default, because productivity without oversight is easier to generate than to profit from.
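Reduced to its skeleton, that posture looks like the sketch below. The threshold and the person-impact rule are our convention, not the Act's text; Article 14 requires effective human oversight for high-risk systems, and this is one way to build it in from day one.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # what the model proposed
    confidence: float   # model's own or calibrated confidence score
    approved_by: str    # "auto", "pending", or the reviewer's identity

REVIEW_QUEUE: list[Decision] = []  # stand-in for a real human-review queue

def gate(outcome: str, confidence: float, impacts_a_person: bool) -> Decision | None:
    """Guardrail pattern: anything that affects a person goes to a human.

    Illustrative thresholds - the point is that high-impact outputs
    never ship without a named human signing off.
    """
    if not impacts_a_person and confidence >= 0.95:
        return Decision(outcome, confidence, approved_by="auto")
    # Park it for a named human; nothing ships until they approve.
    REVIEW_QUEUE.append(Decision(outcome, confidence, approved_by="pending"))
    return None
```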
If you would like us to map your AI systems against the Act, or to help build the governance infrastructure before the 2 August deadline, book a call. Our AI consulting and RAG systems practices track the regulatory movement weekly for our own clients and can tell you exactly where your product sits on the risk ladder.