Three weeks ago we told UK businesses the EU AI Act was four months away and most of them were not ready. The post landed harder than expected. Inboxes lit up. Legal teams forwarded it round with “see, told you” notes attached. Boards asked for briefings. A few readers got in touch to argue the deadline would slip anyway, so why panic.
Then the news cycle moved on. It always does. And the inconvenient reality — that 2 August 2026 is roughly 100 days away, that it is not moving, and that most UK businesses are still not ready — has been buried under three weeks of Brussels briefings, industry lobbying letters, and a fresh round of “the Act is being watered down” speculation.
This is the update. What has actually moved. What has not. And why, if anything, the picture in late April makes the three-week-old plan more urgent, not less.
The Delay Campaign Is Real. Betting On It Is Not A Strategy.
Let us start with the thing everyone wants to be true. Industry coalitions, several national governments, and a reliable chorus of commentators have spent the last several months arguing that the August 2026 high-risk deadline should slip. The arguments are familiar: standards are not ready, guidance is late, the compliance burden is crushing, Europe is falling behind on AI investment, the Draghi report said so.
Some of those arguments are credible. Most are self-interested. The important thing to notice is the Commission’s posture through all of it: calm, unmoved, and pointed. Officials have publicly restated that the timeline stands. Where pressure has produced movement, it has been on simplification of reporting and documentation burdens, not on postponement of the substantive obligations. Those are very different things.
Here is what to actually expect from the delay campaign over the next 100 days:
- More lobbying letters. Trade bodies will keep writing them. They will get polite responses. Some narrow, technical elements may get re-scoped.
- Louder political noise from certain member states. Some capitals are genuinely uncomfortable with the Act’s prescriptive design. The Council can grumble. It cannot unilaterally move a deadline baked into an already-enacted regulation.
- Targeted extensions, not blanket ones. If any adjustment lands before August, it will almost certainly be narrow — an extended transition for specific high-risk categories where harmonised standards are demonstrably not ready, or a lighter reporting cadence for SMEs. Not a wholesale pause.
- No relief for UK businesses. Even in the unlikely event of a broader postponement, your EU customers do not give you a postponement. Your procurement teams in Dublin and Amsterdam will still ask for your conformity documentation on the date they always intended to.
Planning for an EU AI Act delay is not a strategy. It is a hope, and a fairly weak one. The businesses that treat it as a strategy will be the ones still building governance in September, under pressure, at four times the cost.
Build for the deadline that is on the statute book. If relief shows up, you will have wasted a bit of effort. If it does not, you will have a functioning governance framework while your competitors are improvising.
The Simplification Package: What It Actually Changes
The noise about a “watered-down” AI Act mostly traces back to the Commission’s broader digital simplification agenda — an effort to reduce the paperwork burden sitting on top of GDPR, the Data Act, the AI Act, and the Data Governance Act all at once. It is a real initiative. It is also not what most commentators think it is.
Here is the practical shape of it, as far as it has landed:
- Documentation templates, not exemptions. The bulk of simplification is about standardising the format of technical documentation, risk assessments, and post-market monitoring, so providers are not reinventing the wheel. This is a gift. It is not a reduction in what you have to do.
- Clearer SME pathways. Smaller businesses are getting lighter conformity paths for some obligations, plus reduced fees for accessing regulatory sandboxes. If you are under the SME threshold, dig into this. If you are not, it does not help you.
- A single entry point for reporting. The Commission is pushing towards a unified portal where providers can register high-risk systems, submit documentation, and handle serious incident reporting in one place rather than through multiple national authorities. Useful plumbing. Changes nothing about whether you are in scope.
- No change to risk tiers, no change to prohibitions, no change to the deadline. The things that actually decide whether you are compliant on 2 August remain untouched.
If you read a headline this month claiming the EU AI Act is being “softened,” read the actual policy document it cites. Nine times out of ten the softening is a template you can download, not a rule you can ignore.
Harmonised Standards: Still Behind
This is the most consequential real problem, and it gets the least coverage. CEN-CENELEC’s JTC 21 technical committee has been producing the harmonised standards that turn the Act’s general requirements into specific, testable technical criteria. Standards on risk management, data quality, transparency, robustness, cybersecurity, and quality management systems all sit in this package.
The standards are not all done. Drafts have been published for some areas; others are still in working group revisions; a few have slipped into revision cycles that will almost certainly not conclude before August. The Commission is aware. The Commission’s position is roughly: where harmonised standards are not yet available, providers can still demonstrate conformity through alternative means, and the AI Office will publish guidance where it needs to.
In practice, this creates two compliance modes right now:
- Where a harmonised standard exists and is close to final — design against it. It is the cleanest conformity presumption available.
- Where no harmonised standard exists — document against the Act’s requirements directly, using Commission guidance, ISO/IEC 42001, ISO/IEC 23894, and NIST’s AI RMF as scaffolding. It is more work and more judgement. It is still compliance.
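The two modes above are easy to track once you write them down. Here is a minimal sketch in Python — the requirement areas come from this post, but the availability flags are illustrative placeholders, not the real CEN-CENELEC JTC 21 publication status, which you should check for your own categories:

```python
# Illustrative tracker for the two compliance modes described above.
# NOTE: the "harmonised_standard" flags are placeholders, not the
# actual JTC 21 status - verify per requirement area.
REQUIREMENT_AREAS = {
    "risk management":    {"harmonised_standard": True},
    "data quality":       {"harmonised_standard": False},
    "transparency":       {"harmonised_standard": False},
    "robustness":         {"harmonised_standard": True},
    "cybersecurity":      {"harmonised_standard": False},
    "quality management": {"harmonised_standard": True},
}

def conformity_route(area: str) -> str:
    """Pick the compliance mode for a requirement area."""
    if REQUIREMENT_AREAS[area]["harmonised_standard"]:
        # Mode 1: design against the standard for the conformity presumption.
        return "design against harmonised standard"
    # Mode 2: document against the Act directly, scaffolded by
    # ISO/IEC 42001, ISO/IEC 23894 and the NIST AI RMF.
    return "document against Act + ISO/IEC 42001 / 23894 / NIST AI RMF"
```

Run it over your own requirement map and you get, per area, a named owner of mode one or mode two — which is exactly the decision the standards gap forces on you.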
Do not wait for the standards package to complete before starting. The providers who wait will have less time, not more — and the late-arriving standards will still require design work against them when they land.
Guidance Is Dropping. Templates Are Not Enough.
The AI Office has been shipping guidance at a steady cadence through the first part of 2026: clarifications on the scope of prohibited practices, notes on provider-versus-deployer boundaries, direction on high-risk classification for borderline systems, and refreshed material on the AI literacy obligations that took effect in February 2025.
None of this is optional reading for a governance lead. Two specific pieces deserve attention right now:
- Provider vs deployer is the most misunderstood distinction in the Act. If you fine-tune a third-party model, you may have stepped into provider obligations whether you realise it or not. If you buy a high-risk AI system off the shelf and deploy it, you still have deployer obligations, which are meaningful. Most UK businesses are deployers and do not yet know what that means for them.
- AI literacy under Article 4 is already in force. It has been since February 2025. If you cannot currently demonstrate that your staff who use AI have received proportionate training, you are already in breach on that dimension — regardless of whether you have high-risk systems or not. This is the single cheapest, fastest compliance action you can take, and almost no one has taken it.
The Plumbing Problem: Notified Bodies and National Authorities
One reason the delay campaign has traction is that the enforcement infrastructure is still being built while the deadline approaches. Two pieces matter:
Notified bodies. For certain high-risk AI categories, conformity assessment has to be performed by an accredited third-party body, not self-declared. Those bodies are being designated across member states, but capacity is genuinely tight. If your systems fall into a category that requires third-party assessment, do not assume you can book a slot in July 2026. Queues are forming now, and the best bodies will be oversubscribed well before summer.
National competent authorities. Each member state has to name the authorities responsible for market surveillance and enforcement in its jurisdiction. Most have done so; a handful have been late. For a UK business, what this means practically is that if an EU customer challenges your system, the authority you will deal with depends on which market the challenge originates in. Knowing which authority will regulate your system in your top three EU markets is a two-hour research task. Do it.
Signals From the GPAI Tier
The general-purpose AI provider obligations have been in force since August 2025. We now have roughly eight months of evidence about how the AI Office actually engages with providers, and the signals are instructive for anyone preparing for the high-risk tier:
- The Code of Practice is treated as close to mandatory. Providers who signed on are enjoying a smoother regulatory relationship. Providers who did not are not being ignored — they are being asked to demonstrate equivalent compliance through their own evidence, which is more work, not less. The informal message to high-risk providers: whatever codes of conduct emerge for your sector, signing them will probably be the path of least friction.
- Documentation quality matters more than documentation volume. The AI Office has shown no patience for hundred-page risk assessments that do not actually analyse risks. A shorter, sharper, genuinely reasoned document beats a long ceremonial one every time.
- Transparency obligations have teeth. The enforcement dialogue has repeatedly centred on training data documentation, incident reporting, and user-facing disclosures — exactly the areas high-risk providers will need to have in place.
If you want a preview of how the high-risk tier will be enforced, watch how the GPAI tier has been enforced. Style, not just substance, matters.
The UK Picture Has Shifted Slightly
When we wrote v.1, we described Westminster’s approach as principles-based, sector-led, and unfinished. That description is still broadly right, with two important updates:
- The Data (Use and Access) Act is on the statute book. It reforms aspects of automated decision-making under UK GDPR and adjusts some of the rules around research and data re-use. It does not remove the need for most businesses to comply with the EU AI Act in parallel. It does move UK data protection practice in a direction that is mostly, but not entirely, compatible with EU expectations. Your DPO and your AI governance lead need to have read it.
- The UK AI Bill has started to move. A more substantive UK AI regulation bill is now progressing, with the stated aim of giving the AI Safety Institute (now the AI Security Institute) statutory footing and introducing targeted obligations for the largest foundation model developers. This is narrower than the EU AI Act by design. For most UK businesses, it will not significantly change their compliance surface — but the direction of travel is clear: Westminster is moving towards more rules, not fewer.
The dual-compliance reality from the original post has, if anything, sharpened. A year from now, UK businesses with EU exposure will be running against the EU AI Act, targeted UK foundation-model rules, sector regulator guidance, and reformed UK GDPR simultaneously. Building one governance framework that covers all of them is still the only sensible answer.
The 100-Day Plan (Revised)
The six-step plan from the original post still holds. Here is what changes when you run it in late April rather than early April.
Week 1 (this week): Finish the audit if you have not started it
If you still do not have a complete register of the AI systems touching your business — including every embedded AI feature your SaaS vendors quietly enabled in the last two quarters — you are already behind. This is no longer a nice-to-have. It is the foundation of every subsequent compliance action. A focused cross-functional week gets it done.
Weeks 2–3: Classify, and book your notified-body conversations
Risk-tier every system. For anything that lands in high-risk territory and requires third-party assessment, contact notified bodies now. Do not assume availability in July. If your preferred body cannot take you on, you want to know in April, not in June.
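A first-pass triage over the register can be this simple. To be clear: this is routing logic, not legal classification — the prohibited and high-risk category lists below are abbreviated illustrations, and Annex III matching plus the Article 6 derogations need actual legal review:

```python
# Rough first-pass triage, NOT a legal determination. The category
# sets are illustrative and incomplete.
PROHIBITED_USES = {"social scoring", "emotion recognition at work"}
ANNEX_III_AREAS = {"employment", "credit scoring", "education",
                   "essential services", "law enforcement"}

def triage(use_case: str, area: str, interacts_with_people: bool) -> str:
    """Route each registered system to its next compliance conversation."""
    if use_case in PROHIBITED_USES:
        return "prohibited - stop and escalate"
    if area in ANNEX_III_AREAS:
        return "likely high-risk - legal review + notified-body check"
    if interacts_with_people:
        return "limited risk - transparency duties"
    return "minimal risk - keep in register and monitor"
```

Anything the triage routes to "likely high-risk" is a system you should be discussing with a notified body in April, which is the whole point of doing this in weeks 2–3.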
Weeks 4–6: Close the AI literacy gap
Design and roll out proportionate AI literacy training. For most organisations, this is a blend of general-audience content (what AI is, what the Act requires, what employees should and should not do) and role-specific training for people who build, procure, or deploy AI systems. This is already an obligation, it is relatively cheap, and it buys you defensibility on an article that the AI Office is watching.
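The defensibility part is the record-keeping: who has been trained, to what depth, and who has not. A hedged sketch — the role tracks and log structure are hypothetical, chosen to reflect the general-audience versus role-specific split described above:

```python
from datetime import date

# Hypothetical training log. "Proportionate" means role-specific
# depth, which is why the role and module fields matter.
training_log = {
    "a.smith": {"role": "engineer",    "module": "builder track",
                "completed": date(2026, 5, 4)},
    "b.jones": {"role": "procurement", "module": "buyer track",
                "completed": date(2026, 5, 6)},
    "c.patel": {"role": "general",     "module": "awareness",
                "completed": None},
}

def literacy_gap(log: dict) -> list[str]:
    """People who cannot yet evidence Article 4 training."""
    return [who for who, rec in log.items() if rec["completed"] is None]
```

When `literacy_gap` returns an empty list, you have the evidence trail this step is meant to buy you.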
Weeks 7–10: Technical documentation and human oversight
For every high-risk system, produce the technical documentation package: data provenance and quality, intended purpose, risk assessment, testing and validation, known limitations, monitoring plan. Design and implement the human oversight controls that will actually sit in the production workflow — not a ceremonial approval button, real review.
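The elements listed above make a natural completeness check per system. A sketch — the element names mirror this post, not the Annex IV legal headings verbatim, and the draft package shown is hypothetical:

```python
# The documentation elements from the plan, as a completeness gate.
# Names follow this post, not the Annex IV headings verbatim.
REQUIRED_DOCS = [
    "data provenance and quality",
    "intended purpose",
    "risk assessment",
    "testing and validation",
    "known limitations",
    "monitoring plan",
]

def missing_docs(package: dict[str, str]) -> list[str]:
    """Elements absent or left empty in a system's documentation package."""
    return [d for d in REQUIRED_DOCS if not package.get(d, "").strip()]

# Hypothetical in-progress package for one high-risk system.
draft = {
    "intended purpose": "shortlist job applicants for human review",
    "risk assessment":  "v0.3, see governance drive",
}
```

Run the gate per high-risk system in your register and the output is your weeks 7–10 work queue, element by element.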
Weeks 11–13: Post-market monitoring and incident reporting
Before the deadline, you need a live post-market monitoring capability and a serious-incident reporting process. This is the ongoing operational commitment the Act demands. The businesses that leave this to August will be trying to design monitoring infrastructure while simultaneously running it, which rarely ends well.
Week 14 onwards: Governance as a running capability
By early August, AI governance should no longer be a project. It should be a function — an assigned owner, a working register, a live monitoring capability, a quarterly review cadence, and a clear escalation path to the board. The Act’s requirements do not end on 2 August. That is when they start being enforceable.
The Honest Version
Most of what we said three weeks ago still holds. The Act is not softening. The deadline is not slipping in any way that helps you. The UK is not going to produce an equivalent framework that lets you sidestep it. The businesses that invest in governance now are still the ones that will be deploying AI faster and more confidently than their competitors by the autumn.
What is different is that you have fewer days. Three weeks ago we said four months. Today it is just over three. By the time the next update in this series lands, it will be closer to two. The scope of what can be done well inside a compressed timeline narrows with every passing week. The scope of what costs four times what it should to do in a panic grows.
The practical ask has not changed. Start the audit this week if you have not. Classify your systems. Open conversations with notified bodies if third-party assessment is on your horizon. Close your AI literacy gap. Assign accountability. Build the register. Document ruthlessly. Design for oversight.
The regulators are not going to be impressed by how surprised you were.
100 days is enough time, if you start now.
Digital by Default helps UK businesses run the full EU AI Act readiness programme — audit, classification, documentation, oversight design, and governance stand-up — at a pace that still hits August. Talk to us before the queue forms.
Get in Touch